Mar 20 08:33:12.555959 master-0 systemd[1]: Starting Kubernetes Kubelet...
Mar 20 08:33:13.222755 master-0 kubenswrapper[4041]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 20 08:33:13.222755 master-0 kubenswrapper[4041]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Mar 20 08:33:13.222755 master-0 kubenswrapper[4041]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 20 08:33:13.222755 master-0 kubenswrapper[4041]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 20 08:33:13.222755 master-0 kubenswrapper[4041]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 20 08:33:13.222755 master-0 kubenswrapper[4041]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 20 08:33:13.223745 master-0 kubenswrapper[4041]: I0320 08:33:13.223365 4041 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 20 08:33:13.227160 master-0 kubenswrapper[4041]: W0320 08:33:13.227114 4041 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 20 08:33:13.227160 master-0 kubenswrapper[4041]: W0320 08:33:13.227144 4041 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 20 08:33:13.227160 master-0 kubenswrapper[4041]: W0320 08:33:13.227150 4041 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 20 08:33:13.227160 master-0 kubenswrapper[4041]: W0320 08:33:13.227155 4041 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 20 08:33:13.227160 master-0 kubenswrapper[4041]: W0320 08:33:13.227159 4041 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 20 08:33:13.227160 master-0 kubenswrapper[4041]: W0320 08:33:13.227166 4041 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 20 08:33:13.227160 master-0 kubenswrapper[4041]: W0320 08:33:13.227171 4041 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 20 08:33:13.227160 master-0 kubenswrapper[4041]: W0320 08:33:13.227176 4041 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 20 08:33:13.227535 master-0 kubenswrapper[4041]: W0320 08:33:13.227181 4041 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 20 08:33:13.227535 master-0 kubenswrapper[4041]: W0320 08:33:13.227186 4041 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 20 08:33:13.227535 master-0 kubenswrapper[4041]: W0320 08:33:13.227191 4041 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 20 08:33:13.227535 master-0 kubenswrapper[4041]: W0320 08:33:13.227195 4041 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 20 08:33:13.227535 master-0 kubenswrapper[4041]: W0320 08:33:13.227200 4041 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 20 08:33:13.227535 master-0 kubenswrapper[4041]: W0320 08:33:13.227206 4041 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 20 08:33:13.227535 master-0 kubenswrapper[4041]: W0320 08:33:13.227211 4041 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 20 08:33:13.227535 master-0 kubenswrapper[4041]: W0320 08:33:13.227215 4041 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 20 08:33:13.227535 master-0 kubenswrapper[4041]: W0320 08:33:13.227220 4041 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 20 08:33:13.227535 master-0 kubenswrapper[4041]: W0320 08:33:13.227239 4041 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 20 08:33:13.227535 master-0 kubenswrapper[4041]: W0320 08:33:13.227244 4041 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 20 08:33:13.227535 master-0 kubenswrapper[4041]: W0320 08:33:13.227248 4041 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 20 08:33:13.227535 master-0 kubenswrapper[4041]: W0320 08:33:13.227252 4041 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 20 08:33:13.227535 master-0 kubenswrapper[4041]: W0320 08:33:13.227257 4041 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 20 08:33:13.227535 master-0 kubenswrapper[4041]: W0320 08:33:13.227275 4041 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 20 08:33:13.227535 master-0 kubenswrapper[4041]: W0320 08:33:13.227279 4041 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 20 08:33:13.227535 master-0 kubenswrapper[4041]: W0320 08:33:13.227284 4041 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 20 08:33:13.227535 master-0 kubenswrapper[4041]: W0320 08:33:13.227288 4041 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 20 08:33:13.227535 master-0 kubenswrapper[4041]: W0320 08:33:13.227292 4041 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 20 08:33:13.227535 master-0 kubenswrapper[4041]: W0320 08:33:13.227296 4041 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 20 08:33:13.227970 master-0 kubenswrapper[4041]: W0320 08:33:13.227301 4041 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 20 08:33:13.227970 master-0 kubenswrapper[4041]: W0320 08:33:13.227305 4041 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 20 08:33:13.227970 master-0 kubenswrapper[4041]: W0320 08:33:13.227310 4041 feature_gate.go:330] unrecognized feature gate: Example
Mar 20 08:33:13.227970 master-0 kubenswrapper[4041]: W0320 08:33:13.227314 4041 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 20 08:33:13.227970 master-0 kubenswrapper[4041]: W0320 08:33:13.227319 4041 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 20 08:33:13.227970 master-0 kubenswrapper[4041]: W0320 08:33:13.227324 4041 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 20 08:33:13.227970 master-0 kubenswrapper[4041]: W0320 08:33:13.227329 4041 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 20 08:33:13.227970 master-0 kubenswrapper[4041]: W0320 08:33:13.227333 4041 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 20 08:33:13.227970 master-0 kubenswrapper[4041]: W0320 08:33:13.227337 4041 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 20 08:33:13.227970 master-0 kubenswrapper[4041]: W0320 08:33:13.227341 4041 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 20 08:33:13.227970 master-0 kubenswrapper[4041]: W0320 08:33:13.227346 4041 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 20 08:33:13.227970 master-0 kubenswrapper[4041]: W0320 08:33:13.227350 4041 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 20 08:33:13.227970 master-0 kubenswrapper[4041]: W0320 08:33:13.227356 4041 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 20 08:33:13.227970 master-0 kubenswrapper[4041]: W0320 08:33:13.227361 4041 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 20 08:33:13.227970 master-0 kubenswrapper[4041]: W0320 08:33:13.227366 4041 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 20 08:33:13.227970 master-0 kubenswrapper[4041]: W0320 08:33:13.227370 4041 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 20 08:33:13.227970 master-0 kubenswrapper[4041]: W0320 08:33:13.227377 4041 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 20 08:33:13.227970 master-0 kubenswrapper[4041]: W0320 08:33:13.227384 4041 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 20 08:33:13.227970 master-0 kubenswrapper[4041]: W0320 08:33:13.227389 4041 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 20 08:33:13.227970 master-0 kubenswrapper[4041]: W0320 08:33:13.227394 4041 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 20 08:33:13.228401 master-0 kubenswrapper[4041]: W0320 08:33:13.227399 4041 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 20 08:33:13.228401 master-0 kubenswrapper[4041]: W0320 08:33:13.227404 4041 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 20 08:33:13.228401 master-0 kubenswrapper[4041]: W0320 08:33:13.227408 4041 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 20 08:33:13.228401 master-0 kubenswrapper[4041]: W0320 08:33:13.227415 4041 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 20 08:33:13.228401 master-0 kubenswrapper[4041]: W0320 08:33:13.227421 4041 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 20 08:33:13.228401 master-0 kubenswrapper[4041]: W0320 08:33:13.227426 4041 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 20 08:33:13.228401 master-0 kubenswrapper[4041]: W0320 08:33:13.227431 4041 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 20 08:33:13.228401 master-0 kubenswrapper[4041]: W0320 08:33:13.227436 4041 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 20 08:33:13.228401 master-0 kubenswrapper[4041]: W0320 08:33:13.227442 4041 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 20 08:33:13.228401 master-0 kubenswrapper[4041]: W0320 08:33:13.227446 4041 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 20 08:33:13.228401 master-0 kubenswrapper[4041]: W0320 08:33:13.227451 4041 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 20 08:33:13.228401 master-0 kubenswrapper[4041]: W0320 08:33:13.227456 4041 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 20 08:33:13.228401 master-0 kubenswrapper[4041]: W0320 08:33:13.227461 4041 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 20 08:33:13.228401 master-0 kubenswrapper[4041]: W0320 08:33:13.227466 4041 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 20 08:33:13.228401 master-0 kubenswrapper[4041]: W0320 08:33:13.227470 4041 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 20 08:33:13.228401 master-0 kubenswrapper[4041]: W0320 08:33:13.227475 4041 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 20 08:33:13.228401 master-0 kubenswrapper[4041]: W0320 08:33:13.227480 4041 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 20 08:33:13.228401 master-0 kubenswrapper[4041]: W0320 08:33:13.227484 4041 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 20 08:33:13.228401 master-0 kubenswrapper[4041]: W0320 08:33:13.227489 4041 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 20 08:33:13.228891 master-0 kubenswrapper[4041]: W0320 08:33:13.227495 4041 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 20 08:33:13.228891 master-0 kubenswrapper[4041]: W0320 08:33:13.227499 4041 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 20 08:33:13.228891 master-0 kubenswrapper[4041]: W0320 08:33:13.227504 4041 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 20 08:33:13.228891 master-0 kubenswrapper[4041]: W0320 08:33:13.227511 4041 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 20 08:33:13.228891 master-0 kubenswrapper[4041]: W0320 08:33:13.227515 4041 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 20 08:33:13.228891 master-0 kubenswrapper[4041]: I0320 08:33:13.228557 4041 flags.go:64] FLAG: --address="0.0.0.0"
Mar 20 08:33:13.228891 master-0 kubenswrapper[4041]: I0320 08:33:13.228574 4041 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Mar 20 08:33:13.228891 master-0 kubenswrapper[4041]: I0320 08:33:13.228586 4041 flags.go:64] FLAG: --anonymous-auth="true"
Mar 20 08:33:13.228891 master-0 kubenswrapper[4041]: I0320 08:33:13.228594 4041 flags.go:64] FLAG: --application-metrics-count-limit="100"
Mar 20 08:33:13.228891 master-0 kubenswrapper[4041]: I0320 08:33:13.228601 4041 flags.go:64] FLAG: --authentication-token-webhook="false"
Mar 20 08:33:13.228891 master-0 kubenswrapper[4041]: I0320 08:33:13.228606 4041 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Mar 20 08:33:13.228891 master-0 kubenswrapper[4041]: I0320 08:33:13.228614 4041 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Mar 20 08:33:13.228891 master-0 kubenswrapper[4041]: I0320 08:33:13.228621 4041 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Mar 20 08:33:13.228891 master-0 kubenswrapper[4041]: I0320 08:33:13.228627 4041 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Mar 20 08:33:13.228891 master-0 kubenswrapper[4041]: I0320 08:33:13.228633 4041 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Mar 20 08:33:13.228891 master-0 kubenswrapper[4041]: I0320 08:33:13.228639 4041 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Mar 20 08:33:13.228891 master-0 kubenswrapper[4041]: I0320 08:33:13.228645 4041 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Mar 20 08:33:13.228891 master-0 kubenswrapper[4041]: I0320 08:33:13.228650 4041 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Mar 20 08:33:13.228891 master-0 kubenswrapper[4041]: I0320 08:33:13.228656 4041 flags.go:64] FLAG: --cgroup-root=""
Mar 20 08:33:13.228891 master-0 kubenswrapper[4041]: I0320 08:33:13.228661 4041 flags.go:64] FLAG: --cgroups-per-qos="true"
Mar 20 08:33:13.228891 master-0 kubenswrapper[4041]: I0320 08:33:13.228667 4041 flags.go:64] FLAG: --client-ca-file=""
Mar 20 08:33:13.228891 master-0 kubenswrapper[4041]: I0320 08:33:13.228673 4041 flags.go:64] FLAG: --cloud-config=""
Mar 20 08:33:13.228891 master-0 kubenswrapper[4041]: I0320 08:33:13.228678 4041 flags.go:64] FLAG: --cloud-provider=""
Mar 20 08:33:13.228891 master-0 kubenswrapper[4041]: I0320 08:33:13.228683 4041 flags.go:64] FLAG: --cluster-dns="[]"
Mar 20 08:33:13.229645 master-0 kubenswrapper[4041]: I0320 08:33:13.228690 4041 flags.go:64] FLAG: --cluster-domain=""
Mar 20 08:33:13.229645 master-0 kubenswrapper[4041]: I0320 08:33:13.228695 4041 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Mar 20 08:33:13.229645 master-0 kubenswrapper[4041]: I0320 08:33:13.228710 4041 flags.go:64] FLAG: --config-dir=""
Mar 20 08:33:13.229645 master-0 kubenswrapper[4041]: I0320 08:33:13.228716 4041 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Mar 20 08:33:13.229645 master-0 kubenswrapper[4041]: I0320 08:33:13.228722 4041 flags.go:64] FLAG: --container-log-max-files="5"
Mar 20 08:33:13.229645 master-0 kubenswrapper[4041]: I0320 08:33:13.228730 4041 flags.go:64] FLAG: --container-log-max-size="10Mi"
Mar 20 08:33:13.229645 master-0 kubenswrapper[4041]: I0320 08:33:13.228736 4041 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Mar 20 08:33:13.229645 master-0 kubenswrapper[4041]: I0320 08:33:13.228743 4041 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Mar 20 08:33:13.229645 master-0 kubenswrapper[4041]: I0320 08:33:13.228749 4041 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Mar 20 08:33:13.229645 master-0 kubenswrapper[4041]: I0320 08:33:13.228755 4041 flags.go:64] FLAG: --contention-profiling="false"
Mar 20 08:33:13.229645 master-0 kubenswrapper[4041]: I0320 08:33:13.228761 4041 flags.go:64] FLAG: --cpu-cfs-quota="true"
Mar 20 08:33:13.229645 master-0 kubenswrapper[4041]: I0320 08:33:13.228766 4041 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Mar 20 08:33:13.229645 master-0 kubenswrapper[4041]: I0320 08:33:13.228773 4041 flags.go:64] FLAG: --cpu-manager-policy="none"
Mar 20 08:33:13.229645 master-0 kubenswrapper[4041]: I0320 08:33:13.228785 4041 flags.go:64] FLAG: --cpu-manager-policy-options=""
Mar 20 08:33:13.229645 master-0 kubenswrapper[4041]: I0320 08:33:13.228792 4041 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Mar 20 08:33:13.229645 master-0 kubenswrapper[4041]: I0320 08:33:13.228798 4041 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Mar 20 08:33:13.229645 master-0 kubenswrapper[4041]: I0320 08:33:13.228804 4041 flags.go:64] FLAG: --enable-debugging-handlers="true"
Mar 20 08:33:13.229645 master-0 kubenswrapper[4041]: I0320 08:33:13.228809 4041 flags.go:64] FLAG: --enable-load-reader="false"
Mar 20 08:33:13.229645 master-0 kubenswrapper[4041]: I0320 08:33:13.228815 4041 flags.go:64] FLAG: --enable-server="true"
Mar 20 08:33:13.229645 master-0 kubenswrapper[4041]: I0320 08:33:13.228820 4041 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Mar 20 08:33:13.229645 master-0 kubenswrapper[4041]: I0320 08:33:13.228828 4041 flags.go:64] FLAG: --event-burst="100"
Mar 20 08:33:13.229645 master-0 kubenswrapper[4041]: I0320 08:33:13.228834 4041 flags.go:64] FLAG: --event-qps="50"
Mar 20 08:33:13.229645 master-0 kubenswrapper[4041]: I0320 08:33:13.228839 4041 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Mar 20 08:33:13.229645 master-0 kubenswrapper[4041]: I0320 08:33:13.228845 4041 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Mar 20 08:33:13.229645 master-0 kubenswrapper[4041]: I0320 08:33:13.228850 4041 flags.go:64] FLAG: --eviction-hard=""
Mar 20 08:33:13.230446 master-0 kubenswrapper[4041]: I0320 08:33:13.228858 4041 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Mar 20 08:33:13.230446 master-0 kubenswrapper[4041]: I0320 08:33:13.228864 4041 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Mar 20 08:33:13.230446 master-0 kubenswrapper[4041]: I0320 08:33:13.228870 4041 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Mar 20 08:33:13.230446 master-0 kubenswrapper[4041]: I0320 08:33:13.228876 4041 flags.go:64] FLAG: --eviction-soft=""
Mar 20 08:33:13.230446 master-0 kubenswrapper[4041]: I0320 08:33:13.228881 4041 flags.go:64] FLAG: --eviction-soft-grace-period=""
Mar 20 08:33:13.230446 master-0 kubenswrapper[4041]: I0320 08:33:13.228886 4041 flags.go:64] FLAG: --exit-on-lock-contention="false"
Mar 20 08:33:13.230446 master-0 kubenswrapper[4041]: I0320 08:33:13.228892 4041 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Mar 20 08:33:13.230446 master-0 kubenswrapper[4041]: I0320 08:33:13.228897 4041 flags.go:64] FLAG: --experimental-mounter-path=""
Mar 20 08:33:13.230446 master-0 kubenswrapper[4041]: I0320 08:33:13.228902 4041 flags.go:64] FLAG: --fail-cgroupv1="false"
Mar 20 08:33:13.230446 master-0 kubenswrapper[4041]: I0320 08:33:13.228907 4041 flags.go:64] FLAG: --fail-swap-on="true"
Mar 20 08:33:13.230446 master-0 kubenswrapper[4041]: I0320 08:33:13.228913 4041 flags.go:64] FLAG: --feature-gates=""
Mar 20 08:33:13.230446 master-0 kubenswrapper[4041]: I0320 08:33:13.228920 4041 flags.go:64] FLAG: --file-check-frequency="20s"
Mar 20 08:33:13.230446 master-0 kubenswrapper[4041]: I0320 08:33:13.228925 4041 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Mar 20 08:33:13.230446 master-0 kubenswrapper[4041]: I0320 08:33:13.228931 4041 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Mar 20 08:33:13.230446 master-0 kubenswrapper[4041]: I0320 08:33:13.228937 4041 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Mar 20 08:33:13.230446 master-0 kubenswrapper[4041]: I0320 08:33:13.228942 4041 flags.go:64] FLAG: --healthz-port="10248"
Mar 20 08:33:13.230446 master-0 kubenswrapper[4041]: I0320 08:33:13.228948 4041 flags.go:64] FLAG: --help="false"
Mar 20 08:33:13.230446 master-0 kubenswrapper[4041]: I0320 08:33:13.228953 4041 flags.go:64] FLAG: --hostname-override=""
Mar 20 08:33:13.230446 master-0 kubenswrapper[4041]: I0320 08:33:13.228959 4041 flags.go:64] FLAG: --housekeeping-interval="10s"
Mar 20 08:33:13.230446 master-0 kubenswrapper[4041]: I0320 08:33:13.228967 4041 flags.go:64] FLAG: --http-check-frequency="20s"
Mar 20 08:33:13.230446 master-0 kubenswrapper[4041]: I0320 08:33:13.228975 4041 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Mar 20 08:33:13.230446 master-0 kubenswrapper[4041]: I0320 08:33:13.228980 4041 flags.go:64] FLAG: --image-credential-provider-config=""
Mar 20 08:33:13.230446 master-0 kubenswrapper[4041]: I0320 08:33:13.228986 4041 flags.go:64] FLAG: --image-gc-high-threshold="85"
Mar 20 08:33:13.230446 master-0 kubenswrapper[4041]: I0320 08:33:13.228992 4041 flags.go:64] FLAG: --image-gc-low-threshold="80"
Mar 20 08:33:13.230446 master-0 kubenswrapper[4041]: I0320 08:33:13.228997 4041 flags.go:64] FLAG: --image-service-endpoint=""
Mar 20 08:33:13.231054 master-0 kubenswrapper[4041]: I0320 08:33:13.229002 4041 flags.go:64] FLAG: --kernel-memcg-notification="false"
Mar 20 08:33:13.231054 master-0 kubenswrapper[4041]: I0320 08:33:13.229008 4041 flags.go:64] FLAG: --kube-api-burst="100"
Mar 20 08:33:13.231054 master-0 kubenswrapper[4041]: I0320 08:33:13.229013 4041 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Mar 20 08:33:13.231054 master-0 kubenswrapper[4041]: I0320 08:33:13.229019 4041 flags.go:64] FLAG: --kube-api-qps="50"
Mar 20 08:33:13.231054 master-0 kubenswrapper[4041]: I0320 08:33:13.229024 4041 flags.go:64] FLAG: --kube-reserved=""
Mar 20 08:33:13.231054 master-0 kubenswrapper[4041]: I0320 08:33:13.229030 4041 flags.go:64] FLAG: --kube-reserved-cgroup=""
Mar 20 08:33:13.231054 master-0 kubenswrapper[4041]: I0320 08:33:13.229035 4041 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Mar 20 08:33:13.231054 master-0 kubenswrapper[4041]: I0320 08:33:13.229041 4041 flags.go:64] FLAG: --kubelet-cgroups=""
Mar 20 08:33:13.231054 master-0 kubenswrapper[4041]: I0320 08:33:13.229046 4041 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Mar 20 08:33:13.231054 master-0 kubenswrapper[4041]: I0320 08:33:13.229051 4041 flags.go:64] FLAG: --lock-file=""
Mar 20 08:33:13.231054 master-0 kubenswrapper[4041]: I0320 08:33:13.229056 4041 flags.go:64] FLAG: --log-cadvisor-usage="false"
Mar 20 08:33:13.231054 master-0 kubenswrapper[4041]: I0320 08:33:13.229062 4041 flags.go:64] FLAG: --log-flush-frequency="5s"
Mar 20 08:33:13.231054 master-0 kubenswrapper[4041]: I0320 08:33:13.229067 4041 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Mar 20 08:33:13.231054 master-0 kubenswrapper[4041]: I0320 08:33:13.229076 4041 flags.go:64] FLAG: --log-json-split-stream="false"
Mar 20 08:33:13.231054 master-0 kubenswrapper[4041]: I0320 08:33:13.229081 4041 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Mar 20 08:33:13.231054 master-0 kubenswrapper[4041]: I0320 08:33:13.229087 4041 flags.go:64] FLAG: --log-text-split-stream="false"
Mar 20 08:33:13.231054 master-0 kubenswrapper[4041]: I0320 08:33:13.229092 4041 flags.go:64] FLAG: --logging-format="text"
Mar 20 08:33:13.231054 master-0 kubenswrapper[4041]: I0320 08:33:13.229097 4041 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Mar 20 08:33:13.231054 master-0 kubenswrapper[4041]: I0320 08:33:13.229104 4041 flags.go:64] FLAG: --make-iptables-util-chains="true"
Mar 20 08:33:13.231054 master-0 kubenswrapper[4041]: I0320 08:33:13.229109 4041 flags.go:64] FLAG: --manifest-url=""
Mar 20 08:33:13.231054 master-0 kubenswrapper[4041]: I0320 08:33:13.229114 4041 flags.go:64] FLAG: --manifest-url-header=""
Mar 20 08:33:13.231054 master-0 kubenswrapper[4041]: I0320 08:33:13.229122 4041 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Mar 20 08:33:13.231054 master-0 kubenswrapper[4041]: I0320 08:33:13.229127 4041 flags.go:64] FLAG: --max-open-files="1000000"
Mar 20 08:33:13.231054 master-0 kubenswrapper[4041]: I0320 08:33:13.229134 4041 flags.go:64] FLAG: --max-pods="110"
Mar 20 08:33:13.231054 master-0 kubenswrapper[4041]: I0320 08:33:13.229140 4041 flags.go:64] FLAG: --maximum-dead-containers="-1"
Mar 20 08:33:13.231580 master-0 kubenswrapper[4041]: I0320 08:33:13.229145 4041 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Mar 20 08:33:13.231580 master-0 kubenswrapper[4041]: I0320 08:33:13.229150 4041 flags.go:64] FLAG: --memory-manager-policy="None"
Mar 20 08:33:13.231580 master-0 kubenswrapper[4041]: I0320 08:33:13.229158 4041 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Mar 20 08:33:13.231580 master-0 kubenswrapper[4041]: I0320 08:33:13.229164 4041 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Mar 20 08:33:13.231580 master-0 kubenswrapper[4041]: I0320 08:33:13.229169 4041 flags.go:64] FLAG: --node-ip="192.168.32.10"
Mar 20 08:33:13.231580 master-0 kubenswrapper[4041]: I0320 08:33:13.229174 4041 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Mar 20 08:33:13.231580 master-0 kubenswrapper[4041]: I0320 08:33:13.229188 4041 flags.go:64] FLAG: --node-status-max-images="50"
Mar 20 08:33:13.231580 master-0 kubenswrapper[4041]: I0320 08:33:13.229194 4041 flags.go:64] FLAG: --node-status-update-frequency="10s"
Mar 20 08:33:13.231580 master-0 kubenswrapper[4041]: I0320 08:33:13.229199 4041 flags.go:64] FLAG: --oom-score-adj="-999"
Mar 20 08:33:13.231580 master-0 kubenswrapper[4041]: I0320 08:33:13.229206 4041 flags.go:64] FLAG: --pod-cidr=""
Mar 20 08:33:13.231580 master-0 kubenswrapper[4041]: I0320 08:33:13.229211 4041 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:53d66d524ca3e787d8dbe30dbc4d9b8612c9cebd505ccb4375a8441814e85422"
Mar 20 08:33:13.231580 master-0 kubenswrapper[4041]: I0320 08:33:13.229221 4041 flags.go:64] FLAG: --pod-manifest-path=""
Mar 20 08:33:13.231580 master-0 kubenswrapper[4041]: I0320 08:33:13.229226 4041 flags.go:64] FLAG: --pod-max-pids="-1"
Mar 20 08:33:13.231580 master-0 kubenswrapper[4041]: I0320 08:33:13.229232 4041 flags.go:64] FLAG: --pods-per-core="0"
Mar 20 08:33:13.231580 master-0 kubenswrapper[4041]: I0320 08:33:13.229237 4041 flags.go:64] FLAG: --port="10250"
Mar 20 08:33:13.231580 master-0 kubenswrapper[4041]: I0320 08:33:13.229243 4041 flags.go:64] FLAG: --protect-kernel-defaults="false"
Mar 20 08:33:13.231580 master-0 kubenswrapper[4041]: I0320 08:33:13.229248 4041 flags.go:64] FLAG: --provider-id=""
Mar 20 08:33:13.231580 master-0 kubenswrapper[4041]: I0320 08:33:13.229253 4041 flags.go:64] FLAG: --qos-reserved=""
Mar 20 08:33:13.231580 master-0 kubenswrapper[4041]: I0320 08:33:13.229275 4041 flags.go:64] FLAG: --read-only-port="10255"
Mar 20 08:33:13.231580 master-0 kubenswrapper[4041]: I0320 08:33:13.229282 4041 flags.go:64] FLAG: --register-node="true"
Mar 20 08:33:13.231580 master-0 kubenswrapper[4041]: I0320 08:33:13.229287 4041 flags.go:64] FLAG: --register-schedulable="true"
Mar 20 08:33:13.231580 master-0 kubenswrapper[4041]: I0320 08:33:13.229292 4041 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Mar 20 08:33:13.231580 master-0 kubenswrapper[4041]: I0320 08:33:13.229302 4041 flags.go:64] FLAG: --registry-burst="10"
Mar 20 08:33:13.232038 master-0 kubenswrapper[4041]: I0320 08:33:13.229307 4041 flags.go:64] FLAG: --registry-qps="5"
Mar 20 08:33:13.232038 master-0 kubenswrapper[4041]: I0320 08:33:13.229314 4041 flags.go:64] FLAG: --reserved-cpus=""
Mar 20 08:33:13.232038 master-0 kubenswrapper[4041]: I0320 08:33:13.229319 4041 flags.go:64] FLAG: --reserved-memory=""
Mar 20 08:33:13.232038 master-0 kubenswrapper[4041]: I0320 08:33:13.229326 4041 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Mar 20 08:33:13.232038 master-0 kubenswrapper[4041]: I0320 08:33:13.229332 4041 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Mar 20 08:33:13.232038 master-0 kubenswrapper[4041]: I0320 08:33:13.229338 4041 flags.go:64] FLAG: --rotate-certificates="false"
Mar 20 08:33:13.232038 master-0 kubenswrapper[4041]: I0320 08:33:13.229344 4041 flags.go:64] FLAG: --rotate-server-certificates="false"
Mar 20 08:33:13.232038 master-0 kubenswrapper[4041]: I0320 08:33:13.229349 4041 flags.go:64] FLAG: --runonce="false"
Mar 20 08:33:13.232038 master-0 kubenswrapper[4041]: I0320 08:33:13.229354 4041 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Mar 20 08:33:13.232038 master-0 kubenswrapper[4041]: I0320 08:33:13.229360 4041 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Mar 20 08:33:13.232038 master-0 kubenswrapper[4041]: I0320 08:33:13.229365 4041 flags.go:64] FLAG: --seccomp-default="false"
Mar 20 08:33:13.232038 master-0 kubenswrapper[4041]: I0320 08:33:13.229371 4041 flags.go:64] FLAG: --serialize-image-pulls="true"
Mar 20 08:33:13.232038 master-0 kubenswrapper[4041]: I0320 08:33:13.229380 4041 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Mar 20 08:33:13.232038 master-0 kubenswrapper[4041]: I0320 08:33:13.229386 4041 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Mar 20 08:33:13.232038 master-0 kubenswrapper[4041]: I0320 08:33:13.229427 4041 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Mar 20 08:33:13.232038 master-0 kubenswrapper[4041]: I0320 08:33:13.229433 4041 flags.go:64] FLAG: --storage-driver-password="root"
Mar 20 08:33:13.232038 master-0 kubenswrapper[4041]: I0320 08:33:13.229439 4041 flags.go:64] FLAG: --storage-driver-secure="false"
Mar 20 08:33:13.232038 master-0 kubenswrapper[4041]: I0320 08:33:13.229444 4041 flags.go:64] FLAG: --storage-driver-table="stats"
Mar 20 08:33:13.232038 master-0 kubenswrapper[4041]: I0320 08:33:13.229450 4041 flags.go:64] FLAG: --storage-driver-user="root"
Mar 20 08:33:13.232038 master-0 kubenswrapper[4041]: I0320 08:33:13.229455 4041 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Mar 20 08:33:13.232038 master-0 kubenswrapper[4041]: I0320 08:33:13.229461 4041 flags.go:64] FLAG: --sync-frequency="1m0s"
Mar 20 08:33:13.232038 master-0 kubenswrapper[4041]: I0320 08:33:13.229466 4041 flags.go:64] FLAG: --system-cgroups=""
Mar 20 08:33:13.232038 master-0 kubenswrapper[4041]: I0320 08:33:13.229473 4041 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Mar 20 08:33:13.232038 master-0 kubenswrapper[4041]: I0320 08:33:13.229482 4041 flags.go:64] FLAG: --system-reserved-cgroup=""
Mar 20 08:33:13.232038 master-0 kubenswrapper[4041]: I0320 08:33:13.229488 4041 flags.go:64] FLAG: --tls-cert-file=""
Mar 20 08:33:13.232649 master-0 kubenswrapper[4041]: I0320 08:33:13.229492 4041 flags.go:64] FLAG: --tls-cipher-suites="[]"
Mar 20 08:33:13.232649 master-0 kubenswrapper[4041]: I0320 08:33:13.229499 4041 flags.go:64] FLAG: --tls-min-version=""
Mar 20 08:33:13.232649 master-0 kubenswrapper[4041]: I0320 08:33:13.229505 4041 flags.go:64] FLAG: --tls-private-key-file=""
Mar 20 08:33:13.232649 master-0 kubenswrapper[4041]: I0320 08:33:13.229510 4041 flags.go:64] FLAG: --topology-manager-policy="none"
Mar 20 08:33:13.232649 master-0 kubenswrapper[4041]: I0320 08:33:13.229515 4041 flags.go:64] FLAG: --topology-manager-policy-options=""
Mar 20 08:33:13.232649 master-0 kubenswrapper[4041]: I0320 08:33:13.229521 4041 flags.go:64] FLAG: --topology-manager-scope="container"
Mar 20 08:33:13.232649 master-0 kubenswrapper[4041]: I0320 08:33:13.229526 4041 flags.go:64] FLAG: --v="2"
Mar 20 08:33:13.232649 master-0 kubenswrapper[4041]: I0320 08:33:13.229534 4041 flags.go:64] FLAG: --version="false"
Mar 20 08:33:13.232649 master-0 kubenswrapper[4041]: I0320 08:33:13.229541 4041 flags.go:64] FLAG: --vmodule=""
Mar 20 08:33:13.232649 master-0 kubenswrapper[4041]: I0320 08:33:13.229549 4041 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Mar 20 08:33:13.232649 master-0 kubenswrapper[4041]: I0320 08:33:13.229555 4041 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Mar 20 08:33:13.232649 master-0 kubenswrapper[4041]: W0320 08:33:13.229686 4041 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 20 08:33:13.232649 master-0 kubenswrapper[4041]: W0320 08:33:13.229694 4041 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 20 08:33:13.232649 master-0 kubenswrapper[4041]: W0320 08:33:13.229700 4041 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 20 08:33:13.232649 master-0 kubenswrapper[4041]: W0320 08:33:13.229705 4041 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 20 08:33:13.232649 master-0 kubenswrapper[4041]: W0320 08:33:13.229710 4041 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 20 08:33:13.232649 master-0 kubenswrapper[4041]: W0320 08:33:13.229715 4041 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 20 08:33:13.232649 master-0 kubenswrapper[4041]: W0320 08:33:13.229722 4041 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 20 08:33:13.232649 master-0 kubenswrapper[4041]: W0320 08:33:13.229728 4041 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 20 08:33:13.232649 master-0 kubenswrapper[4041]: W0320 08:33:13.229736 4041 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 20 08:33:13.232649 master-0 kubenswrapper[4041]: W0320 08:33:13.229742 4041 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 20 08:33:13.233070 master-0 kubenswrapper[4041]: W0320 08:33:13.229748 4041 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 20 08:33:13.233070 master-0 kubenswrapper[4041]: W0320 08:33:13.229753 4041 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 20 08:33:13.233070 master-0 kubenswrapper[4041]: W0320 08:33:13.229759 4041 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 20 08:33:13.233070 master-0 kubenswrapper[4041]: W0320 08:33:13.229763 4041 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 20 08:33:13.233070 master-0 kubenswrapper[4041]: W0320 08:33:13.229769 4041 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 20 08:33:13.233070 master-0 kubenswrapper[4041]: W0320 08:33:13.229774 4041 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 20 08:33:13.233070 master-0 kubenswrapper[4041]: W0320 08:33:13.229778 4041 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 20 08:33:13.233070 master-0 kubenswrapper[4041]: W0320 08:33:13.229783 4041 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 20 08:33:13.233070 master-0 kubenswrapper[4041]: W0320 08:33:13.229787 4041 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 20 08:33:13.233070 master-0 kubenswrapper[4041]: W0320 08:33:13.229792 4041 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 20 08:33:13.233070 master-0 kubenswrapper[4041]: W0320 08:33:13.229797 4041 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 20 08:33:13.233070 master-0 kubenswrapper[4041]: W0320 08:33:13.229801 4041 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 20 08:33:13.233070 master-0 kubenswrapper[4041]: W0320 08:33:13.229808 4041 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 20 08:33:13.233070 master-0 kubenswrapper[4041]: W0320 08:33:13.229812 4041 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 20 08:33:13.233070 master-0 kubenswrapper[4041]: W0320 08:33:13.229817 4041 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 20 08:33:13.233070 master-0 kubenswrapper[4041]: W0320 08:33:13.229822 4041 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 20 08:33:13.233070 master-0 kubenswrapper[4041]: W0320 08:33:13.229826 4041 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 20 08:33:13.233070 master-0 kubenswrapper[4041]: W0320 08:33:13.229831 4041 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 20 08:33:13.233070 master-0 kubenswrapper[4041]: W0320 08:33:13.229835 4041 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 20 08:33:13.233530 master-0 kubenswrapper[4041]: W0320 08:33:13.229840 4041 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 20 08:33:13.233530 master-0 kubenswrapper[4041]: W0320 08:33:13.229844 4041 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 20 08:33:13.233530 master-0
kubenswrapper[4041]: W0320 08:33:13.229848 4041 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 20 08:33:13.233530 master-0 kubenswrapper[4041]: W0320 08:33:13.229853 4041 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 20 08:33:13.233530 master-0 kubenswrapper[4041]: W0320 08:33:13.229859 4041 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Mar 20 08:33:13.233530 master-0 kubenswrapper[4041]: W0320 08:33:13.229864 4041 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 20 08:33:13.233530 master-0 kubenswrapper[4041]: W0320 08:33:13.229869 4041 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 20 08:33:13.233530 master-0 kubenswrapper[4041]: W0320 08:33:13.229873 4041 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 20 08:33:13.233530 master-0 kubenswrapper[4041]: W0320 08:33:13.229877 4041 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 20 08:33:13.233530 master-0 kubenswrapper[4041]: W0320 08:33:13.229882 4041 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 20 08:33:13.233530 master-0 kubenswrapper[4041]: W0320 08:33:13.229886 4041 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 20 08:33:13.233530 master-0 kubenswrapper[4041]: W0320 08:33:13.229892 4041 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 20 08:33:13.233530 master-0 kubenswrapper[4041]: W0320 08:33:13.229897 4041 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 20 08:33:13.233530 master-0 kubenswrapper[4041]: W0320 08:33:13.229901 4041 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 20 08:33:13.233530 master-0 kubenswrapper[4041]: W0320 08:33:13.229906 4041 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 20 08:33:13.233530 master-0 
kubenswrapper[4041]: W0320 08:33:13.229910 4041 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 20 08:33:13.233530 master-0 kubenswrapper[4041]: W0320 08:33:13.229914 4041 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 20 08:33:13.233530 master-0 kubenswrapper[4041]: W0320 08:33:13.229917 4041 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 20 08:33:13.233530 master-0 kubenswrapper[4041]: W0320 08:33:13.229922 4041 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 20 08:33:13.233530 master-0 kubenswrapper[4041]: W0320 08:33:13.229927 4041 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 20 08:33:13.233958 master-0 kubenswrapper[4041]: W0320 08:33:13.229931 4041 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 20 08:33:13.233958 master-0 kubenswrapper[4041]: W0320 08:33:13.229935 4041 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 20 08:33:13.233958 master-0 kubenswrapper[4041]: W0320 08:33:13.229940 4041 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 20 08:33:13.233958 master-0 kubenswrapper[4041]: W0320 08:33:13.229944 4041 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 20 08:33:13.233958 master-0 kubenswrapper[4041]: W0320 08:33:13.229949 4041 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 20 08:33:13.233958 master-0 kubenswrapper[4041]: W0320 08:33:13.229953 4041 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 20 08:33:13.233958 master-0 kubenswrapper[4041]: W0320 08:33:13.229958 4041 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 20 08:33:13.233958 master-0 kubenswrapper[4041]: W0320 08:33:13.229964 4041 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 20 08:33:13.233958 master-0 kubenswrapper[4041]: W0320 08:33:13.229969 4041 feature_gate.go:330] unrecognized 
feature gate: AWSEFSDriverVolumeMetrics Mar 20 08:33:13.233958 master-0 kubenswrapper[4041]: W0320 08:33:13.229976 4041 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Mar 20 08:33:13.233958 master-0 kubenswrapper[4041]: W0320 08:33:13.229982 4041 feature_gate.go:330] unrecognized feature gate: Example Mar 20 08:33:13.233958 master-0 kubenswrapper[4041]: W0320 08:33:13.229987 4041 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 20 08:33:13.233958 master-0 kubenswrapper[4041]: W0320 08:33:13.229992 4041 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 20 08:33:13.233958 master-0 kubenswrapper[4041]: W0320 08:33:13.229996 4041 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 20 08:33:13.233958 master-0 kubenswrapper[4041]: W0320 08:33:13.230001 4041 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 20 08:33:13.233958 master-0 kubenswrapper[4041]: W0320 08:33:13.230006 4041 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 20 08:33:13.233958 master-0 kubenswrapper[4041]: W0320 08:33:13.230010 4041 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 20 08:33:13.233958 master-0 kubenswrapper[4041]: W0320 08:33:13.230015 4041 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 20 08:33:13.233958 master-0 kubenswrapper[4041]: W0320 08:33:13.230020 4041 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 20 08:33:13.233958 master-0 kubenswrapper[4041]: W0320 08:33:13.230024 4041 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 20 08:33:13.234390 master-0 kubenswrapper[4041]: W0320 08:33:13.230029 4041 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 20 08:33:13.234390 master-0 kubenswrapper[4041]: W0320 08:33:13.230033 4041 feature_gate.go:330] unrecognized feature gate: 
PrivateHostedZoneAWS Mar 20 08:33:13.234390 master-0 kubenswrapper[4041]: W0320 08:33:13.230038 4041 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 20 08:33:13.234390 master-0 kubenswrapper[4041]: I0320 08:33:13.230050 4041 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 20 08:33:13.241513 master-0 kubenswrapper[4041]: I0320 08:33:13.241342 4041 server.go:491] "Kubelet version" kubeletVersion="v1.31.14" Mar 20 08:33:13.241513 master-0 kubenswrapper[4041]: I0320 08:33:13.241390 4041 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 20 08:33:13.241642 master-0 kubenswrapper[4041]: W0320 08:33:13.241620 4041 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 20 08:33:13.241642 master-0 kubenswrapper[4041]: W0320 08:33:13.241639 4041 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 20 08:33:13.241726 master-0 kubenswrapper[4041]: W0320 08:33:13.241650 4041 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 20 08:33:13.241726 master-0 kubenswrapper[4041]: W0320 08:33:13.241659 4041 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 20 08:33:13.241726 master-0 kubenswrapper[4041]: W0320 08:33:13.241668 4041 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 20 08:33:13.241726 master-0 kubenswrapper[4041]: W0320 08:33:13.241677 4041 feature_gate.go:330] unrecognized feature gate: 
VSphereStaticIPs Mar 20 08:33:13.241726 master-0 kubenswrapper[4041]: W0320 08:33:13.241685 4041 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 20 08:33:13.241726 master-0 kubenswrapper[4041]: W0320 08:33:13.241693 4041 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 20 08:33:13.241726 master-0 kubenswrapper[4041]: W0320 08:33:13.241702 4041 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 20 08:33:13.241726 master-0 kubenswrapper[4041]: W0320 08:33:13.241712 4041 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 20 08:33:13.241726 master-0 kubenswrapper[4041]: W0320 08:33:13.241719 4041 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 20 08:33:13.241726 master-0 kubenswrapper[4041]: W0320 08:33:13.241727 4041 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 20 08:33:13.241726 master-0 kubenswrapper[4041]: W0320 08:33:13.241737 4041 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 20 08:33:13.242057 master-0 kubenswrapper[4041]: W0320 08:33:13.241746 4041 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 20 08:33:13.242057 master-0 kubenswrapper[4041]: W0320 08:33:13.241755 4041 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 20 08:33:13.242057 master-0 kubenswrapper[4041]: W0320 08:33:13.241763 4041 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 20 08:33:13.242057 master-0 kubenswrapper[4041]: W0320 08:33:13.241771 4041 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 20 08:33:13.242057 master-0 kubenswrapper[4041]: W0320 08:33:13.241778 4041 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 20 08:33:13.242057 master-0 kubenswrapper[4041]: W0320 08:33:13.241789 4041 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Mar 20 08:33:13.242057 master-0 kubenswrapper[4041]: W0320 08:33:13.241800 4041 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 20 08:33:13.242057 master-0 kubenswrapper[4041]: W0320 08:33:13.241808 4041 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 20 08:33:13.242057 master-0 kubenswrapper[4041]: W0320 08:33:13.241816 4041 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 20 08:33:13.242057 master-0 kubenswrapper[4041]: W0320 08:33:13.241824 4041 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 20 08:33:13.242057 master-0 kubenswrapper[4041]: W0320 08:33:13.241832 4041 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 20 08:33:13.242057 master-0 kubenswrapper[4041]: W0320 08:33:13.241839 4041 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 20 08:33:13.242057 master-0 kubenswrapper[4041]: W0320 08:33:13.241847 4041 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 20 08:33:13.242057 master-0 kubenswrapper[4041]: W0320 08:33:13.241855 4041 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 20 08:33:13.242057 master-0 kubenswrapper[4041]: W0320 08:33:13.241863 4041 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 20 08:33:13.242057 master-0 kubenswrapper[4041]: W0320 08:33:13.241871 4041 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 20 08:33:13.242057 master-0 kubenswrapper[4041]: W0320 08:33:13.241878 4041 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 20 08:33:13.242057 master-0 kubenswrapper[4041]: W0320 08:33:13.241886 4041 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 20 08:33:13.242057 master-0 kubenswrapper[4041]: W0320 08:33:13.241894 4041 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 20 08:33:13.242057 master-0 kubenswrapper[4041]: W0320 
08:33:13.241902 4041 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 20 08:33:13.242666 master-0 kubenswrapper[4041]: W0320 08:33:13.241909 4041 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 20 08:33:13.242666 master-0 kubenswrapper[4041]: W0320 08:33:13.241919 4041 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 20 08:33:13.242666 master-0 kubenswrapper[4041]: W0320 08:33:13.241927 4041 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 20 08:33:13.242666 master-0 kubenswrapper[4041]: W0320 08:33:13.241935 4041 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 20 08:33:13.242666 master-0 kubenswrapper[4041]: W0320 08:33:13.241943 4041 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 20 08:33:13.242666 master-0 kubenswrapper[4041]: W0320 08:33:13.241951 4041 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 20 08:33:13.242666 master-0 kubenswrapper[4041]: W0320 08:33:13.241958 4041 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 20 08:33:13.242666 master-0 kubenswrapper[4041]: W0320 08:33:13.241969 4041 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Mar 20 08:33:13.242666 master-0 kubenswrapper[4041]: W0320 08:33:13.241982 4041 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Mar 20 08:33:13.242666 master-0 kubenswrapper[4041]: W0320 08:33:13.241992 4041 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 20 08:33:13.242666 master-0 kubenswrapper[4041]: W0320 08:33:13.242001 4041 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Mar 20 08:33:13.242666 master-0 kubenswrapper[4041]: W0320 08:33:13.242012 4041 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 20 08:33:13.242666 master-0 kubenswrapper[4041]: W0320 08:33:13.242020 4041 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 20 08:33:13.242666 master-0 kubenswrapper[4041]: W0320 08:33:13.242032 4041 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 20 08:33:13.242666 master-0 kubenswrapper[4041]: W0320 08:33:13.242040 4041 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 20 08:33:13.242666 master-0 kubenswrapper[4041]: W0320 08:33:13.242049 4041 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 20 08:33:13.242666 master-0 kubenswrapper[4041]: W0320 08:33:13.242058 4041 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 20 08:33:13.242666 master-0 kubenswrapper[4041]: W0320 08:33:13.242066 4041 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 20 08:33:13.244592 master-0 kubenswrapper[4041]: W0320 08:33:13.242075 4041 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 20 08:33:13.244592 master-0 kubenswrapper[4041]: W0320 08:33:13.242083 4041 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 20 08:33:13.244592 master-0 kubenswrapper[4041]: W0320 08:33:13.242091 4041 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 20 08:33:13.244592 master-0 kubenswrapper[4041]: W0320 08:33:13.242099 4041 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 20 08:33:13.244592 master-0 kubenswrapper[4041]: W0320 08:33:13.242108 4041 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 20 08:33:13.244592 master-0 kubenswrapper[4041]: W0320 08:33:13.242116 4041 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 20 08:33:13.244592 
master-0 kubenswrapper[4041]: W0320 08:33:13.242124 4041 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 20 08:33:13.244592 master-0 kubenswrapper[4041]: W0320 08:33:13.242132 4041 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 20 08:33:13.244592 master-0 kubenswrapper[4041]: W0320 08:33:13.242139 4041 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 20 08:33:13.244592 master-0 kubenswrapper[4041]: W0320 08:33:13.242147 4041 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 20 08:33:13.244592 master-0 kubenswrapper[4041]: W0320 08:33:13.242155 4041 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 20 08:33:13.244592 master-0 kubenswrapper[4041]: W0320 08:33:13.242163 4041 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 20 08:33:13.244592 master-0 kubenswrapper[4041]: W0320 08:33:13.242171 4041 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 20 08:33:13.244592 master-0 kubenswrapper[4041]: W0320 08:33:13.242179 4041 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 20 08:33:13.244592 master-0 kubenswrapper[4041]: W0320 08:33:13.242188 4041 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 20 08:33:13.244592 master-0 kubenswrapper[4041]: W0320 08:33:13.242196 4041 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 20 08:33:13.244592 master-0 kubenswrapper[4041]: W0320 08:33:13.242204 4041 feature_gate.go:330] unrecognized feature gate: Example Mar 20 08:33:13.244592 master-0 kubenswrapper[4041]: W0320 08:33:13.242211 4041 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 20 08:33:13.244592 master-0 kubenswrapper[4041]: W0320 08:33:13.242219 4041 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 20 08:33:13.244592 master-0 kubenswrapper[4041]: W0320 08:33:13.242229 4041 feature_gate.go:330] 
unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Mar 20 08:33:13.245189 master-0 kubenswrapper[4041]: W0320 08:33:13.242237 4041 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 20 08:33:13.245189 master-0 kubenswrapper[4041]: I0320 08:33:13.242250 4041 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 20 08:33:13.245189 master-0 kubenswrapper[4041]: W0320 08:33:13.242521 4041 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 20 08:33:13.245189 master-0 kubenswrapper[4041]: W0320 08:33:13.242533 4041 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 20 08:33:13.245189 master-0 kubenswrapper[4041]: W0320 08:33:13.242543 4041 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 20 08:33:13.245189 master-0 kubenswrapper[4041]: W0320 08:33:13.242551 4041 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 20 08:33:13.245189 master-0 kubenswrapper[4041]: W0320 08:33:13.242559 4041 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 20 08:33:13.245189 master-0 kubenswrapper[4041]: W0320 08:33:13.242568 4041 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 20 08:33:13.245189 master-0 kubenswrapper[4041]: W0320 08:33:13.242579 4041 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Mar 20 08:33:13.245189 master-0 kubenswrapper[4041]: W0320 08:33:13.242589 4041 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 20 08:33:13.245189 master-0 kubenswrapper[4041]: W0320 08:33:13.242598 4041 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 20 08:33:13.245189 master-0 kubenswrapper[4041]: W0320 08:33:13.242607 4041 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 20 08:33:13.245189 master-0 kubenswrapper[4041]: W0320 08:33:13.242616 4041 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 20 08:33:13.245189 master-0 kubenswrapper[4041]: W0320 08:33:13.242624 4041 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 20 08:33:13.245189 master-0 kubenswrapper[4041]: W0320 08:33:13.242633 4041 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 20 08:33:13.245709 master-0 kubenswrapper[4041]: W0320 08:33:13.242640 4041 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 20 08:33:13.245709 master-0 kubenswrapper[4041]: W0320 08:33:13.242648 4041 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 20 08:33:13.245709 master-0 kubenswrapper[4041]: W0320 08:33:13.242658 4041 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Mar 20 08:33:13.245709 master-0 kubenswrapper[4041]: W0320 08:33:13.242668 4041 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 20 08:33:13.245709 master-0 kubenswrapper[4041]: W0320 08:33:13.242677 4041 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 20 08:33:13.245709 master-0 kubenswrapper[4041]: W0320 08:33:13.242687 4041 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 20 08:33:13.245709 master-0 kubenswrapper[4041]: W0320 08:33:13.242696 4041 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 20 08:33:13.245709 master-0 kubenswrapper[4041]: W0320 08:33:13.242706 4041 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 20 08:33:13.245709 master-0 kubenswrapper[4041]: W0320 08:33:13.242714 4041 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 20 08:33:13.245709 master-0 kubenswrapper[4041]: W0320 08:33:13.242724 4041 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Mar 20 08:33:13.245709 master-0 kubenswrapper[4041]: W0320 08:33:13.242734 4041 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 20 08:33:13.245709 master-0 kubenswrapper[4041]: W0320 08:33:13.242743 4041 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 20 08:33:13.245709 master-0 kubenswrapper[4041]: W0320 08:33:13.242752 4041 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 20 08:33:13.245709 master-0 kubenswrapper[4041]: W0320 08:33:13.242760 4041 feature_gate.go:330] unrecognized feature gate: Example Mar 20 08:33:13.245709 master-0 kubenswrapper[4041]: W0320 08:33:13.242768 4041 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 20 08:33:13.245709 master-0 kubenswrapper[4041]: W0320 08:33:13.242776 4041 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 20 08:33:13.245709 master-0 kubenswrapper[4041]: W0320 08:33:13.242784 4041 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 20 08:33:13.245709 master-0 kubenswrapper[4041]: W0320 08:33:13.242792 4041 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 20 08:33:13.245709 master-0 kubenswrapper[4041]: W0320 08:33:13.242799 4041 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 20 08:33:13.246474 master-0 kubenswrapper[4041]: W0320 08:33:13.242807 4041 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 20 08:33:13.246474 master-0 kubenswrapper[4041]: W0320 08:33:13.242817 4041 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 20 08:33:13.246474 master-0 kubenswrapper[4041]: W0320 08:33:13.242825 4041 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 20 08:33:13.246474 master-0 kubenswrapper[4041]: W0320 08:33:13.242833 4041 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 20 08:33:13.246474 master-0 kubenswrapper[4041]: W0320 
08:33:13.242841 4041 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 20 08:33:13.246474 master-0 kubenswrapper[4041]: W0320 08:33:13.242848 4041 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 20 08:33:13.246474 master-0 kubenswrapper[4041]: W0320 08:33:13.242857 4041 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 20 08:33:13.246474 master-0 kubenswrapper[4041]: W0320 08:33:13.242864 4041 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 20 08:33:13.246474 master-0 kubenswrapper[4041]: W0320 08:33:13.242874 4041 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 20 08:33:13.246474 master-0 kubenswrapper[4041]: W0320 08:33:13.242883 4041 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 20 08:33:13.246474 master-0 kubenswrapper[4041]: W0320 08:33:13.242891 4041 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 20 08:33:13.246474 master-0 kubenswrapper[4041]: W0320 08:33:13.242899 4041 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 20 08:33:13.246474 master-0 kubenswrapper[4041]: W0320 08:33:13.242908 4041 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 20 08:33:13.246474 master-0 kubenswrapper[4041]: W0320 08:33:13.242916 4041 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 20 08:33:13.246474 master-0 kubenswrapper[4041]: W0320 08:33:13.242924 4041 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 20 08:33:13.246474 master-0 kubenswrapper[4041]: W0320 08:33:13.242931 4041 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 20 08:33:13.246474 master-0 kubenswrapper[4041]: W0320 08:33:13.242939 4041 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 20 08:33:13.246474 master-0 kubenswrapper[4041]: W0320 08:33:13.242947 4041 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 20 08:33:13.246474 master-0 kubenswrapper[4041]: W0320 08:33:13.242955 4041 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 20 08:33:13.247044 master-0 kubenswrapper[4041]: W0320 08:33:13.242963 4041 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 20 08:33:13.247044 master-0 kubenswrapper[4041]: W0320 08:33:13.242972 4041 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 20 08:33:13.247044 master-0 kubenswrapper[4041]: W0320 08:33:13.242980 4041 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 20 08:33:13.247044 master-0 kubenswrapper[4041]: W0320 08:33:13.242988 4041 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 20 08:33:13.247044 master-0 kubenswrapper[4041]: W0320 08:33:13.242996 4041 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 20 08:33:13.247044 master-0 kubenswrapper[4041]: W0320 08:33:13.243004 4041 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 20 08:33:13.247044 master-0 kubenswrapper[4041]: W0320 08:33:13.243012 4041 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 20 08:33:13.247044 master-0 kubenswrapper[4041]: W0320 08:33:13.243019 4041 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 20 08:33:13.247044 master-0 kubenswrapper[4041]: W0320 08:33:13.243027 4041 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 20 08:33:13.247044 master-0 kubenswrapper[4041]: W0320 08:33:13.243035 4041 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 20 08:33:13.247044 master-0 kubenswrapper[4041]: W0320 08:33:13.243043 4041 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 20 08:33:13.247044 master-0 kubenswrapper[4041]: W0320 08:33:13.243052 4041 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 20 08:33:13.247044 master-0 kubenswrapper[4041]: W0320 08:33:13.243060 4041 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 20 08:33:13.247044 master-0 kubenswrapper[4041]: W0320 08:33:13.243068 4041 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 20 08:33:13.247044 master-0 kubenswrapper[4041]: W0320 08:33:13.243076 4041 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 20 08:33:13.247044 master-0 kubenswrapper[4041]: W0320 08:33:13.243083 4041 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 20 08:33:13.247044 master-0 kubenswrapper[4041]: W0320 08:33:13.243091 4041 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 20 08:33:13.247044 master-0 kubenswrapper[4041]: W0320 08:33:13.243099 4041 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 20 08:33:13.247044 master-0 kubenswrapper[4041]: W0320 08:33:13.243159 4041 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 20 08:33:13.247044 master-0 kubenswrapper[4041]: W0320 08:33:13.244606 4041 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 20 08:33:13.247649 master-0 kubenswrapper[4041]: W0320 08:33:13.244624 4041 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 20 08:33:13.247649 master-0 kubenswrapper[4041]: I0320 08:33:13.244646 4041 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 20 08:33:13.247649 master-0 kubenswrapper[4041]: I0320 08:33:13.246452 4041 server.go:940] "Client rotation is on, will bootstrap in background"
Mar 20 08:33:13.250299 master-0 kubenswrapper[4041]: I0320 08:33:13.250239 4041 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir"
Mar 20 08:33:13.251603 master-0 kubenswrapper[4041]: I0320 08:33:13.251566 4041 server.go:997] "Starting client certificate rotation"
Mar 20 08:33:13.251655 master-0 kubenswrapper[4041]: I0320 08:33:13.251608 4041 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Mar 20 08:33:13.253456 master-0 kubenswrapper[4041]: I0320 08:33:13.253363 4041 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Mar 20 08:33:13.278997 master-0 kubenswrapper[4041]: I0320 08:33:13.278934 4041 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 20 08:33:13.283943 master-0 kubenswrapper[4041]: I0320 08:33:13.283855 4041 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 20 08:33:13.284554 master-0 kubenswrapper[4041]: E0320 08:33:13.284470 4041 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 20 08:33:13.299543 master-0 kubenswrapper[4041]: I0320 08:33:13.299467 4041 log.go:25] "Validated CRI v1 runtime API"
Mar 20 08:33:13.306140 master-0 kubenswrapper[4041]: I0320 08:33:13.306075 4041 log.go:25] "Validated CRI v1 image API"
Mar 20 08:33:13.309735 master-0 kubenswrapper[4041]: I0320 08:33:13.309642 4041 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 20 08:33:13.314256 master-0 kubenswrapper[4041]: I0320 08:33:13.314194 4041 fs.go:135] Filesystem UUIDs: map[7B77-95E7:/dev/vda2 8bd1c714-85b3-42d8-843c-32eb4beee773:/dev/vda3 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4]
Mar 20 08:33:13.314256 master-0 kubenswrapper[4041]: I0320 08:33:13.314239 4041 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0}]
Mar 20 08:33:13.342212 master-0 kubenswrapper[4041]: I0320 08:33:13.341783 4041 manager.go:217] Machine: {Timestamp:2026-03-20 08:33:13.33879158 +0000 UTC m=+0.589137155 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:68fa82f9afdb4f4db7851aefd1680b64 SystemUUID:68fa82f9-afdb-4f4d-b785-1aefd1680b64 BootID:8450f042-88d6-4841-ac46-8e16fb0e4c12 Filesystems:[{Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:64:4a:87 Speed:-1 Mtu:9000} {Name:ovs-system MacAddress:1e:1b:15:bf:6f:99 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Mar 20 08:33:13.342212 master-0 kubenswrapper[4041]: I0320 08:33:13.342139 4041 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Mar 20 08:33:13.342490 master-0 kubenswrapper[4041]: I0320 08:33:13.342378 4041 manager.go:233] Version: {KernelVersion:5.14.0-427.113.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202603021444-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Mar 20 08:33:13.345351 master-0 kubenswrapper[4041]: I0320 08:33:13.345249 4041 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Mar 20 08:33:13.345696 master-0 kubenswrapper[4041]: I0320 08:33:13.345641 4041 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 20 08:33:13.346023 master-0 kubenswrapper[4041]: I0320 08:33:13.345687 4041 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 20 08:33:13.346106 master-0 kubenswrapper[4041]: I0320 08:33:13.346055 4041 topology_manager.go:138] "Creating topology manager with none policy"
Mar 20 08:33:13.346106 master-0 kubenswrapper[4041]: I0320 08:33:13.346074 4041 container_manager_linux.go:303] "Creating device plugin manager"
Mar 20 08:33:13.346302 master-0 kubenswrapper[4041]: I0320 08:33:13.346234 4041 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Mar 20 08:33:13.346364 master-0 kubenswrapper[4041]: I0320 08:33:13.346327 4041 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Mar 20 08:33:13.346626 master-0 kubenswrapper[4041]: I0320 08:33:13.346588 4041 state_mem.go:36] "Initialized new in-memory state store"
Mar 20 08:33:13.346754 master-0 kubenswrapper[4041]: I0320 08:33:13.346720 4041 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Mar 20 08:33:13.351302 master-0 kubenswrapper[4041]: I0320 08:33:13.351237 4041 kubelet.go:418] "Attempting to sync node with API server"
Mar 20 08:33:13.351947 master-0 kubenswrapper[4041]: I0320 08:33:13.351900 4041 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 20 08:33:13.352020 master-0 kubenswrapper[4041]: I0320 08:33:13.352007 4041 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Mar 20 08:33:13.352073 master-0 kubenswrapper[4041]: I0320 08:33:13.352036 4041 kubelet.go:324] "Adding apiserver pod source"
Mar 20 08:33:13.352131 master-0 kubenswrapper[4041]: I0320 08:33:13.352080 4041 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 20 08:33:13.356897 master-0 kubenswrapper[4041]: W0320 08:33:13.356789 4041 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 20 08:33:13.357029 master-0 kubenswrapper[4041]: E0320 08:33:13.356939 4041 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 20 08:33:13.357029 master-0 kubenswrapper[4041]: W0320 08:33:13.356974 4041 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 20 08:33:13.357207 master-0 kubenswrapper[4041]: E0320 08:33:13.357062 4041 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 20 08:33:13.358397 master-0 kubenswrapper[4041]: I0320 08:33:13.358347 4041 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-8.rhaos4.18.gitd78977c.el9" apiVersion="v1"
Mar 20 08:33:13.360339 master-0 kubenswrapper[4041]: I0320 08:33:13.360256 4041 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 20 08:33:13.360655 master-0 kubenswrapper[4041]: I0320 08:33:13.360605 4041 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Mar 20 08:33:13.360655 master-0 kubenswrapper[4041]: I0320 08:33:13.360645 4041 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Mar 20 08:33:13.360655 master-0 kubenswrapper[4041]: I0320 08:33:13.360658 4041 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Mar 20 08:33:13.360847 master-0 kubenswrapper[4041]: I0320 08:33:13.360673 4041 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Mar 20 08:33:13.360847 master-0 kubenswrapper[4041]: I0320 08:33:13.360686 4041 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Mar 20 08:33:13.360847 master-0 kubenswrapper[4041]: I0320 08:33:13.360698 4041 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Mar 20 08:33:13.360847 master-0 kubenswrapper[4041]: I0320 08:33:13.360713 4041 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Mar 20 08:33:13.360847 master-0 kubenswrapper[4041]: I0320 08:33:13.360726 4041 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Mar 20 08:33:13.360847 master-0 kubenswrapper[4041]: I0320 08:33:13.360741 4041 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Mar 20 08:33:13.360847 master-0 kubenswrapper[4041]: I0320 08:33:13.360774 4041 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Mar 20 08:33:13.360847 master-0 kubenswrapper[4041]: I0320 08:33:13.360792 4041 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Mar 20 08:33:13.360847 master-0 kubenswrapper[4041]: I0320 08:33:13.360831 4041 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Mar 20 08:33:13.362051 master-0 kubenswrapper[4041]: I0320 08:33:13.361998 4041 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Mar 20 08:33:13.362859 master-0 kubenswrapper[4041]: I0320 08:33:13.362807 4041 server.go:1280] "Started kubelet"
Mar 20 08:33:13.364008 master-0 kubenswrapper[4041]: I0320 08:33:13.363926 4041 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Mar 20 08:33:13.364561 master-0 kubenswrapper[4041]: I0320 08:33:13.364433 4041 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 20 08:33:13.364711 master-0 kubenswrapper[4041]: I0320 08:33:13.364563 4041 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 20 08:33:13.364711 master-0 kubenswrapper[4041]: I0320 08:33:13.364593 4041 server_v1.go:47] "podresources" method="list" useActivePods=true
Mar 20 08:33:13.364726 master-0 systemd[1]: Started Kubernetes Kubelet.
Mar 20 08:33:13.365371 master-0 kubenswrapper[4041]: I0320 08:33:13.365320 4041 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 20 08:33:13.368686 master-0 kubenswrapper[4041]: I0320 08:33:13.368587 4041 server.go:449] "Adding debug handlers to kubelet server"
Mar 20 08:33:13.369499 master-0 kubenswrapper[4041]: I0320 08:33:13.369428 4041 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Mar 20 08:33:13.369676 master-0 kubenswrapper[4041]: I0320 08:33:13.369653 4041 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 20 08:33:13.369986 master-0 kubenswrapper[4041]: E0320 08:33:13.368901 4041 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/default/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{master-0.189e7f97d77ed152 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 08:33:13.362755922 +0000 UTC m=+0.613101467,LastTimestamp:2026-03-20 08:33:13.362755922 +0000 UTC m=+0.613101467,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 20 08:33:13.370147 master-0 kubenswrapper[4041]: E0320 08:33:13.370096 4041 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 20 08:33:13.372204 master-0 kubenswrapper[4041]: I0320 08:33:13.372155 4041 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Mar 20 08:33:13.372710 master-0 kubenswrapper[4041]: I0320 08:33:13.372555 4041 volume_manager.go:287] "The desired_state_of_world populator starts"
Mar 20 08:33:13.372788 master-0 kubenswrapper[4041]: I0320 08:33:13.372712 4041 volume_manager.go:289] "Starting Kubelet Volume Manager"
Mar 20 08:33:13.374248 master-0 kubenswrapper[4041]: I0320 08:33:13.373562 4041 factory.go:55] Registering systemd factory
Mar 20 08:33:13.374248 master-0 kubenswrapper[4041]: I0320 08:33:13.373608 4041 factory.go:221] Registration of the systemd container factory successfully
Mar 20 08:33:13.374248 master-0 kubenswrapper[4041]: W0320 08:33:13.373634 4041 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 20 08:33:13.374248 master-0 kubenswrapper[4041]: E0320 08:33:13.373765 4041 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 20 08:33:13.374248 master-0 kubenswrapper[4041]: I0320 08:33:13.373999 4041 reconstruct.go:97] "Volume reconstruction finished"
Mar 20 08:33:13.374248 master-0 kubenswrapper[4041]: I0320 08:33:13.374019 4041 reconciler.go:26] "Reconciler: start to sync state"
Mar 20 08:33:13.374248 master-0 kubenswrapper[4041]: I0320 08:33:13.374060 4041 factory.go:153] Registering CRI-O factory
Mar 20 08:33:13.374248 master-0 kubenswrapper[4041]: I0320 08:33:13.374085 4041 factory.go:221] Registration of the crio container factory successfully
Mar 20 08:33:13.374248 master-0 kubenswrapper[4041]: I0320 08:33:13.374201 4041 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Mar 20 08:33:13.374248 master-0 kubenswrapper[4041]: I0320 08:33:13.374246 4041 factory.go:103] Registering Raw factory
Mar 20 08:33:13.375171 master-0 kubenswrapper[4041]: I0320 08:33:13.374347 4041 manager.go:1196] Started watching for new ooms in manager
Mar 20 08:33:13.382222 master-0 kubenswrapper[4041]: I0320 08:33:13.382174 4041 manager.go:319] Starting recovery of all containers
Mar 20 08:33:13.385631 master-0 kubenswrapper[4041]: E0320 08:33:13.385591 4041 kubelet.go:1495] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache"
Mar 20 08:33:13.385812 master-0 kubenswrapper[4041]: E0320 08:33:13.385697 4041 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="200ms"
Mar 20 08:33:13.408506 master-0 kubenswrapper[4041]: I0320 08:33:13.408438 4041 manager.go:324] Recovery completed
Mar 20 08:33:13.422790 master-0 kubenswrapper[4041]: I0320 08:33:13.422735 4041 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 08:33:13.424580 master-0 kubenswrapper[4041]: I0320 08:33:13.424511 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 20 08:33:13.424580 master-0 kubenswrapper[4041]: I0320 08:33:13.424573 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 20 08:33:13.424580 master-0 kubenswrapper[4041]: I0320 08:33:13.424584 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 20 08:33:13.425351 master-0 kubenswrapper[4041]: I0320 08:33:13.425323 4041 cpu_manager.go:225] "Starting CPU manager" policy="none"
Mar 20 08:33:13.425351 master-0 kubenswrapper[4041]: I0320 08:33:13.425348 4041 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Mar 20 08:33:13.425559 master-0 kubenswrapper[4041]: I0320 08:33:13.425371 4041 state_mem.go:36] "Initialized new in-memory state store"
Mar 20 08:33:13.430605 master-0 kubenswrapper[4041]: I0320 08:33:13.430556 4041 policy_none.go:49] "None policy: Start"
Mar 20 08:33:13.431237 master-0 kubenswrapper[4041]: I0320 08:33:13.431195 4041 memory_manager.go:170] "Starting memorymanager" policy="None"
Mar 20 08:33:13.431237 master-0 kubenswrapper[4041]: I0320 08:33:13.431234 4041 state_mem.go:35] "Initializing new in-memory state store"
Mar 20 08:33:13.470919 master-0 kubenswrapper[4041]: E0320 08:33:13.470632 4041 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 20 08:33:13.511733 master-0 kubenswrapper[4041]: I0320 08:33:13.511703 4041 manager.go:334] "Starting Device Plugin manager"
Mar 20 08:33:13.534823 master-0 kubenswrapper[4041]: I0320 08:33:13.511937 4041 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 20 08:33:13.534823 master-0 kubenswrapper[4041]: I0320 08:33:13.511957 4041 server.go:79] "Starting device plugin registration server"
Mar 20 08:33:13.534823 master-0 kubenswrapper[4041]: I0320 08:33:13.512550 4041 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 20 08:33:13.534823 master-0 kubenswrapper[4041]: I0320 08:33:13.512566 4041 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 20 08:33:13.534823 master-0 kubenswrapper[4041]: I0320 08:33:13.513032 4041 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Mar 20 08:33:13.534823 master-0 kubenswrapper[4041]: I0320 08:33:13.513134 4041 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Mar 20 08:33:13.534823 master-0 kubenswrapper[4041]: I0320 08:33:13.513145 4041 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 20 08:33:13.534823 master-0 kubenswrapper[4041]: E0320 08:33:13.514897 4041 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Mar 20 08:33:13.534823 master-0 kubenswrapper[4041]: I0320 08:33:13.531504 4041 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 20 08:33:13.534823 master-0 kubenswrapper[4041]: I0320 08:33:13.534306 4041 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 20 08:33:13.534823 master-0 kubenswrapper[4041]: I0320 08:33:13.534376 4041 status_manager.go:217] "Starting to sync pod status with apiserver"
Mar 20 08:33:13.534823 master-0 kubenswrapper[4041]: I0320 08:33:13.534404 4041 kubelet.go:2335] "Starting kubelet main sync loop"
Mar 20 08:33:13.534823 master-0 kubenswrapper[4041]: E0320 08:33:13.534580 4041 kubelet.go:2359] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Mar 20 08:33:13.535973 master-0 kubenswrapper[4041]: W0320 08:33:13.535928 4041 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 20 08:33:13.536088 master-0 kubenswrapper[4041]: E0320 08:33:13.536067 4041 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 20 08:33:13.588066 master-0 kubenswrapper[4041]: E0320 08:33:13.587948 4041 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms"
Mar 20 08:33:13.613089 master-0 kubenswrapper[4041]: I0320 08:33:13.613025 4041 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 08:33:13.614344 master-0 kubenswrapper[4041]: I0320 08:33:13.614300 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 20 08:33:13.614421 master-0 kubenswrapper[4041]: I0320 08:33:13.614355 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 20 08:33:13.614421 master-0 kubenswrapper[4041]: I0320 08:33:13.614373 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 20 08:33:13.614421 master-0 kubenswrapper[4041]: I0320 08:33:13.614415 4041 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 20 08:33:13.615363 master-0 kubenswrapper[4041]: E0320 08:33:13.615306 4041 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Mar 20 08:33:13.636088 master-0 kubenswrapper[4041]: I0320 08:33:13.636015 4041 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0-master-0","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","kube-system/bootstrap-kube-controller-manager-master-0","kube-system/bootstrap-kube-scheduler-master-0"]
Mar 20 08:33:13.636194 master-0 kubenswrapper[4041]: I0320 08:33:13.636110 4041 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 08:33:13.637405 master-0 kubenswrapper[4041]: I0320 08:33:13.637361 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 20 08:33:13.637405 master-0 kubenswrapper[4041]: I0320 08:33:13.637406 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 20 08:33:13.637565 master-0 kubenswrapper[4041]: I0320 08:33:13.637419 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 20 08:33:13.637655 master-0 kubenswrapper[4041]: I0320 08:33:13.637627 4041 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 08:33:13.637977 master-0 kubenswrapper[4041]: I0320 08:33:13.637940 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 20 08:33:13.638050 master-0 kubenswrapper[4041]: I0320 08:33:13.637999 4041 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 08:33:13.638671 master-0 kubenswrapper[4041]: I0320 08:33:13.638612 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 20 08:33:13.638671 master-0 kubenswrapper[4041]: I0320 08:33:13.638664 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 20 08:33:13.638884 master-0 kubenswrapper[4041]: I0320 08:33:13.638682 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 20 08:33:13.638884 master-0 kubenswrapper[4041]: I0320 08:33:13.638833 4041 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 08:33:13.638884 master-0 kubenswrapper[4041]: I0320 08:33:13.638853 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 20 08:33:13.638884 master-0 kubenswrapper[4041]: I0320 08:33:13.638883 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 20 08:33:13.639030 master-0 kubenswrapper[4041]: I0320 08:33:13.638901 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 20 08:33:13.639030 master-0 kubenswrapper[4041]: I0320 08:33:13.638954 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0"
Mar 20 08:33:13.639030 master-0 kubenswrapper[4041]: I0320 08:33:13.639011 4041 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 08:33:13.639799 master-0 kubenswrapper[4041]: I0320 08:33:13.639778 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 20 08:33:13.639872 master-0 kubenswrapper[4041]: I0320 08:33:13.639808 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 20 08:33:13.639872 master-0 kubenswrapper[4041]: I0320 08:33:13.639819 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 20 08:33:13.639983 master-0 kubenswrapper[4041]: I0320 08:33:13.639961 4041 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 08:33:13.640054 master-0 kubenswrapper[4041]: I0320 08:33:13.640029 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 20 08:33:13.640099 master-0 kubenswrapper[4041]: I0320 08:33:13.640064 4041 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 08:33:13.640175 master-0 kubenswrapper[4041]: I0320 08:33:13.640145 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 20 08:33:13.640175 master-0 kubenswrapper[4041]: I0320 08:33:13.640165 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 20 08:33:13.640175 master-0 kubenswrapper[4041]: I0320 08:33:13.640175 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 20 08:33:13.640700 master-0 kubenswrapper[4041]: I0320 08:33:13.640672 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 20 08:33:13.640769 master-0 kubenswrapper[4041]: I0320 08:33:13.640708 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 20 08:33:13.640769 master-0 kubenswrapper[4041]: I0320 08:33:13.640727 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 20 08:33:13.640873 master-0 kubenswrapper[4041]: I0320 08:33:13.640849 4041 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 08:33:13.640922 master-0 kubenswrapper[4041]: I0320 08:33:13.640895 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 20 08:33:13.640922 master-0 kubenswrapper[4041]: I0320 08:33:13.640912 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 20 08:33:13.640990 master-0
kubenswrapper[4041]: I0320 08:33:13.640922 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 20 08:33:13.641042 master-0 kubenswrapper[4041]: I0320 08:33:13.641020 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 20 08:33:13.641080 master-0 kubenswrapper[4041]: I0320 08:33:13.641065 4041 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 08:33:13.641970 master-0 kubenswrapper[4041]: I0320 08:33:13.641938 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 20 08:33:13.642046 master-0 kubenswrapper[4041]: I0320 08:33:13.641993 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 20 08:33:13.642046 master-0 kubenswrapper[4041]: I0320 08:33:13.641998 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 20 08:33:13.642046 master-0 kubenswrapper[4041]: I0320 08:33:13.642022 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 20 08:33:13.642046 master-0 kubenswrapper[4041]: I0320 08:33:13.642037 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 20 08:33:13.642178 master-0 kubenswrapper[4041]: I0320 08:33:13.642054 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 20 08:33:13.642239 master-0 kubenswrapper[4041]: I0320 08:33:13.642210 4041 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 20 08:33:13.642300 master-0 kubenswrapper[4041]: I0320 08:33:13.642253 4041 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 08:33:13.643276 master-0 kubenswrapper[4041]: I0320 08:33:13.643210 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 20 08:33:13.643335 master-0 kubenswrapper[4041]: I0320 08:33:13.643319 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 20 08:33:13.643373 master-0 kubenswrapper[4041]: I0320 08:33:13.643341 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 20 08:33:13.675411 master-0 kubenswrapper[4041]: I0320 08:33:13.675358 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 20 08:33:13.675411 master-0 kubenswrapper[4041]: I0320 08:33:13.675405 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 20 08:33:13.675631 master-0 kubenswrapper[4041]: I0320 08:33:13.675450 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: 
\"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 20 08:33:13.675631 master-0 kubenswrapper[4041]: I0320 08:33:13.675469 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 20 08:33:13.675631 master-0 kubenswrapper[4041]: I0320 08:33:13.675597 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 20 08:33:13.675738 master-0 kubenswrapper[4041]: I0320 08:33:13.675659 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 20 08:33:13.675738 master-0 kubenswrapper[4041]: I0320 08:33:13.675711 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 20 08:33:13.675816 master-0 kubenswrapper[4041]: I0320 08:33:13.675754 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"secrets\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 20 08:33:13.675816 master-0 kubenswrapper[4041]: I0320 08:33:13.675793 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 20 08:33:13.675888 master-0 kubenswrapper[4041]: I0320 08:33:13.675847 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-certs\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 20 08:33:13.675925 master-0 kubenswrapper[4041]: I0320 08:33:13.675889 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 20 08:33:13.675969 master-0 kubenswrapper[4041]: I0320 08:33:13.675932 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 20 08:33:13.676013 master-0 kubenswrapper[4041]: I0320 
08:33:13.675972 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 20 08:33:13.676054 master-0 kubenswrapper[4041]: I0320 08:33:13.676010 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 20 08:33:13.676093 master-0 kubenswrapper[4041]: I0320 08:33:13.676055 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 20 08:33:13.676137 master-0 kubenswrapper[4041]: I0320 08:33:13.676094 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 20 08:33:13.676137 master-0 kubenswrapper[4041]: I0320 08:33:13.676133 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-data-dir\") pod \"etcd-master-0-master-0\" (UID: 
\"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 20 08:33:13.777180 master-0 kubenswrapper[4041]: I0320 08:33:13.777133 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 20 08:33:13.777310 master-0 kubenswrapper[4041]: I0320 08:33:13.777189 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 20 08:33:13.777310 master-0 kubenswrapper[4041]: I0320 08:33:13.777284 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 20 08:33:13.777391 master-0 kubenswrapper[4041]: I0320 08:33:13.777354 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 20 08:33:13.777427 master-0 kubenswrapper[4041]: I0320 08:33:13.777380 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: 
\"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 20 08:33:13.777463 master-0 kubenswrapper[4041]: I0320 08:33:13.777423 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 20 08:33:13.777463 master-0 kubenswrapper[4041]: I0320 08:33:13.777426 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 20 08:33:13.777463 master-0 kubenswrapper[4041]: I0320 08:33:13.777385 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 20 08:33:13.777686 master-0 kubenswrapper[4041]: I0320 08:33:13.777472 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 20 08:33:13.777686 master-0 kubenswrapper[4041]: I0320 08:33:13.777500 4041 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 20 08:33:13.777686 master-0 kubenswrapper[4041]: I0320 08:33:13.777523 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 20 08:33:13.777686 master-0 kubenswrapper[4041]: I0320 08:33:13.777595 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 20 08:33:13.777686 master-0 kubenswrapper[4041]: I0320 08:33:13.777622 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 20 08:33:13.777851 master-0 kubenswrapper[4041]: I0320 08:33:13.777704 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 20 08:33:13.777851 master-0 kubenswrapper[4041]: I0320 08:33:13.777741 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 20 08:33:13.777851 master-0 kubenswrapper[4041]: I0320 08:33:13.777764 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 20 08:33:13.777851 master-0 kubenswrapper[4041]: I0320 08:33:13.777804 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 20 08:33:13.777851 master-0 kubenswrapper[4041]: I0320 08:33:13.777827 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 20 08:33:13.778026 master-0 kubenswrapper[4041]: I0320 08:33:13.777899 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 20 08:33:13.778026 master-0 kubenswrapper[4041]: I0320 08:33:13.777948 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-kube\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 20 08:33:13.778026 master-0 kubenswrapper[4041]: I0320 08:33:13.777978 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 20 08:33:13.778026 master-0 kubenswrapper[4041]: I0320 08:33:13.777985 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 20 08:33:13.778026 master-0 kubenswrapper[4041]: I0320 08:33:13.778025 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 20 08:33:13.778192 master-0 kubenswrapper[4041]: I0320 08:33:13.778033 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 20 08:33:13.778192 master-0 kubenswrapper[4041]: I0320 08:33:13.778063 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 20 08:33:13.778192 master-0 kubenswrapper[4041]: I0320 08:33:13.778077 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 20 08:33:13.778192 master-0 kubenswrapper[4041]: I0320 08:33:13.778113 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 20 08:33:13.778192 master-0 kubenswrapper[4041]: I0320 08:33:13.778124 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 20 08:33:13.778380 master-0 kubenswrapper[4041]: I0320 08:33:13.778196 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 20 08:33:13.778380 master-0 kubenswrapper[4041]: I0320 08:33:13.778206 4041 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 20 08:33:13.778380 master-0 kubenswrapper[4041]: I0320 08:33:13.778279 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 20 08:33:13.778380 master-0 kubenswrapper[4041]: I0320 08:33:13.778331 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-certs\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 20 08:33:13.778527 master-0 kubenswrapper[4041]: I0320 08:33:13.778389 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 20 08:33:13.778527 master-0 kubenswrapper[4041]: I0320 08:33:13.778391 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-certs\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 20 08:33:13.815526 master-0 kubenswrapper[4041]: I0320 08:33:13.815482 4041 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 
08:33:13.817051 master-0 kubenswrapper[4041]: I0320 08:33:13.817004 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 20 08:33:13.817113 master-0 kubenswrapper[4041]: I0320 08:33:13.817066 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 20 08:33:13.817113 master-0 kubenswrapper[4041]: I0320 08:33:13.817089 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 20 08:33:13.817179 master-0 kubenswrapper[4041]: I0320 08:33:13.817163 4041 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 20 08:33:13.818548 master-0 kubenswrapper[4041]: E0320 08:33:13.818472 4041 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Mar 20 08:33:13.981208 master-0 kubenswrapper[4041]: I0320 08:33:13.981087 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 20 08:33:13.990146 master-0 kubenswrapper[4041]: E0320 08:33:13.990077 4041 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms" Mar 20 08:33:14.008215 master-0 kubenswrapper[4041]: I0320 08:33:14.008141 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0" Mar 20 08:33:14.024116 master-0 kubenswrapper[4041]: I0320 08:33:14.024071 4041 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 20 08:33:14.035610 master-0 kubenswrapper[4041]: I0320 08:33:14.035533 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 20 08:33:14.069895 master-0 kubenswrapper[4041]: I0320 08:33:14.069815 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 20 08:33:14.219575 master-0 kubenswrapper[4041]: I0320 08:33:14.219477 4041 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 08:33:14.221133 master-0 kubenswrapper[4041]: I0320 08:33:14.221076 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 20 08:33:14.221245 master-0 kubenswrapper[4041]: I0320 08:33:14.221140 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 20 08:33:14.221245 master-0 kubenswrapper[4041]: I0320 08:33:14.221158 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 20 08:33:14.221245 master-0 kubenswrapper[4041]: I0320 08:33:14.221227 4041 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 20 08:33:14.222503 master-0 kubenswrapper[4041]: E0320 08:33:14.222430 4041 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Mar 20 08:33:14.366579 master-0 kubenswrapper[4041]: I0320 08:33:14.366479 4041 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 20 08:33:14.399932 master-0 kubenswrapper[4041]: W0320 08:33:14.399797 4041 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 20 08:33:14.399932 master-0 kubenswrapper[4041]: E0320 08:33:14.399915 4041 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 20 08:33:14.454888 master-0 kubenswrapper[4041]: W0320 08:33:14.454764 4041 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 20 08:33:14.454888 master-0 kubenswrapper[4041]: E0320 08:33:14.454884 4041 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 20 08:33:14.744359 master-0 kubenswrapper[4041]: W0320 08:33:14.744067 4041 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 20 08:33:14.744359 master-0 kubenswrapper[4041]: E0320 08:33:14.744177 4041 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 20 08:33:14.791357 master-0 kubenswrapper[4041]: E0320 08:33:14.791257 4041 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="1.6s"
Mar 20 08:33:14.930953 master-0 kubenswrapper[4041]: W0320 08:33:14.930871 4041 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 20 08:33:14.930953 master-0 kubenswrapper[4041]: E0320 08:33:14.930950 4041 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 20 08:33:15.023847 master-0 kubenswrapper[4041]: I0320 08:33:15.023661 4041 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 08:33:15.026362 master-0 kubenswrapper[4041]: I0320 08:33:15.026300 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 20 08:33:15.026362 master-0 kubenswrapper[4041]: I0320 08:33:15.026364 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 20 08:33:15.026589 master-0 kubenswrapper[4041]: I0320 08:33:15.026383 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 20 08:33:15.026589 master-0 kubenswrapper[4041]: I0320 08:33:15.026459 4041 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 20 08:33:15.027668 master-0 kubenswrapper[4041]: E0320 08:33:15.027581 4041 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Mar 20 08:33:15.031400 master-0 kubenswrapper[4041]: W0320 08:33:15.031319 4041 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1249822f86f23526277d165c0d5d3c19.slice/crio-a7302efa9940a01d2a7da47b5029499cf2581b6adb8641d99af963b893a64957 WatchSource:0}: Error finding container a7302efa9940a01d2a7da47b5029499cf2581b6adb8641d99af963b893a64957: Status 404 returned error can't find the container with id a7302efa9940a01d2a7da47b5029499cf2581b6adb8641d99af963b893a64957
Mar 20 08:33:15.032983 master-0 kubenswrapper[4041]: W0320 08:33:15.032900 4041 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc83737980b9ee109184b1d78e942cf36.slice/crio-314750eb53635940d2e5e7382cfd93fd0e5f6effe69fa93e88c8c6eaa8362332 WatchSource:0}: Error finding container 314750eb53635940d2e5e7382cfd93fd0e5f6effe69fa93e88c8c6eaa8362332: Status 404 returned error can't find the container with id 314750eb53635940d2e5e7382cfd93fd0e5f6effe69fa93e88c8c6eaa8362332
Mar 20 08:33:15.039046 master-0 kubenswrapper[4041]: I0320 08:33:15.038933 4041 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 20 08:33:15.072183 master-0 kubenswrapper[4041]: W0320 08:33:15.072112 4041 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49fac1b46a11e49501805e891baae4a9.slice/crio-89c74c8aa017803f478ccd8093ddb6ce42a0913682f0794b7a17848c918f0bd0 WatchSource:0}: Error finding container 89c74c8aa017803f478ccd8093ddb6ce42a0913682f0794b7a17848c918f0bd0: Status 404 returned error can't find the container with id 89c74c8aa017803f478ccd8093ddb6ce42a0913682f0794b7a17848c918f0bd0
Mar 20 08:33:15.102424 master-0 kubenswrapper[4041]: W0320 08:33:15.102347 4041 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46f265536aba6292ead501bc9b49f327.slice/crio-1cc2b302ee7f4974624b7ec258eb40f2f3ce6fad71036a03b3d4361e0bca7e50 WatchSource:0}: Error finding container 1cc2b302ee7f4974624b7ec258eb40f2f3ce6fad71036a03b3d4361e0bca7e50: Status 404 returned error can't find the container with id 1cc2b302ee7f4974624b7ec258eb40f2f3ce6fad71036a03b3d4361e0bca7e50
Mar 20 08:33:15.127346 master-0 kubenswrapper[4041]: W0320 08:33:15.127290 4041 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd664a6d0d2a24360dee10612610f1b59.slice/crio-1ca6b41abdff6af562839f350ede4490e65a1341fc4f1ed50c580d41768ec8c0 WatchSource:0}: Error finding container 1ca6b41abdff6af562839f350ede4490e65a1341fc4f1ed50c580d41768ec8c0: Status 404 returned error can't find the container with id 1ca6b41abdff6af562839f350ede4490e65a1341fc4f1ed50c580d41768ec8c0
Mar 20 08:33:15.366532 master-0 kubenswrapper[4041]: I0320 08:33:15.366450 4041 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 20 08:33:15.450617 master-0 kubenswrapper[4041]: I0320 08:33:15.450408 4041 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Mar 20 08:33:15.452436 master-0 kubenswrapper[4041]: E0320 08:33:15.452317 4041 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 20 08:33:15.543643 master-0 kubenswrapper[4041]: I0320 08:33:15.543468 4041 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"1cc2b302ee7f4974624b7ec258eb40f2f3ce6fad71036a03b3d4361e0bca7e50"}
Mar 20 08:33:15.545046 master-0 kubenswrapper[4041]: I0320 08:33:15.544977 4041 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"49fac1b46a11e49501805e891baae4a9","Type":"ContainerStarted","Data":"89c74c8aa017803f478ccd8093ddb6ce42a0913682f0794b7a17848c918f0bd0"}
Mar 20 08:33:15.546625 master-0 kubenswrapper[4041]: I0320 08:33:15.546583 4041 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerStarted","Data":"a7302efa9940a01d2a7da47b5029499cf2581b6adb8641d99af963b893a64957"}
Mar 20 08:33:15.547984 master-0 kubenswrapper[4041]: I0320 08:33:15.547927 4041 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"c83737980b9ee109184b1d78e942cf36","Type":"ContainerStarted","Data":"314750eb53635940d2e5e7382cfd93fd0e5f6effe69fa93e88c8c6eaa8362332"}
Mar 20 08:33:15.549074 master-0 kubenswrapper[4041]: I0320 08:33:15.549008 4041 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"d664a6d0d2a24360dee10612610f1b59","Type":"ContainerStarted","Data":"1ca6b41abdff6af562839f350ede4490e65a1341fc4f1ed50c580d41768ec8c0"}
Mar 20 08:33:16.305529 master-0 kubenswrapper[4041]: W0320 08:33:16.305484 4041 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 20 08:33:16.305740 master-0 kubenswrapper[4041]: E0320 08:33:16.305540 4041 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 20 08:33:16.366124 master-0 kubenswrapper[4041]: I0320 08:33:16.366069 4041 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 20 08:33:16.392378 master-0 kubenswrapper[4041]: E0320 08:33:16.392305 4041 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="3.2s"
Mar 20 08:33:16.627773 master-0 kubenswrapper[4041]: I0320 08:33:16.627714 4041 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 08:33:16.628731 master-0 kubenswrapper[4041]: I0320 08:33:16.628696 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 20 08:33:16.628731 master-0 kubenswrapper[4041]: I0320 08:33:16.628732 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 20 08:33:16.628824 master-0 kubenswrapper[4041]: I0320 08:33:16.628741 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 20 08:33:16.628824 master-0 kubenswrapper[4041]: I0320 08:33:16.628781 4041 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 20 08:33:16.629715 master-0 kubenswrapper[4041]: E0320 08:33:16.629651 4041 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Mar 20 08:33:16.800538 master-0 kubenswrapper[4041]: E0320 08:33:16.800405 4041 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/default/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{master-0.189e7f97d77ed152 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 08:33:13.362755922 +0000 UTC m=+0.613101467,LastTimestamp:2026-03-20 08:33:13.362755922 +0000 UTC m=+0.613101467,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 20 08:33:17.111853 master-0 kubenswrapper[4041]: W0320 08:33:17.111758 4041 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 20 08:33:17.112036 master-0 kubenswrapper[4041]: E0320 08:33:17.111857 4041 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 20 08:33:17.298174 master-0 kubenswrapper[4041]: W0320 08:33:17.298016 4041 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 20 08:33:17.298174 master-0 kubenswrapper[4041]: E0320 08:33:17.298119 4041 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 20 08:33:17.315406 master-0 kubenswrapper[4041]: W0320 08:33:17.315327 4041 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 20 08:33:17.315513 master-0 kubenswrapper[4041]: E0320 08:33:17.315421 4041 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 20 08:33:17.365723 master-0 kubenswrapper[4041]: I0320 08:33:17.365686 4041 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 20 08:33:17.556007 master-0 kubenswrapper[4041]: I0320 08:33:17.555894 4041 generic.go:334] "Generic (PLEG): container finished" podID="1249822f86f23526277d165c0d5d3c19" containerID="f46237550fb6588ccbb218d4b52be58120b3dd1d98e107a7ca8477306baad5dd" exitCode=0
Mar 20 08:33:17.556007 master-0 kubenswrapper[4041]: I0320 08:33:17.555950 4041 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerDied","Data":"f46237550fb6588ccbb218d4b52be58120b3dd1d98e107a7ca8477306baad5dd"}
Mar 20 08:33:17.556007 master-0 kubenswrapper[4041]: I0320 08:33:17.555987 4041 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 08:33:17.557854 master-0 kubenswrapper[4041]: I0320 08:33:17.557309 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 20 08:33:17.557854 master-0 kubenswrapper[4041]: I0320 08:33:17.557344 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 20 08:33:17.557854 master-0 kubenswrapper[4041]: I0320 08:33:17.557355 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 20 08:33:18.366646 master-0 kubenswrapper[4041]: I0320 08:33:18.366606 4041 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 20 08:33:18.560326 master-0 kubenswrapper[4041]: I0320 08:33:18.560235 4041 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"d664a6d0d2a24360dee10612610f1b59","Type":"ContainerStarted","Data":"bf19448fe2db422f2021f6a9801b4117923acb1b2003982f366081b4de585441"}
Mar 20 08:33:18.560326 master-0 kubenswrapper[4041]: I0320 08:33:18.560304 4041 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"d664a6d0d2a24360dee10612610f1b59","Type":"ContainerStarted","Data":"d7f4830141ed7d49d20e31769c038ca8340ad71b0bddea39298dca3d6416b345"}
Mar 20 08:33:18.560540 master-0 kubenswrapper[4041]: I0320 08:33:18.560355 4041 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 08:33:18.561443 master-0 kubenswrapper[4041]: I0320 08:33:18.561020 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 20 08:33:18.561443 master-0 kubenswrapper[4041]: I0320 08:33:18.561060 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 20 08:33:18.561443 master-0 kubenswrapper[4041]: I0320 08:33:18.561068 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 20 08:33:18.563977 master-0 kubenswrapper[4041]: I0320 08:33:18.563913 4041 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/0.log"
Mar 20 08:33:18.564486 master-0 kubenswrapper[4041]: I0320 08:33:18.564461 4041 generic.go:334] "Generic (PLEG): container finished" podID="1249822f86f23526277d165c0d5d3c19" containerID="0a7050cc67496a51e9fa301124c2bfc559a5d573e02e522486a3787f32ef4a96" exitCode=1
Mar 20 08:33:18.564567 master-0 kubenswrapper[4041]: I0320 08:33:18.564488 4041 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerDied","Data":"0a7050cc67496a51e9fa301124c2bfc559a5d573e02e522486a3787f32ef4a96"}
Mar 20 08:33:18.564567 master-0 kubenswrapper[4041]: I0320 08:33:18.564541 4041 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 08:33:18.565295 master-0 kubenswrapper[4041]: I0320 08:33:18.565243 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 20 08:33:18.565295 master-0 kubenswrapper[4041]: I0320 08:33:18.565280 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 20 08:33:18.565295 master-0 kubenswrapper[4041]: I0320 08:33:18.565291 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 20 08:33:18.565533 master-0 kubenswrapper[4041]: I0320 08:33:18.565514 4041 scope.go:117] "RemoveContainer" containerID="0a7050cc67496a51e9fa301124c2bfc559a5d573e02e522486a3787f32ef4a96"
Mar 20 08:33:19.366064 master-0 kubenswrapper[4041]: I0320 08:33:19.366012 4041 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 20 08:33:19.457678 master-0 kubenswrapper[4041]: I0320 08:33:19.457609 4041 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Mar 20 08:33:19.458940 master-0 kubenswrapper[4041]: E0320 08:33:19.458838 4041 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 20 08:33:19.568439 master-0 kubenswrapper[4041]: I0320 08:33:19.568403 4041 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/1.log"
Mar 20 08:33:19.568810 master-0 kubenswrapper[4041]: I0320 08:33:19.568785 4041 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/0.log"
Mar 20 08:33:19.569624 master-0 kubenswrapper[4041]: I0320 08:33:19.569138 4041 generic.go:334] "Generic (PLEG): container finished" podID="1249822f86f23526277d165c0d5d3c19" containerID="a6df4568a5f97beeb663f1ff1945ac9d3c4c5e1ff252f4e472ff1399d4059194" exitCode=1
Mar 20 08:33:19.569624 master-0 kubenswrapper[4041]: I0320 08:33:19.569211 4041 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerDied","Data":"a6df4568a5f97beeb663f1ff1945ac9d3c4c5e1ff252f4e472ff1399d4059194"}
Mar 20 08:33:19.569624 master-0 kubenswrapper[4041]: I0320 08:33:19.569280 4041 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 08:33:19.569624 master-0 kubenswrapper[4041]: I0320 08:33:19.569284 4041 scope.go:117] "RemoveContainer" containerID="0a7050cc67496a51e9fa301124c2bfc559a5d573e02e522486a3787f32ef4a96"
Mar 20 08:33:19.569624 master-0 kubenswrapper[4041]: I0320 08:33:19.569223 4041 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 08:33:19.571832 master-0 kubenswrapper[4041]: I0320 08:33:19.571196 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 20 08:33:19.571832 master-0 kubenswrapper[4041]: I0320 08:33:19.571223 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 20 08:33:19.571832 master-0 kubenswrapper[4041]: I0320 08:33:19.571241 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 20 08:33:19.571832 master-0 kubenswrapper[4041]: I0320 08:33:19.571367 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 20 08:33:19.571832 master-0 kubenswrapper[4041]: I0320 08:33:19.571387 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 20 08:33:19.571832 master-0 kubenswrapper[4041]: I0320 08:33:19.571394 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 20 08:33:19.571832 master-0 kubenswrapper[4041]: I0320 08:33:19.571647 4041 scope.go:117] "RemoveContainer" containerID="a6df4568a5f97beeb663f1ff1945ac9d3c4c5e1ff252f4e472ff1399d4059194"
Mar 20 08:33:19.571832 master-0 kubenswrapper[4041]: E0320 08:33:19.571823 4041 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(1249822f86f23526277d165c0d5d3c19)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="1249822f86f23526277d165c0d5d3c19"
Mar 20 08:33:19.593433 master-0 kubenswrapper[4041]: E0320 08:33:19.593370 4041 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="6.4s"
Mar 20 08:33:19.830733 master-0 kubenswrapper[4041]: I0320 08:33:19.830675 4041 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 08:33:19.831837 master-0 kubenswrapper[4041]: I0320 08:33:19.831720 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 20 08:33:19.831837 master-0 kubenswrapper[4041]: I0320 08:33:19.831755 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 20 08:33:19.831837 master-0 kubenswrapper[4041]: I0320 08:33:19.831764 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 20 08:33:19.831837 master-0 kubenswrapper[4041]: I0320 08:33:19.831799 4041 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 20 08:33:19.832465 master-0 kubenswrapper[4041]: E0320 08:33:19.832427 4041 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Mar 20 08:33:20.117534 master-0 kubenswrapper[4041]: W0320 08:33:20.117425 4041 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 20 08:33:20.117534 master-0 kubenswrapper[4041]: E0320 08:33:20.117507 4041 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 20 08:33:20.366173 master-0 kubenswrapper[4041]: I0320 08:33:20.366118 4041 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 20 08:33:20.571590 master-0 kubenswrapper[4041]: I0320 08:33:20.571110 4041 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 08:33:20.572888 master-0 kubenswrapper[4041]: I0320 08:33:20.572459 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 20 08:33:20.572888 master-0 kubenswrapper[4041]: I0320 08:33:20.572489 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 20 08:33:20.572888 master-0 kubenswrapper[4041]: I0320 08:33:20.572498 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 20 08:33:20.572888 master-0 kubenswrapper[4041]: I0320 08:33:20.572727 4041 scope.go:117] "RemoveContainer" containerID="a6df4568a5f97beeb663f1ff1945ac9d3c4c5e1ff252f4e472ff1399d4059194"
Mar 20 08:33:20.572888 master-0 kubenswrapper[4041]: E0320 08:33:20.572846 4041 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(1249822f86f23526277d165c0d5d3c19)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="1249822f86f23526277d165c0d5d3c19"
Mar 20 08:33:21.155595 master-0 kubenswrapper[4041]: W0320 08:33:21.155447 4041 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 20 08:33:21.155595 master-0 kubenswrapper[4041]: E0320 08:33:21.155522 4041 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 20 08:33:21.366194 master-0 kubenswrapper[4041]: I0320 08:33:21.366133 4041 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 20 08:33:22.211575 master-0 kubenswrapper[4041]: W0320 08:33:22.211457 4041 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 20 08:33:22.211575 master-0 kubenswrapper[4041]: E0320 08:33:22.211551 4041 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 20 08:33:22.367205 master-0 kubenswrapper[4041]: I0320 08:33:22.367091 4041 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 20 08:33:22.449401 master-0 kubenswrapper[4041]: W0320 08:33:22.449299 4041 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 20 08:33:22.449598 master-0 kubenswrapper[4041]: E0320 08:33:22.449432 4041 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 20 08:33:23.367002 master-0 kubenswrapper[4041]: I0320 08:33:23.366894 4041 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 20 08:33:23.515186 master-0 kubenswrapper[4041]: E0320 08:33:23.515104 4041 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Mar 20 08:33:23.579975 master-0 kubenswrapper[4041]: I0320 08:33:23.579870 4041 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/1.log"
Mar 20 08:33:24.366636 master-0 kubenswrapper[4041]: I0320 08:33:24.366583 4041 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 20 08:33:24.584870 master-0 kubenswrapper[4041]: I0320 08:33:24.584790 4041 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"9ab09f622201f872e04a1e8a769261f4a46a4d60637dffa9e2a3458905508cd2"}
Mar 20 08:33:24.586824 master-0 kubenswrapper[4041]: I0320 08:33:24.586760 4041 generic.go:334] "Generic (PLEG): container finished" podID="49fac1b46a11e49501805e891baae4a9" containerID="505f9b181b16cda62a4db44fc15ff68ecb1cc5d03f0c388d0ae9c508776f0bb8" exitCode=0
Mar 20 08:33:24.586824 master-0 kubenswrapper[4041]: I0320 08:33:24.586815 4041 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"49fac1b46a11e49501805e891baae4a9","Type":"ContainerDied","Data":"505f9b181b16cda62a4db44fc15ff68ecb1cc5d03f0c388d0ae9c508776f0bb8"}
Mar 20 08:33:24.587059 master-0 kubenswrapper[4041]: I0320 08:33:24.586935 4041 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 08:33:24.588047 master-0 kubenswrapper[4041]: I0320 08:33:24.588007 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 20 08:33:24.588117 master-0 kubenswrapper[4041]: I0320 08:33:24.588047 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 20 08:33:24.588117 master-0 kubenswrapper[4041]: I0320 08:33:24.588064 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 20 08:33:24.589434 master-0 kubenswrapper[4041]: I0320 08:33:24.589397 4041 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 08:33:24.589576 master-0 kubenswrapper[4041]: I0320 08:33:24.589372 4041 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"c83737980b9ee109184b1d78e942cf36","Type":"ContainerStarted","Data":"f3cf6c6c759bc79e0c49a7c2679b7d5ff1593a53a6783b3355ac6464233ad33d"}
Mar 20 08:33:24.590086 master-0 kubenswrapper[4041]: I0320 08:33:24.590034 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 20 08:33:24.590159 master-0 kubenswrapper[4041]: I0320 08:33:24.590104 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 20 08:33:24.590159 master-0 kubenswrapper[4041]: I0320 08:33:24.590129 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 20 08:33:24.592302 master-0 kubenswrapper[4041]: I0320 08:33:24.592251 4041 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 08:33:24.592853 master-0 kubenswrapper[4041]: I0320 08:33:24.592802 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 20 08:33:24.592918 master-0 kubenswrapper[4041]: I0320 08:33:24.592860 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 20 08:33:24.592918 master-0 kubenswrapper[4041]: I0320 08:33:24.592879 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 20 08:33:25.596390 master-0 kubenswrapper[4041]: I0320 08:33:25.596010 4041 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"49fac1b46a11e49501805e891baae4a9","Type":"ContainerStarted","Data":"996fe2d1381bcbb6636ef59ecc1b88b6322ed1e9bf66452efbeee5d66a74da5c"}
Mar 20 08:33:25.596390 master-0 kubenswrapper[4041]: I0320 08:33:25.596049 4041 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 08:33:25.597325 master-0 kubenswrapper[4041]: I0320 08:33:25.597053 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 20 08:33:25.597325 master-0 kubenswrapper[4041]: I0320 08:33:25.597083 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 20 08:33:25.597325 master-0 kubenswrapper[4041]: I0320 08:33:25.597094 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 20 08:33:26.232646 master-0 kubenswrapper[4041]: I0320 08:33:26.232513 4041 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 08:33:26.234447 master-0 kubenswrapper[4041]: I0320 08:33:26.234395 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 20 08:33:26.234447 master-0 kubenswrapper[4041]: I0320 08:33:26.234448 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 20 08:33:26.234630 master-0 kubenswrapper[4041]: I0320 08:33:26.234476 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 20 08:33:26.234630 master-0 kubenswrapper[4041]: I0320 08:33:26.234572 4041 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 20 08:33:26.615717 master-0 kubenswrapper[4041]: I0320 08:33:26.615648 4041 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is
forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 20 08:33:26.621190 master-0 kubenswrapper[4041]: E0320 08:33:26.620540 4041 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0" Mar 20 08:33:26.621190 master-0 kubenswrapper[4041]: E0320 08:33:26.620603 4041 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 20 08:33:26.743306 master-0 kubenswrapper[4041]: W0320 08:33:26.743254 4041 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "master-0" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Mar 20 08:33:26.743495 master-0 kubenswrapper[4041]: E0320 08:33:26.743324 4041 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"master-0\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Mar 20 08:33:26.825914 master-0 kubenswrapper[4041]: E0320 08:33:26.816597 4041 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e7f97d77ed152 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 08:33:13.362755922 +0000 UTC m=+0.613101467,LastTimestamp:2026-03-20 08:33:13.362755922 +0000 UTC m=+0.613101467,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 20 08:33:26.828634 master-0 kubenswrapper[4041]: E0320 08:33:26.828506 4041 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e7f97db2de529 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 08:33:13.424561449 +0000 UTC m=+0.674906954,LastTimestamp:2026-03-20 08:33:13.424561449 +0000 UTC m=+0.674906954,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 20 08:33:26.838333 master-0 kubenswrapper[4041]: E0320 08:33:26.834571 4041 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e7f97db2e2f6b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 08:33:13.424580459 +0000 UTC 
m=+0.674925964,LastTimestamp:2026-03-20 08:33:13.424580459 +0000 UTC m=+0.674925964,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 20 08:33:26.860559 master-0 kubenswrapper[4041]: E0320 08:33:26.859508 4041 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e7f97db2e5635 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 08:33:13.424590389 +0000 UTC m=+0.674935894,LastTimestamp:2026-03-20 08:33:13.424590389 +0000 UTC m=+0.674935894,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 20 08:33:26.871929 master-0 kubenswrapper[4041]: E0320 08:33:26.870022 4041 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e7f97e098327b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 08:33:13.515414139 +0000 UTC m=+0.765759644,LastTimestamp:2026-03-20 08:33:13.515414139 +0000 UTC m=+0.765759644,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 20 08:33:26.876615 master-0 kubenswrapper[4041]: E0320 08:33:26.876519 4041 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189e7f97db2de529\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e7f97db2de529 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 08:33:13.424561449 +0000 UTC m=+0.674906954,LastTimestamp:2026-03-20 08:33:13.614333757 +0000 UTC m=+0.864679302,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 20 08:33:26.882000 master-0 kubenswrapper[4041]: E0320 08:33:26.881918 4041 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189e7f97db2e2f6b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e7f97db2e2f6b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 08:33:13.424580459 +0000 UTC m=+0.674925964,LastTimestamp:2026-03-20 08:33:13.614365908 +0000 UTC m=+0.864711443,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 20 08:33:26.885760 master-0 kubenswrapper[4041]: E0320 08:33:26.885657 4041 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189e7f97db2e5635\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e7f97db2e5635 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 08:33:13.424590389 +0000 UTC m=+0.674935894,LastTimestamp:2026-03-20 08:33:13.614382888 +0000 UTC m=+0.864728423,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 20 08:33:26.890494 master-0 kubenswrapper[4041]: E0320 08:33:26.890408 4041 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189e7f97db2de529\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e7f97db2de529 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 08:33:13.424561449 +0000 UTC m=+0.674906954,LastTimestamp:2026-03-20 08:33:13.637392575 +0000 UTC m=+0.887738090,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 20 08:33:26.894709 master-0 kubenswrapper[4041]: E0320 08:33:26.894599 4041 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189e7f97db2e2f6b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e7f97db2e2f6b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 08:33:13.424580459 +0000 UTC m=+0.674925964,LastTimestamp:2026-03-20 08:33:13.637414546 +0000 UTC m=+0.887760061,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 20 08:33:26.898771 master-0 kubenswrapper[4041]: E0320 08:33:26.898691 4041 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189e7f97db2e5635\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e7f97db2e5635 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 08:33:13.424590389 +0000 UTC m=+0.674935894,LastTimestamp:2026-03-20 08:33:13.637427396 +0000 UTC m=+0.887772911,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 20 08:33:26.902086 master-0 kubenswrapper[4041]: E0320 08:33:26.902017 4041 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189e7f97db2de529\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e7f97db2de529 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 08:33:13.424561449 +0000 UTC m=+0.674906954,LastTimestamp:2026-03-20 08:33:13.638643896 +0000 UTC m=+0.888989431,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 20 08:33:26.906390 master-0 kubenswrapper[4041]: E0320 08:33:26.906333 4041 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189e7f97db2e2f6b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e7f97db2e2f6b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 08:33:13.424580459 +0000 UTC m=+0.674925964,LastTimestamp:2026-03-20 08:33:13.638675646 +0000 UTC m=+0.889021181,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 20 08:33:26.910459 master-0 kubenswrapper[4041]: E0320 08:33:26.910373 4041 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189e7f97db2e5635\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e7f97db2e5635 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 08:33:13.424590389 +0000 UTC m=+0.674935894,LastTimestamp:2026-03-20 08:33:13.638691807 +0000 UTC m=+0.889037342,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 20 08:33:26.914893 master-0 kubenswrapper[4041]: E0320 08:33:26.914824 4041 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189e7f97db2de529\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e7f97db2de529 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 08:33:13.424561449 +0000 UTC m=+0.674906954,LastTimestamp:2026-03-20 08:33:13.638875229 +0000 UTC m=+0.889220784,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 20 08:33:26.918142 master-0 kubenswrapper[4041]: E0320 08:33:26.918068 4041 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189e7f97db2e2f6b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e7f97db2e2f6b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 08:33:13.424580459 +0000 UTC m=+0.674925964,LastTimestamp:2026-03-20 08:33:13.63889475 +0000 UTC m=+0.889240295,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 20 08:33:26.922127 master-0 kubenswrapper[4041]: E0320 08:33:26.922045 4041 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189e7f97db2e5635\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e7f97db2e5635 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 08:33:13.424590389 +0000 UTC m=+0.674935894,LastTimestamp:2026-03-20 08:33:13.6389115 +0000 UTC m=+0.889257045,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 20 08:33:26.925522 master-0 kubenswrapper[4041]: E0320 08:33:26.925432 4041 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189e7f97db2de529\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e7f97db2de529 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 08:33:13.424561449 +0000 UTC m=+0.674906954,LastTimestamp:2026-03-20 08:33:13.639795684 +0000 UTC m=+0.890141209,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 20 08:33:26.929339 master-0 kubenswrapper[4041]: E0320 08:33:26.929258 4041 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189e7f97db2e2f6b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e7f97db2e2f6b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 08:33:13.424580459 +0000 UTC m=+0.674925964,LastTimestamp:2026-03-20 08:33:13.639815224 +0000 UTC m=+0.890160739,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 20 08:33:26.933070 master-0 kubenswrapper[4041]: E0320 08:33:26.932984 4041 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189e7f97db2e5635\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e7f97db2e5635 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 08:33:13.424590389 +0000 UTC m=+0.674935894,LastTimestamp:2026-03-20 08:33:13.639852945 +0000 UTC m=+0.890198460,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 20 08:33:26.959982 master-0 kubenswrapper[4041]: E0320 08:33:26.959850 4041 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189e7f97db2de529\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e7f97db2de529 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 08:33:13.424561449 +0000 UTC m=+0.674906954,LastTimestamp:2026-03-20 08:33:13.640155849 +0000 UTC m=+0.890501364,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 20 08:33:26.964501 master-0 kubenswrapper[4041]: E0320 08:33:26.964410 4041 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189e7f97db2e2f6b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e7f97db2e2f6b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 08:33:13.424580459 +0000 UTC m=+0.674925964,LastTimestamp:2026-03-20 08:33:13.64017179 +0000 UTC m=+0.890517305,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 20 08:33:26.968297 master-0 kubenswrapper[4041]: E0320 08:33:26.968172 4041 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189e7f97db2e5635\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e7f97db2e5635 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 08:33:13.424590389 +0000 UTC m=+0.674935894,LastTimestamp:2026-03-20 08:33:13.6401815 +0000 UTC m=+0.890527015,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 20 08:33:26.973190 master-0 kubenswrapper[4041]: E0320 08:33:26.973102 4041 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189e7f97db2de529\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e7f97db2de529 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 08:33:13.424561449 +0000 UTC m=+0.674906954,LastTimestamp:2026-03-20 08:33:13.640699219 +0000 UTC m=+0.891044774,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 20 08:33:26.976889 master-0 kubenswrapper[4041]: E0320 08:33:26.976815 4041 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189e7f97db2e2f6b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189e7f97db2e2f6b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 08:33:13.424580459 +0000 UTC m=+0.674925964,LastTimestamp:2026-03-20 08:33:13.640720399 +0000 UTC m=+0.891065944,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 20 08:33:26.981853 master-0 kubenswrapper[4041]: E0320 08:33:26.981760 4041 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189e7f983b65fb46 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 08:33:15.038849862 +0000 UTC m=+2.289195407,LastTimestamp:2026-03-20 08:33:15.038849862 +0000 UTC m=+2.289195407,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 20 08:33:26.987368 master-0 kubenswrapper[4041]: E0320 08:33:26.987121 4041 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189e7f983b666252 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:c83737980b9ee109184b1d78e942cf36,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulling,Message:Pulling image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 08:33:15.038876242 +0000 UTC m=+2.289221777,LastTimestamp:2026-03-20 08:33:15.038876242 +0000 UTC m=+2.289221777,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 20 08:33:26.991243 master-0 kubenswrapper[4041]: E0320 08:33:26.991133 4041 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189e7f983deafd48 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 08:33:15.081121096 +0000 UTC m=+2.331466631,LastTimestamp:2026-03-20 08:33:15.081121096 +0000 UTC m=+2.331466631,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 20 08:33:26.995155 master-0 kubenswrapper[4041]: E0320 08:33:26.995059 4041 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189e7f983f9d654e 
kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 08:33:15.10959035 +0000 UTC m=+2.359935895,LastTimestamp:2026-03-20 08:33:15.10959035 +0000 UTC m=+2.359935895,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 20 08:33:26.998652 master-0 kubenswrapper[4041]: E0320 08:33:26.998580 4041 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189e7f9840da8fe2 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:d664a6d0d2a24360dee10612610f1b59,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e29dc9f042f2d0471171a0611070886cb2f7c57338ab7f112613417bcd33b278\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 08:33:15.130376162 +0000 UTC m=+2.380721697,LastTimestamp:2026-03-20 08:33:15.130376162 +0000 UTC m=+2.380721697,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 20 08:33:27.002929 master-0 kubenswrapper[4041]: E0320 08:33:27.002823 4041 
event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189e7f98a4d87c49 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed\" in 1.769s (1.769s including waiting). Image size: 465090934 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 08:33:16.807961673 +0000 UTC m=+4.058307178,LastTimestamp:2026-03-20 08:33:16.807961673 +0000 UTC m=+4.058307178,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 20 08:33:27.006102 master-0 kubenswrapper[4041]: E0320 08:33:27.006018 4041 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189e7f98af15e1aa openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 
08:33:16.979757482 +0000 UTC m=+4.230102997,LastTimestamp:2026-03-20 08:33:16.979757482 +0000 UTC m=+4.230102997,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 20 08:33:27.008985 master-0 kubenswrapper[4041]: E0320 08:33:27.008893 4041 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189e7f98b00de63d openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 08:33:16.996011581 +0000 UTC m=+4.246357086,LastTimestamp:2026-03-20 08:33:16.996011581 +0000 UTC m=+4.246357086,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 20 08:33:27.012412 master-0 kubenswrapper[4041]: E0320 08:33:27.012313 4041 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189e7f98e05c1ced openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 08:33:17.806443757 +0000 UTC m=+5.056789262,LastTimestamp:2026-03-20 08:33:17.806443757 +0000 UTC m=+5.056789262,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 20 08:33:27.016212 master-0 kubenswrapper[4041]: E0320 08:33:27.016065 4041 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189e7f98e3025e43 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:d664a6d0d2a24360dee10612610f1b59,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e29dc9f042f2d0471171a0611070886cb2f7c57338ab7f112613417bcd33b278\" in 2.72s (2.72s including waiting). 
Image size: 529326739 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 08:33:17.850893891 +0000 UTC m=+5.101239396,LastTimestamp:2026-03-20 08:33:17.850893891 +0000 UTC m=+5.101239396,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 20 08:33:27.020472 master-0 kubenswrapper[4041]: E0320 08:33:27.020371 4041 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189e7f98ea0b8182 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 08:33:17.96893325 +0000 UTC m=+5.219278755,LastTimestamp:2026-03-20 08:33:17.96893325 +0000 UTC m=+5.219278755,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 20 08:33:27.026408 master-0 kubenswrapper[4041]: E0320 08:33:27.026014 4041 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189e7f98eb837fbe openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 08:33:17.993574334 +0000 UTC m=+5.243919839,LastTimestamp:2026-03-20 08:33:17.993574334 +0000 UTC m=+5.243919839,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 20 08:33:27.030233 master-0 kubenswrapper[4041]: E0320 08:33:27.030099 4041 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189e7f98ee1dac58 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:d664a6d0d2a24360dee10612610f1b59,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container: etcdctl,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 08:33:18.037232728 +0000 UTC m=+5.287578233,LastTimestamp:2026-03-20 08:33:18.037232728 +0000 UTC m=+5.287578233,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 20 08:33:27.033981 master-0 kubenswrapper[4041]: E0320 08:33:27.033893 4041 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189e7f98ef115620 
openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:d664a6d0d2a24360dee10612610f1b59,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 08:33:18.05320144 +0000 UTC m=+5.303546945,LastTimestamp:2026-03-20 08:33:18.05320144 +0000 UTC m=+5.303546945,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 20 08:33:27.038446 master-0 kubenswrapper[4041]: E0320 08:33:27.038360 4041 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189e7f98efa21b50 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:d664a6d0d2a24360dee10612610f1b59,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e29dc9f042f2d0471171a0611070886cb2f7c57338ab7f112613417bcd33b278\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 08:33:18.062689104 +0000 UTC m=+5.313034609,LastTimestamp:2026-03-20 08:33:18.062689104 +0000 UTC m=+5.313034609,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 20 08:33:27.042991 master-0 kubenswrapper[4041]: E0320 08:33:27.042869 4041 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create 
resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189e7f98f9b3c4b6 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:d664a6d0d2a24360dee10612610f1b59,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container: etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 08:33:18.231618742 +0000 UTC m=+5.481964247,LastTimestamp:2026-03-20 08:33:18.231618742 +0000 UTC m=+5.481964247,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 20 08:33:27.060767 master-0 kubenswrapper[4041]: E0320 08:33:27.060512 4041 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189e7f98fa5c7d77 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:d664a6d0d2a24360dee10612610f1b59,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 08:33:18.242676087 +0000 UTC m=+5.493021592,LastTimestamp:2026-03-20 08:33:18.242676087 +0000 UTC m=+5.493021592,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 20 08:33:27.069541 master-0 kubenswrapper[4041]: E0320 08:33:27.069422 4041 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189e7f98e05c1ced\" is forbidden: 
User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189e7f98e05c1ced openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 08:33:17.806443757 +0000 UTC m=+5.056789262,LastTimestamp:2026-03-20 08:33:18.56866638 +0000 UTC m=+5.819011895,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 20 08:33:27.074332 master-0 kubenswrapper[4041]: E0320 08:33:27.074171 4041 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189e7f98ea0b8182\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189e7f98ea0b8182 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 08:33:17.96893325 +0000 UTC 
m=+5.219278755,LastTimestamp:2026-03-20 08:33:18.743132661 +0000 UTC m=+5.993478186,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 20 08:33:27.080024 master-0 kubenswrapper[4041]: E0320 08:33:27.079819 4041 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189e7f98eb837fbe\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189e7f98eb837fbe openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 08:33:17.993574334 +0000 UTC m=+5.243919839,LastTimestamp:2026-03-20 08:33:18.760098768 +0000 UTC m=+6.010444273,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 20 08:33:27.085354 master-0 kubenswrapper[4041]: E0320 08:33:27.085174 4041 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189e7f99499507da openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(1249822f86f23526277d165c0d5d3c19),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 08:33:19.571781594 +0000 UTC m=+6.822127099,LastTimestamp:2026-03-20 08:33:19.571781594 +0000 UTC m=+6.822127099,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 20 08:33:27.089822 master-0 kubenswrapper[4041]: E0320 08:33:27.089748 4041 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189e7f99499507da\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189e7f99499507da openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(1249822f86f23526277d165c0d5d3c19),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 08:33:19.571781594 +0000 UTC m=+6.822127099,LastTimestamp:2026-03-20 08:33:20.572821405 +0000 UTC m=+7.823166910,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 20 08:33:27.093635 master-0 kubenswrapper[4041]: E0320 08:33:27.093534 4041 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189e7f9a52e759d4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\" in 8.941s (8.941s including waiting). Image size: 943841779 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 08:33:24.023138772 +0000 UTC m=+11.273484277,LastTimestamp:2026-03-20 08:33:24.023138772 +0000 UTC m=+11.273484277,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 20 08:33:27.099859 master-0 kubenswrapper[4041]: E0320 08:33:27.099722 4041 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189e7f9a5e11e61f kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:c83737980b9ee109184b1d78e942cf36,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\" in 9.171s (9.171s including waiting). Image size: 943841779 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 08:33:24.210476575 +0000 UTC m=+11.460822120,LastTimestamp:2026-03-20 08:33:24.210476575 +0000 UTC m=+11.460822120,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 20 08:33:27.104323 master-0 kubenswrapper[4041]: E0320 08:33:27.104165 4041 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189e7f9a623e45c3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 08:33:24.280493507 +0000 UTC m=+11.530839022,LastTimestamp:2026-03-20 08:33:24.280493507 +0000 UTC m=+11.530839022,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 20 08:33:27.110557 master-0 kubenswrapper[4041]: E0320 08:33:27.110445 4041 event.go:359] "Server rejected event (will not retry!)" 
err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189e7f9a631ec156 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 08:33:24.295205206 +0000 UTC m=+11.545550721,LastTimestamp:2026-03-20 08:33:24.295205206 +0000 UTC m=+11.545550721,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 20 08:33:27.115896 master-0 kubenswrapper[4041]: E0320 08:33:27.115710 4041 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189e7f9a65d519e1 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\" in 9.231s (9.231s including waiting). 
Image size: 943841779 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 08:33:24.340709857 +0000 UTC m=+11.591055372,LastTimestamp:2026-03-20 08:33:24.340709857 +0000 UTC m=+11.591055372,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 20 08:33:27.120730 master-0 kubenswrapper[4041]: E0320 08:33:27.120598 4041 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189e7f9a6b89a711 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:c83737980b9ee109184b1d78e942cf36,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 08:33:24.436428561 +0000 UTC m=+11.686774066,LastTimestamp:2026-03-20 08:33:24.436428561 +0000 UTC m=+11.686774066,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 20 08:33:27.124675 master-0 kubenswrapper[4041]: E0320 08:33:27.124536 4041 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189e7f9a6c4489a1 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:c83737980b9ee109184b1d78e942cf36,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 08:33:24.448676257 +0000 UTC m=+11.699021772,LastTimestamp:2026-03-20 08:33:24.448676257 +0000 UTC m=+11.699021772,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 20 08:33:27.131304 master-0 kubenswrapper[4041]: E0320 08:33:27.129197 4041 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189e7f9a71645e78 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 08:33:24.53464844 +0000 UTC m=+11.784993945,LastTimestamp:2026-03-20 08:33:24.53464844 +0000 UTC m=+11.784993945,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 20 08:33:27.135659 master-0 kubenswrapper[4041]: E0320 08:33:27.135560 4041 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" 
event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189e7f9a72421e88 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 08:33:24.549181064 +0000 UTC m=+11.799526569,LastTimestamp:2026-03-20 08:33:24.549181064 +0000 UTC m=+11.799526569,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 20 08:33:27.139607 master-0 kubenswrapper[4041]: E0320 08:33:27.139488 4041 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189e7f9a724c3627 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fbbcb390de2563a0177b92fba1b5a65777366e2dc80e2808b61d87c41b47a2d\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 08:33:24.549842471 +0000 UTC m=+11.800187976,LastTimestamp:2026-03-20 08:33:24.549842471 +0000 UTC m=+11.800187976,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 20 08:33:27.144300 master-0 
kubenswrapper[4041]: E0320 08:33:27.144140 4041 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189e7f9a74d29dec openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 08:33:24.592205292 +0000 UTC m=+11.842550797,LastTimestamp:2026-03-20 08:33:24.592205292 +0000 UTC m=+11.842550797,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 20 08:33:27.149090 master-0 kubenswrapper[4041]: E0320 08:33:27.148980 4041 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189e7f9a827dc005 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container: kube-apiserver,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 08:33:24.821524485 +0000 
UTC m=+12.071870010,LastTimestamp:2026-03-20 08:33:24.821524485 +0000 UTC m=+12.071870010,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 20 08:33:27.153254 master-0 kubenswrapper[4041]: E0320 08:33:27.153168 4041 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189e7f9a834f105f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 08:33:24.835242079 +0000 UTC m=+12.085587584,LastTimestamp:2026-03-20 08:33:24.835242079 +0000 UTC m=+12.085587584,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 20 08:33:27.157742 master-0 kubenswrapper[4041]: E0320 08:33:27.157688 4041 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189e7f9a835d1c3e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 08:33:24.836162622 +0000 UTC m=+12.086508127,LastTimestamp:2026-03-20 08:33:24.836162622 +0000 UTC m=+12.086508127,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 20 08:33:27.386337 master-0 kubenswrapper[4041]: I0320 08:33:27.386218 4041 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 20 08:33:27.814384 master-0 kubenswrapper[4041]: I0320 08:33:27.814216 4041 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 20 08:33:27.829232 master-0 kubenswrapper[4041]: I0320 08:33:27.829188 4041 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Mar 20 08:33:28.369961 master-0 kubenswrapper[4041]: I0320 08:33:28.369916 4041 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 20 08:33:28.569543 master-0 kubenswrapper[4041]: E0320 08:33:28.569404 4041 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API 
group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189e7f9b61769599 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fbbcb390de2563a0177b92fba1b5a65777366e2dc80e2808b61d87c41b47a2d\" in 4.012s (4.012s including waiting). Image size: 505246690 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 08:33:28.562374041 +0000 UTC m=+15.812719546,LastTimestamp:2026-03-20 08:33:28.562374041 +0000 UTC m=+15.812719546,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 20 08:33:28.584450 master-0 kubenswrapper[4041]: E0320 08:33:28.584319 4041 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189e7f9b625cd8d2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65\" in 3.741s (3.741s including waiting). 
Image size: 514984269 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 08:33:28.57746453 +0000 UTC m=+15.827810035,LastTimestamp:2026-03-20 08:33:28.57746453 +0000 UTC m=+15.827810035,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 20 08:33:28.776468 master-0 kubenswrapper[4041]: E0320 08:33:28.776320 4041 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189e7f9b6dd2db12 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container: kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 08:33:28.76974773 +0000 UTC m=+16.020093245,LastTimestamp:2026-03-20 08:33:28.76974773 +0000 UTC m=+16.020093245,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 20 08:33:28.784292 master-0 kubenswrapper[4041]: E0320 08:33:28.784071 4041 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189e7f9b6def9e3a kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container: cluster-policy-controller,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 08:33:28.771632698 +0000 UTC m=+16.021978213,LastTimestamp:2026-03-20 08:33:28.771632698 +0000 UTC m=+16.021978213,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 20 08:33:28.792847 master-0 kubenswrapper[4041]: E0320 08:33:28.792735 4041 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189e7f9b6ea94243 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:49fac1b46a11e49501805e891baae4a9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 08:33:28.783798851 +0000 UTC m=+16.034144366,LastTimestamp:2026-03-20 08:33:28.783798851 +0000 UTC m=+16.034144366,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 20 08:33:28.802051 master-0 kubenswrapper[4041]: E0320 08:33:28.801905 4041 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the 
namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189e7f9b6eb84d39 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:46f265536aba6292ead501bc9b49f327,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 08:33:28.784784697 +0000 UTC m=+16.035130212,LastTimestamp:2026-03-20 08:33:28.784784697 +0000 UTC m=+16.035130212,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 20 08:33:29.373171 master-0 kubenswrapper[4041]: I0320 08:33:29.373114 4041 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 20 08:33:29.623492 master-0 kubenswrapper[4041]: I0320 08:33:29.623347 4041 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"cfd277b4fa13917f4d0cc04f7d6bdc6ea5d4df628ab0e4b86103cf26da62a23f"} Mar 20 08:33:29.623492 master-0 kubenswrapper[4041]: I0320 08:33:29.623382 4041 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 08:33:29.625349 master-0 kubenswrapper[4041]: I0320 08:33:29.625301 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 20 08:33:29.625349 master-0 kubenswrapper[4041]: I0320 08:33:29.625347 4041 kubelet_node_status.go:724] "Recording event message for node" 
node="master-0" event="NodeHasNoDiskPressure" Mar 20 08:33:29.625443 master-0 kubenswrapper[4041]: I0320 08:33:29.625360 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 20 08:33:29.628085 master-0 kubenswrapper[4041]: I0320 08:33:29.628052 4041 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"49fac1b46a11e49501805e891baae4a9","Type":"ContainerStarted","Data":"21f174bcce18cc2a151013015a2ad5940b94d6576aae52b77f0f5fec8f803898"} Mar 20 08:33:29.628208 master-0 kubenswrapper[4041]: I0320 08:33:29.628175 4041 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 08:33:29.629245 master-0 kubenswrapper[4041]: I0320 08:33:29.629216 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 20 08:33:29.629333 master-0 kubenswrapper[4041]: I0320 08:33:29.629257 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 20 08:33:29.629333 master-0 kubenswrapper[4041]: I0320 08:33:29.629286 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 20 08:33:30.078780 master-0 kubenswrapper[4041]: I0320 08:33:30.078155 4041 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 20 08:33:30.375411 master-0 kubenswrapper[4041]: I0320 08:33:30.372478 4041 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 20 08:33:30.630695 master-0 kubenswrapper[4041]: I0320 08:33:30.630619 4041 kubelet_node_status.go:401] "Setting node annotation to enable volume 
controller attach/detach" Mar 20 08:33:30.630982 master-0 kubenswrapper[4041]: I0320 08:33:30.630628 4041 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 08:33:30.631777 master-0 kubenswrapper[4041]: I0320 08:33:30.631710 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 20 08:33:30.631901 master-0 kubenswrapper[4041]: I0320 08:33:30.631806 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 20 08:33:30.631901 master-0 kubenswrapper[4041]: I0320 08:33:30.631712 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 20 08:33:30.631901 master-0 kubenswrapper[4041]: I0320 08:33:30.631833 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 20 08:33:30.631901 master-0 kubenswrapper[4041]: I0320 08:33:30.631876 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 20 08:33:30.632118 master-0 kubenswrapper[4041]: I0320 08:33:30.631906 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 20 08:33:31.371164 master-0 kubenswrapper[4041]: I0320 08:33:31.371076 4041 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 20 08:33:31.632828 master-0 kubenswrapper[4041]: I0320 08:33:31.632709 4041 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 08:33:31.633473 master-0 kubenswrapper[4041]: I0320 08:33:31.633433 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" 
event="NodeHasSufficientMemory" Mar 20 08:33:31.633521 master-0 kubenswrapper[4041]: I0320 08:33:31.633485 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 20 08:33:31.633521 master-0 kubenswrapper[4041]: I0320 08:33:31.633498 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 20 08:33:32.372527 master-0 kubenswrapper[4041]: I0320 08:33:32.372408 4041 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 20 08:33:32.535671 master-0 kubenswrapper[4041]: I0320 08:33:32.535549 4041 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 08:33:32.537062 master-0 kubenswrapper[4041]: I0320 08:33:32.537007 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 20 08:33:32.537142 master-0 kubenswrapper[4041]: I0320 08:33:32.537070 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 20 08:33:32.537142 master-0 kubenswrapper[4041]: I0320 08:33:32.537090 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 20 08:33:32.537706 master-0 kubenswrapper[4041]: I0320 08:33:32.537667 4041 scope.go:117] "RemoveContainer" containerID="a6df4568a5f97beeb663f1ff1945ac9d3c4c5e1ff252f4e472ff1399d4059194" Mar 20 08:33:32.546175 master-0 kubenswrapper[4041]: E0320 08:33:32.545991 4041 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189e7f98e05c1ced\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace 
\"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189e7f98e05c1ced openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 08:33:17.806443757 +0000 UTC m=+5.056789262,LastTimestamp:2026-03-20 08:33:32.539936461 +0000 UTC m=+19.790282016,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 20 08:33:32.554532 master-0 kubenswrapper[4041]: I0320 08:33:32.554473 4041 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 20 08:33:32.554705 master-0 kubenswrapper[4041]: I0320 08:33:32.554663 4041 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 08:33:32.555696 master-0 kubenswrapper[4041]: I0320 08:33:32.555638 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 20 08:33:32.555696 master-0 kubenswrapper[4041]: I0320 08:33:32.555692 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 20 08:33:32.555915 master-0 kubenswrapper[4041]: I0320 08:33:32.555713 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 20 08:33:32.562671 master-0 kubenswrapper[4041]: I0320 
08:33:32.562626 4041 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 20 08:33:32.635446 master-0 kubenswrapper[4041]: I0320 08:33:32.635250 4041 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 08:33:32.635446 master-0 kubenswrapper[4041]: I0320 08:33:32.635429 4041 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 20 08:33:32.636412 master-0 kubenswrapper[4041]: I0320 08:33:32.636104 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 20 08:33:32.636412 master-0 kubenswrapper[4041]: I0320 08:33:32.636135 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 20 08:33:32.636412 master-0 kubenswrapper[4041]: I0320 08:33:32.636147 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 20 08:33:32.639207 master-0 kubenswrapper[4041]: I0320 08:33:32.639154 4041 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 20 08:33:32.712439 master-0 kubenswrapper[4041]: W0320 08:33:32.712378 4041 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Mar 20 08:33:32.712606 master-0 kubenswrapper[4041]: E0320 08:33:32.712486 4041 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group 
\"node.k8s.io\" at the cluster scope" logger="UnhandledError" Mar 20 08:33:32.796972 master-0 kubenswrapper[4041]: E0320 08:33:32.796814 4041 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189e7f98ea0b8182\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189e7f98ea0b8182 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 08:33:17.96893325 +0000 UTC m=+5.219278755,LastTimestamp:2026-03-20 08:33:32.788912721 +0000 UTC m=+20.039258256,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 20 08:33:32.895467 master-0 kubenswrapper[4041]: E0320 08:33:32.895295 4041 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189e7f98eb837fbe\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189e7f98eb837fbe openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container 
kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 08:33:17.993574334 +0000 UTC m=+5.243919839,LastTimestamp:2026-03-20 08:33:32.885472636 +0000 UTC m=+20.135818181,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 20 08:33:33.156573 master-0 kubenswrapper[4041]: W0320 08:33:33.156442 4041 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Mar 20 08:33:33.156867 master-0 kubenswrapper[4041]: E0320 08:33:33.156831 4041 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Mar 20 08:33:33.372884 master-0 kubenswrapper[4041]: I0320 08:33:33.372816 4041 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 20 08:33:33.438073 master-0 kubenswrapper[4041]: W0320 08:33:33.437938 4041 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Mar 20 08:33:33.438468 master-0 kubenswrapper[4041]: E0320 08:33:33.438424 4041 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group 
\"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Mar 20 08:33:33.516142 master-0 kubenswrapper[4041]: E0320 08:33:33.516061 4041 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Mar 20 08:33:33.621599 master-0 kubenswrapper[4041]: I0320 08:33:33.621514 4041 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 08:33:33.623401 master-0 kubenswrapper[4041]: I0320 08:33:33.623344 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 20 08:33:33.623674 master-0 kubenswrapper[4041]: I0320 08:33:33.623642 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 20 08:33:33.623878 master-0 kubenswrapper[4041]: I0320 08:33:33.623847 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 20 08:33:33.624136 master-0 kubenswrapper[4041]: I0320 08:33:33.624106 4041 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 20 08:33:33.629518 master-0 kubenswrapper[4041]: E0320 08:33:33.629424 4041 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 20 08:33:33.629924 master-0 kubenswrapper[4041]: E0320 08:33:33.629869 4041 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0" Mar 20 08:33:33.640030 master-0 kubenswrapper[4041]: I0320 08:33:33.639994 4041 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/2.log"
Mar 20 08:33:33.641419 master-0 kubenswrapper[4041]: I0320 08:33:33.641390 4041 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/1.log"
Mar 20 08:33:33.642490 master-0 kubenswrapper[4041]: I0320 08:33:33.642451 4041 generic.go:334] "Generic (PLEG): container finished" podID="1249822f86f23526277d165c0d5d3c19" containerID="309e4777b97bbe0d7fb41e63077d3bc7d068d36eee7b9e7931a0c0261bdb0bbf" exitCode=1
Mar 20 08:33:33.642754 master-0 kubenswrapper[4041]: I0320 08:33:33.642456 4041 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerDied","Data":"309e4777b97bbe0d7fb41e63077d3bc7d068d36eee7b9e7931a0c0261bdb0bbf"}
Mar 20 08:33:33.642890 master-0 kubenswrapper[4041]: I0320 08:33:33.642799 4041 scope.go:117] "RemoveContainer" containerID="a6df4568a5f97beeb663f1ff1945ac9d3c4c5e1ff252f4e472ff1399d4059194"
Mar 20 08:33:33.642981 master-0 kubenswrapper[4041]: I0320 08:33:33.642952 4041 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 08:33:33.643259 master-0 kubenswrapper[4041]: I0320 08:33:33.643233 4041 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 08:33:33.644646 master-0 kubenswrapper[4041]: I0320 08:33:33.644609 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 20 08:33:33.644769 master-0 kubenswrapper[4041]: I0320 08:33:33.644654 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 20 08:33:33.644769 master-0 kubenswrapper[4041]: I0320 08:33:33.644672 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 20 08:33:33.644769 master-0 kubenswrapper[4041]: I0320 08:33:33.644739 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 20 08:33:33.644945 master-0 kubenswrapper[4041]: I0320 08:33:33.644778 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 20 08:33:33.644945 master-0 kubenswrapper[4041]: I0320 08:33:33.644796 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 20 08:33:33.645159 master-0 kubenswrapper[4041]: I0320 08:33:33.645121 4041 scope.go:117] "RemoveContainer" containerID="309e4777b97bbe0d7fb41e63077d3bc7d068d36eee7b9e7931a0c0261bdb0bbf"
Mar 20 08:33:33.646309 master-0 kubenswrapper[4041]: E0320 08:33:33.645749 4041 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(1249822f86f23526277d165c0d5d3c19)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="1249822f86f23526277d165c0d5d3c19"
Mar 20 08:33:33.649780 master-0 kubenswrapper[4041]: E0320 08:33:33.649600 4041 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189e7f99499507da\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189e7f99499507da openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:1249822f86f23526277d165c0d5d3c19,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(1249822f86f23526277d165c0d5d3c19),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 08:33:19.571781594 +0000 UTC m=+6.822127099,LastTimestamp:2026-03-20 08:33:33.645702578 +0000 UTC m=+20.896048123,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 20 08:33:34.373862 master-0 kubenswrapper[4041]: I0320 08:33:34.373745 4041 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 20 08:33:34.647602 master-0 kubenswrapper[4041]: I0320 08:33:34.647464 4041 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/2.log"
Mar 20 08:33:34.648238 master-0 kubenswrapper[4041]: I0320 08:33:34.648199 4041 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 08:33:34.649397 master-0 kubenswrapper[4041]: I0320 08:33:34.649341 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 20 08:33:34.649472 master-0 kubenswrapper[4041]: I0320 08:33:34.649419 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 20 08:33:34.649472 master-0 kubenswrapper[4041]: I0320 08:33:34.649443 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 20 08:33:35.373906 master-0 kubenswrapper[4041]: I0320 08:33:35.373791 4041 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 20 08:33:35.913829 master-0 kubenswrapper[4041]: I0320 08:33:35.913740 4041 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 20 08:33:35.914630 master-0 kubenswrapper[4041]: I0320 08:33:35.913931 4041 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 08:33:35.915528 master-0 kubenswrapper[4041]: I0320 08:33:35.915469 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 20 08:33:35.915672 master-0 kubenswrapper[4041]: I0320 08:33:35.915540 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 20 08:33:35.915672 master-0 kubenswrapper[4041]: I0320 08:33:35.915565 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 20 08:33:35.921781 master-0 kubenswrapper[4041]: I0320 08:33:35.921720 4041 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 20 08:33:35.964960 master-0 kubenswrapper[4041]: I0320 08:33:35.964841 4041 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 20 08:33:35.971447 master-0 kubenswrapper[4041]: I0320 08:33:35.971385 4041 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 20 08:33:36.373540 master-0 kubenswrapper[4041]: I0320 08:33:36.373481 4041 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 20 08:33:36.653375 master-0 kubenswrapper[4041]: I0320 08:33:36.653180 4041 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 08:33:36.653616 master-0 kubenswrapper[4041]: I0320 08:33:36.653382 4041 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 20 08:33:36.654873 master-0 kubenswrapper[4041]: I0320 08:33:36.654810 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 20 08:33:36.655045 master-0 kubenswrapper[4041]: I0320 08:33:36.654923 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 20 08:33:36.655045 master-0 kubenswrapper[4041]: I0320 08:33:36.654971 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 20 08:33:36.658555 master-0 kubenswrapper[4041]: I0320 08:33:36.658513 4041 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 20 08:33:37.373296 master-0 kubenswrapper[4041]: I0320 08:33:37.373221 4041 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 20 08:33:37.655568 master-0 kubenswrapper[4041]: I0320 08:33:37.655473 4041 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 08:33:37.656559 master-0 kubenswrapper[4041]: I0320 08:33:37.656495 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 20 08:33:37.656559 master-0 kubenswrapper[4041]: I0320 08:33:37.656559 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 20 08:33:37.656737 master-0 kubenswrapper[4041]: I0320 08:33:37.656584 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 20 08:33:38.090775 master-0 kubenswrapper[4041]: I0320 08:33:38.090546 4041 csr.go:261] certificate signing request csr-rtlzs is approved, waiting to be issued
Mar 20 08:33:38.372818 master-0 kubenswrapper[4041]: I0320 08:33:38.372711 4041 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 20 08:33:38.658166 master-0 kubenswrapper[4041]: I0320 08:33:38.658003 4041 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 08:33:38.658942 master-0 kubenswrapper[4041]: I0320 08:33:38.658904 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 20 08:33:38.659016 master-0 kubenswrapper[4041]: I0320 08:33:38.658966 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 20 08:33:38.659016 master-0 kubenswrapper[4041]: I0320 08:33:38.658991 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 20 08:33:38.663480 master-0 kubenswrapper[4041]: I0320 08:33:38.663412 4041 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 20 08:33:39.373627 master-0 kubenswrapper[4041]: I0320 08:33:39.373538 4041 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 20 08:33:39.660989 master-0 kubenswrapper[4041]: I0320 08:33:39.660804 4041 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 08:33:39.662249 master-0 kubenswrapper[4041]: I0320 08:33:39.662194 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 20 08:33:39.662397 master-0 kubenswrapper[4041]: I0320 08:33:39.662256 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 20 08:33:39.662397 master-0 kubenswrapper[4041]: I0320 08:33:39.662315 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 20 08:33:40.372597 master-0 kubenswrapper[4041]: I0320 08:33:40.372549 4041 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 20 08:33:40.630319 master-0 kubenswrapper[4041]: I0320 08:33:40.630147 4041 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 08:33:40.631692 master-0 kubenswrapper[4041]: I0320 08:33:40.631644 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 20 08:33:40.631929 master-0 kubenswrapper[4041]: I0320 08:33:40.631900 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 20 08:33:40.632100 master-0 kubenswrapper[4041]: I0320 08:33:40.632074 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 20 08:33:40.632372 master-0 kubenswrapper[4041]: I0320 08:33:40.632344 4041 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 20 08:33:40.638234 master-0 kubenswrapper[4041]: E0320 08:33:40.638194 4041 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0"
Mar 20 08:33:40.638491 master-0 kubenswrapper[4041]: E0320 08:33:40.638370 4041 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Mar 20 08:33:41.367328 master-0 kubenswrapper[4041]: I0320 08:33:41.367205 4041 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 20 08:33:42.372756 master-0 kubenswrapper[4041]: I0320 08:33:42.372649 4041 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 20 08:33:43.373454 master-0 kubenswrapper[4041]: I0320 08:33:43.373334 4041 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 20 08:33:43.517097 master-0 kubenswrapper[4041]: E0320 08:33:43.516985 4041 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Mar 20 08:33:44.372729 master-0 kubenswrapper[4041]: I0320 08:33:44.372632 4041 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 20 08:33:45.373851 master-0 kubenswrapper[4041]: I0320 08:33:45.373730 4041 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 20 08:33:46.376005 master-0 kubenswrapper[4041]: I0320 08:33:46.375925 4041 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 20 08:33:46.433707 master-0 kubenswrapper[4041]: I0320 08:33:46.433641 4041 csr.go:257] certificate signing request csr-rtlzs is issued
Mar 20 08:33:47.251427 master-0 kubenswrapper[4041]: I0320 08:33:47.251358 4041 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Mar 20 08:33:47.374306 master-0 kubenswrapper[4041]: I0320 08:33:47.374217 4041 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 20 08:33:47.389541 master-0 kubenswrapper[4041]: I0320 08:33:47.389493 4041 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 20 08:33:47.436220 master-0 kubenswrapper[4041]: I0320 08:33:47.436139 4041 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-03-21 08:25:28 +0000 UTC, rotation deadline is 2026-03-21 03:25:25.919464223 +0000 UTC
Mar 20 08:33:47.436220 master-0 kubenswrapper[4041]: I0320 08:33:47.436194 4041 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 18h51m38.483275169s for next certificate rotation
Mar 20 08:33:47.445766 master-0 kubenswrapper[4041]: I0320 08:33:47.445720 4041 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 20 08:33:47.639577 master-0 kubenswrapper[4041]: I0320 08:33:47.639508 4041 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 08:33:47.641391 master-0 kubenswrapper[4041]: I0320 08:33:47.641351 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 20 08:33:47.641484 master-0 kubenswrapper[4041]: I0320 08:33:47.641416 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 20 08:33:47.641484 master-0 kubenswrapper[4041]: I0320 08:33:47.641447 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 20 08:33:47.641564 master-0 kubenswrapper[4041]: I0320 08:33:47.641529 4041 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 20 08:33:47.646713 master-0 kubenswrapper[4041]: E0320 08:33:47.646656 4041 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"master-0\" not found" node="master-0"
Mar 20 08:33:47.654215 master-0 kubenswrapper[4041]: I0320 08:33:47.654162 4041 kubelet_node_status.go:79] "Successfully registered node" node="master-0"
Mar 20 08:33:47.654333 master-0 kubenswrapper[4041]: E0320 08:33:47.654229 4041 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": node \"master-0\" not found"
Mar 20 08:33:47.672756 master-0 kubenswrapper[4041]: E0320 08:33:47.672670 4041 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 20 08:33:47.773001 master-0 kubenswrapper[4041]: E0320 08:33:47.772899 4041 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 20 08:33:47.799814 master-0 kubenswrapper[4041]: I0320 08:33:47.799749 4041 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Mar 20 08:33:47.873537 master-0 kubenswrapper[4041]: E0320 08:33:47.873467 4041 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 20 08:33:47.974342 master-0 kubenswrapper[4041]: E0320 08:33:47.974124 4041 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 20 08:33:48.074988 master-0 kubenswrapper[4041]: E0320 08:33:48.074896 4041 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 20 08:33:48.176109 master-0 kubenswrapper[4041]: E0320 08:33:48.176025 4041 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 20 08:33:48.276289 master-0 kubenswrapper[4041]: E0320 08:33:48.276208 4041 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 20 08:33:48.377186 master-0 kubenswrapper[4041]: E0320 08:33:48.376798 4041 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 20 08:33:48.394701 master-0 kubenswrapper[4041]: I0320 08:33:48.394612 4041 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates
Mar 20 08:33:48.405811 master-0 kubenswrapper[4041]: I0320 08:33:48.405735 4041 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Mar 20 08:33:48.477065 master-0 kubenswrapper[4041]: E0320 08:33:48.476979 4041 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 20 08:33:48.537755 master-0 kubenswrapper[4041]: I0320 08:33:48.536211 4041 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 08:33:48.537755 master-0 kubenswrapper[4041]: I0320 08:33:48.537506 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 20 08:33:48.537755 master-0 kubenswrapper[4041]: I0320 08:33:48.537533 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 20 08:33:48.537755 master-0 kubenswrapper[4041]: I0320 08:33:48.537546 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 20 08:33:48.537998 master-0 kubenswrapper[4041]: I0320 08:33:48.537895 4041 scope.go:117] "RemoveContainer" containerID="309e4777b97bbe0d7fb41e63077d3bc7d068d36eee7b9e7931a0c0261bdb0bbf"
Mar 20 08:33:48.538084 master-0 kubenswrapper[4041]: E0320 08:33:48.538055 4041 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(1249822f86f23526277d165c0d5d3c19)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="1249822f86f23526277d165c0d5d3c19"
Mar 20 08:33:48.577527 master-0 kubenswrapper[4041]: E0320 08:33:48.577431 4041 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 20 08:33:48.643276 master-0 kubenswrapper[4041]: I0320 08:33:48.643162 4041 csr.go:261] certificate signing request csr-kfwvn is approved, waiting to be issued
Mar 20 08:33:48.651650 master-0 kubenswrapper[4041]: I0320 08:33:48.651606 4041 csr.go:257] certificate signing request csr-kfwvn is issued
Mar 20 08:33:48.678399 master-0 kubenswrapper[4041]: E0320 08:33:48.678331 4041 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 20 08:33:48.778479 master-0 kubenswrapper[4041]: E0320 08:33:48.778418 4041 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 20 08:33:48.879678 master-0 kubenswrapper[4041]: E0320 08:33:48.879596 4041 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 20 08:33:48.980480 master-0 kubenswrapper[4041]: E0320 08:33:48.980374 4041 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 20 08:33:49.081215 master-0 kubenswrapper[4041]: E0320 08:33:49.081139 4041 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 20 08:33:49.182316 master-0 kubenswrapper[4041]: E0320 08:33:49.182103 4041 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 20 08:33:49.282995 master-0 kubenswrapper[4041]: E0320 08:33:49.282922 4041 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 20 08:33:49.383589 master-0 kubenswrapper[4041]: E0320 08:33:49.383525 4041 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 20 08:33:49.484743 master-0 kubenswrapper[4041]: E0320 08:33:49.484558 4041 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 20 08:33:49.585471 master-0 kubenswrapper[4041]: E0320 08:33:49.585355 4041 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 20 08:33:49.653527 master-0 kubenswrapper[4041]: I0320 08:33:49.653388 4041 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-03-21 08:25:28 +0000 UTC, rotation deadline is 2026-03-21 01:48:33.845838614 +0000 UTC
Mar 20 08:33:49.653527 master-0 kubenswrapper[4041]: I0320 08:33:49.653463 4041 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 17h14m44.192381597s for next certificate rotation
Mar 20 08:33:49.686139 master-0 kubenswrapper[4041]: E0320 08:33:49.686018 4041 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 20 08:33:49.786612 master-0 kubenswrapper[4041]: E0320 08:33:49.786473 4041 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 20 08:33:49.887700 master-0 kubenswrapper[4041]: E0320 08:33:49.887630 4041 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 20 08:33:49.988672 master-0 kubenswrapper[4041]: E0320 08:33:49.988603 4041 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 20 08:33:50.089746 master-0 kubenswrapper[4041]: E0320 08:33:50.089594 4041 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 20 08:33:50.190656 master-0 kubenswrapper[4041]: E0320 08:33:50.190560 4041 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 20 08:33:50.291590 master-0 kubenswrapper[4041]: E0320 08:33:50.291508 4041 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 20 08:33:50.391715 master-0 kubenswrapper[4041]: E0320 08:33:50.391624 4041 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 20 08:33:50.492861 master-0 kubenswrapper[4041]: E0320 08:33:50.492733 4041 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 20 08:33:50.592944 master-0 kubenswrapper[4041]: E0320 08:33:50.592875 4041 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 20 08:33:50.654602 master-0 kubenswrapper[4041]: I0320 08:33:50.654435 4041 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-03-21 08:25:28 +0000 UTC, rotation deadline is 2026-03-21 03:39:56.454903021 +0000 UTC
Mar 20 08:33:50.654602 master-0 kubenswrapper[4041]: I0320 08:33:50.654516 4041 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 19h6m5.800390273s for next certificate rotation
Mar 20 08:33:50.693742 master-0 kubenswrapper[4041]: E0320 08:33:50.693653 4041 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 20 08:33:50.793981 master-0 kubenswrapper[4041]: E0320 08:33:50.793887 4041 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 20 08:33:50.894191 master-0 kubenswrapper[4041]: E0320 08:33:50.894113 4041 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 20 08:33:50.994638 master-0 kubenswrapper[4041]: E0320 08:33:50.994485 4041 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 20 08:33:51.095455 master-0 kubenswrapper[4041]: E0320 08:33:51.095379 4041 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 20 08:33:51.196212 master-0 kubenswrapper[4041]: E0320 08:33:51.196132 4041 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 20 08:33:51.296707 master-0 kubenswrapper[4041]: E0320 08:33:51.296556 4041 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 20 08:33:51.396979 master-0 kubenswrapper[4041]: E0320 08:33:51.396929 4041 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 20 08:33:51.498008 master-0 kubenswrapper[4041]: E0320 08:33:51.497940 4041 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 20 08:33:51.599158 master-0 kubenswrapper[4041]: E0320 08:33:51.599101 4041 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 20 08:33:51.700080 master-0 kubenswrapper[4041]: E0320 08:33:51.699967 4041 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 20 08:33:51.800447 master-0 kubenswrapper[4041]: E0320 08:33:51.800371 4041 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 20 08:33:51.808100 master-0 kubenswrapper[4041]: I0320 08:33:51.808025 4041 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Mar 20 08:33:52.381227 master-0 kubenswrapper[4041]: I0320 08:33:52.381103 4041 apiserver.go:52] "Watching apiserver"
Mar 20 08:33:52.386403 master-0 kubenswrapper[4041]: I0320 08:33:52.386335 4041 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Mar 20 08:33:52.386704 master-0 kubenswrapper[4041]: I0320 08:33:52.386657 4041 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["assisted-installer/assisted-installer-controller-j6hxl","openshift-cluster-version/cluster-version-operator-56d8475767-jtqd4","openshift-network-operator/network-operator-7bd846bfc4-x4w25"]
Mar 20 08:33:52.387212 master-0 kubenswrapper[4041]: I0320 08:33:52.387152 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bd846bfc4-x4w25"
Mar 20 08:33:52.387366 master-0 kubenswrapper[4041]: I0320 08:33:52.387308 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-56d8475767-jtqd4"
Mar 20 08:33:52.387613 master-0 kubenswrapper[4041]: I0320 08:33:52.387474 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-j6hxl"
Mar 20 08:33:52.390202 master-0 kubenswrapper[4041]: I0320 08:33:52.390167 4041 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Mar 20 08:33:52.390999 master-0 kubenswrapper[4041]: I0320 08:33:52.390967 4041 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Mar 20 08:33:52.391175 master-0 kubenswrapper[4041]: I0320 08:33:52.391125 4041 reflector.go:368] Caches populated for *v1.Secret from object-"assisted-installer"/"assisted-installer-controller-secret"
Mar 20 08:33:52.391175 master-0 kubenswrapper[4041]: I0320 08:33:52.391144 4041 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Mar 20 08:33:52.391392 master-0 kubenswrapper[4041]: I0320 08:33:52.391010 4041 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Mar 20 08:33:52.391481 master-0 kubenswrapper[4041]: I0320 08:33:52.391020 4041 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Mar 20 08:33:52.391481 master-0 kubenswrapper[4041]: I0320 08:33:52.391391 4041 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Mar 20 08:33:52.392022 master-0 kubenswrapper[4041]: I0320 08:33:52.391956 4041 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"assisted-installer-controller-config"
Mar 20 08:33:52.393356 master-0 kubenswrapper[4041]: I0320 08:33:52.393301 4041 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"kube-root-ca.crt"
Mar 20 08:33:52.394935 master-0 kubenswrapper[4041]: I0320 08:33:52.394893 4041 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"openshift-service-ca.crt"
Mar 20 08:33:52.473104 master-0 kubenswrapper[4041]: I0320 08:33:52.473046 4041 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Mar 20 08:33:52.540543 master-0 kubenswrapper[4041]: I0320 08:33:52.540456 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3776fdb6-25a1-4e3d-bdd1-437c69af3a55-service-ca\") pod \"cluster-version-operator-56d8475767-jtqd4\" (UID: \"3776fdb6-25a1-4e3d-bdd1-437c69af3a55\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-jtqd4"
Mar 20 08:33:52.540543 master-0 kubenswrapper[4041]: I0320 08:33:52.540505 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/2a25b643-c08d-462f-80f4-8a4feb1e26e8-sno-bootstrap-files\") pod \"assisted-installer-controller-j6hxl\" (UID: \"2a25b643-c08d-462f-80f4-8a4feb1e26e8\") " pod="assisted-installer/assisted-installer-controller-j6hxl"
Mar 20 08:33:52.540543 master-0 kubenswrapper[4041]: I0320 08:33:52.540533 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3776fdb6-25a1-4e3d-bdd1-437c69af3a55-etc-cvo-updatepayloads\") pod \"cluster-version-operator-56d8475767-jtqd4\" (UID: \"3776fdb6-25a1-4e3d-bdd1-437c69af3a55\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-jtqd4"
Mar 20 08:33:52.540543 master-0 kubenswrapper[4041]: I0320 08:33:52.540563 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9817d1ec-3d7c-49fb-8e41-26f5727ef9e8-metrics-tls\") pod \"network-operator-7bd846bfc4-x4w25\" (UID: \"9817d1ec-3d7c-49fb-8e41-26f5727ef9e8\") " pod="openshift-network-operator/network-operator-7bd846bfc4-x4w25"
Mar 20 08:33:52.541694 master-0 kubenswrapper[4041]: I0320 08:33:52.540588 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/2a25b643-c08d-462f-80f4-8a4feb1e26e8-host-resolv-conf\") pod \"assisted-installer-controller-j6hxl\" (UID: \"2a25b643-c08d-462f-80f4-8a4feb1e26e8\") " pod="assisted-installer/assisted-installer-controller-j6hxl"
Mar 20 08:33:52.541694 master-0 kubenswrapper[4041]: I0320 08:33:52.540613 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3776fdb6-25a1-4e3d-bdd1-437c69af3a55-etc-ssl-certs\") pod \"cluster-version-operator-56d8475767-jtqd4\" (UID: \"3776fdb6-25a1-4e3d-bdd1-437c69af3a55\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-jtqd4"
Mar 20 08:33:52.541694 master-0 kubenswrapper[4041]: I0320 08:33:52.540727 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thdwl\" (UniqueName: \"kubernetes.io/projected/2a25b643-c08d-462f-80f4-8a4feb1e26e8-kube-api-access-thdwl\") pod \"assisted-installer-controller-j6hxl\" (UID: \"2a25b643-c08d-462f-80f4-8a4feb1e26e8\") " pod="assisted-installer/assisted-installer-controller-j6hxl"
Mar 20 08:33:52.541694 master-0 kubenswrapper[4041]: I0320 08:33:52.540824 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/9817d1ec-3d7c-49fb-8e41-26f5727ef9e8-host-etc-kube\") pod \"network-operator-7bd846bfc4-x4w25\" (UID: \"9817d1ec-3d7c-49fb-8e41-26f5727ef9e8\") " pod="openshift-network-operator/network-operator-7bd846bfc4-x4w25"
Mar 20 08:33:52.541694 master-0 kubenswrapper[4041]: I0320 08:33:52.540913 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/2a25b643-c08d-462f-80f4-8a4feb1e26e8-host-ca-bundle\") pod \"assisted-installer-controller-j6hxl\" (UID: \"2a25b643-c08d-462f-80f4-8a4feb1e26e8\") " pod="assisted-installer/assisted-installer-controller-j6hxl"
Mar 20 08:33:52.541694 master-0 kubenswrapper[4041]: I0320 08:33:52.540959 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3776fdb6-25a1-4e3d-bdd1-437c69af3a55-serving-cert\") pod \"cluster-version-operator-56d8475767-jtqd4\" (UID: \"3776fdb6-25a1-4e3d-bdd1-437c69af3a55\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-jtqd4"
Mar 20 08:33:52.541694 master-0 kubenswrapper[4041]: I0320 08:33:52.541004 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swxwt\" (UniqueName: \"kubernetes.io/projected/9817d1ec-3d7c-49fb-8e41-26f5727ef9e8-kube-api-access-swxwt\") pod \"network-operator-7bd846bfc4-x4w25\" (UID: \"9817d1ec-3d7c-49fb-8e41-26f5727ef9e8\") " pod="openshift-network-operator/network-operator-7bd846bfc4-x4w25"
Mar 20 08:33:52.541694 master-0 kubenswrapper[4041]: I0320 08:33:52.541092 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/2a25b643-c08d-462f-80f4-8a4feb1e26e8-host-var-run-resolv-conf\") pod \"assisted-installer-controller-j6hxl\" (UID: \"2a25b643-c08d-462f-80f4-8a4feb1e26e8\") " pod="assisted-installer/assisted-installer-controller-j6hxl"
Mar 20 08:33:52.541694 master-0 kubenswrapper[4041]: I0320 08:33:52.541181 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3776fdb6-25a1-4e3d-bdd1-437c69af3a55-kube-api-access\") pod \"cluster-version-operator-56d8475767-jtqd4\" (UID: \"3776fdb6-25a1-4e3d-bdd1-437c69af3a55\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-jtqd4"
Mar 20 08:33:52.598530 master-0 kubenswrapper[4041]: I0320 08:33:52.598407 4041 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Mar 20 08:33:52.642596 master-0 kubenswrapper[4041]: I0320 08:33:52.642393 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/2a25b643-c08d-462f-80f4-8a4feb1e26e8-sno-bootstrap-files\") pod \"assisted-installer-controller-j6hxl\" (UID: \"2a25b643-c08d-462f-80f4-8a4feb1e26e8\") " pod="assisted-installer/assisted-installer-controller-j6hxl"
Mar 20 08:33:52.642596 master-0 kubenswrapper[4041]: I0320 08:33:52.642512 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3776fdb6-25a1-4e3d-bdd1-437c69af3a55-etc-cvo-updatepayloads\") pod \"cluster-version-operator-56d8475767-jtqd4\" (UID: \"3776fdb6-25a1-4e3d-bdd1-437c69af3a55\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-jtqd4"
Mar 20 08:33:52.642596 master-0 kubenswrapper[4041]: I0320 08:33:52.642575 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName:
\"kubernetes.io/secret/9817d1ec-3d7c-49fb-8e41-26f5727ef9e8-metrics-tls\") pod \"network-operator-7bd846bfc4-x4w25\" (UID: \"9817d1ec-3d7c-49fb-8e41-26f5727ef9e8\") " pod="openshift-network-operator/network-operator-7bd846bfc4-x4w25" Mar 20 08:33:52.642926 master-0 kubenswrapper[4041]: I0320 08:33:52.642610 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/2a25b643-c08d-462f-80f4-8a4feb1e26e8-sno-bootstrap-files\") pod \"assisted-installer-controller-j6hxl\" (UID: \"2a25b643-c08d-462f-80f4-8a4feb1e26e8\") " pod="assisted-installer/assisted-installer-controller-j6hxl" Mar 20 08:33:52.642926 master-0 kubenswrapper[4041]: I0320 08:33:52.642634 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/2a25b643-c08d-462f-80f4-8a4feb1e26e8-host-resolv-conf\") pod \"assisted-installer-controller-j6hxl\" (UID: \"2a25b643-c08d-462f-80f4-8a4feb1e26e8\") " pod="assisted-installer/assisted-installer-controller-j6hxl" Mar 20 08:33:52.642926 master-0 kubenswrapper[4041]: I0320 08:33:52.642723 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3776fdb6-25a1-4e3d-bdd1-437c69af3a55-etc-cvo-updatepayloads\") pod \"cluster-version-operator-56d8475767-jtqd4\" (UID: \"3776fdb6-25a1-4e3d-bdd1-437c69af3a55\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-jtqd4" Mar 20 08:33:52.642926 master-0 kubenswrapper[4041]: I0320 08:33:52.642757 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/2a25b643-c08d-462f-80f4-8a4feb1e26e8-host-resolv-conf\") pod \"assisted-installer-controller-j6hxl\" (UID: \"2a25b643-c08d-462f-80f4-8a4feb1e26e8\") " pod="assisted-installer/assisted-installer-controller-j6hxl" Mar 20 08:33:52.643257 master-0 
kubenswrapper[4041]: I0320 08:33:52.642937 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3776fdb6-25a1-4e3d-bdd1-437c69af3a55-etc-ssl-certs\") pod \"cluster-version-operator-56d8475767-jtqd4\" (UID: \"3776fdb6-25a1-4e3d-bdd1-437c69af3a55\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-jtqd4" Mar 20 08:33:52.643257 master-0 kubenswrapper[4041]: I0320 08:33:52.642977 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thdwl\" (UniqueName: \"kubernetes.io/projected/2a25b643-c08d-462f-80f4-8a4feb1e26e8-kube-api-access-thdwl\") pod \"assisted-installer-controller-j6hxl\" (UID: \"2a25b643-c08d-462f-80f4-8a4feb1e26e8\") " pod="assisted-installer/assisted-installer-controller-j6hxl" Mar 20 08:33:52.643257 master-0 kubenswrapper[4041]: I0320 08:33:52.643014 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/9817d1ec-3d7c-49fb-8e41-26f5727ef9e8-host-etc-kube\") pod \"network-operator-7bd846bfc4-x4w25\" (UID: \"9817d1ec-3d7c-49fb-8e41-26f5727ef9e8\") " pod="openshift-network-operator/network-operator-7bd846bfc4-x4w25" Mar 20 08:33:52.643257 master-0 kubenswrapper[4041]: I0320 08:33:52.643044 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3776fdb6-25a1-4e3d-bdd1-437c69af3a55-etc-ssl-certs\") pod \"cluster-version-operator-56d8475767-jtqd4\" (UID: \"3776fdb6-25a1-4e3d-bdd1-437c69af3a55\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-jtqd4" Mar 20 08:33:52.643257 master-0 kubenswrapper[4041]: I0320 08:33:52.643051 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/2a25b643-c08d-462f-80f4-8a4feb1e26e8-host-ca-bundle\") pod 
\"assisted-installer-controller-j6hxl\" (UID: \"2a25b643-c08d-462f-80f4-8a4feb1e26e8\") " pod="assisted-installer/assisted-installer-controller-j6hxl" Mar 20 08:33:52.643257 master-0 kubenswrapper[4041]: I0320 08:33:52.643111 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/2a25b643-c08d-462f-80f4-8a4feb1e26e8-host-ca-bundle\") pod \"assisted-installer-controller-j6hxl\" (UID: \"2a25b643-c08d-462f-80f4-8a4feb1e26e8\") " pod="assisted-installer/assisted-installer-controller-j6hxl" Mar 20 08:33:52.643257 master-0 kubenswrapper[4041]: I0320 08:33:52.643127 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3776fdb6-25a1-4e3d-bdd1-437c69af3a55-serving-cert\") pod \"cluster-version-operator-56d8475767-jtqd4\" (UID: \"3776fdb6-25a1-4e3d-bdd1-437c69af3a55\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-jtqd4" Mar 20 08:33:52.643257 master-0 kubenswrapper[4041]: I0320 08:33:52.643137 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/9817d1ec-3d7c-49fb-8e41-26f5727ef9e8-host-etc-kube\") pod \"network-operator-7bd846bfc4-x4w25\" (UID: \"9817d1ec-3d7c-49fb-8e41-26f5727ef9e8\") " pod="openshift-network-operator/network-operator-7bd846bfc4-x4w25" Mar 20 08:33:52.643715 master-0 kubenswrapper[4041]: I0320 08:33:52.643315 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swxwt\" (UniqueName: \"kubernetes.io/projected/9817d1ec-3d7c-49fb-8e41-26f5727ef9e8-kube-api-access-swxwt\") pod \"network-operator-7bd846bfc4-x4w25\" (UID: \"9817d1ec-3d7c-49fb-8e41-26f5727ef9e8\") " pod="openshift-network-operator/network-operator-7bd846bfc4-x4w25" Mar 20 08:33:52.643715 master-0 kubenswrapper[4041]: I0320 08:33:52.643348 4041 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/2a25b643-c08d-462f-80f4-8a4feb1e26e8-host-var-run-resolv-conf\") pod \"assisted-installer-controller-j6hxl\" (UID: \"2a25b643-c08d-462f-80f4-8a4feb1e26e8\") " pod="assisted-installer/assisted-installer-controller-j6hxl" Mar 20 08:33:52.643715 master-0 kubenswrapper[4041]: I0320 08:33:52.643383 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3776fdb6-25a1-4e3d-bdd1-437c69af3a55-kube-api-access\") pod \"cluster-version-operator-56d8475767-jtqd4\" (UID: \"3776fdb6-25a1-4e3d-bdd1-437c69af3a55\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-jtqd4" Mar 20 08:33:52.643715 master-0 kubenswrapper[4041]: E0320 08:33:52.643416 4041 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 20 08:33:52.643715 master-0 kubenswrapper[4041]: I0320 08:33:52.643474 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3776fdb6-25a1-4e3d-bdd1-437c69af3a55-service-ca\") pod \"cluster-version-operator-56d8475767-jtqd4\" (UID: \"3776fdb6-25a1-4e3d-bdd1-437c69af3a55\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-jtqd4" Mar 20 08:33:52.643715 master-0 kubenswrapper[4041]: I0320 08:33:52.643485 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/2a25b643-c08d-462f-80f4-8a4feb1e26e8-host-var-run-resolv-conf\") pod \"assisted-installer-controller-j6hxl\" (UID: \"2a25b643-c08d-462f-80f4-8a4feb1e26e8\") " pod="assisted-installer/assisted-installer-controller-j6hxl" Mar 20 08:33:52.643715 master-0 kubenswrapper[4041]: E0320 08:33:52.643579 4041 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/3776fdb6-25a1-4e3d-bdd1-437c69af3a55-serving-cert podName:3776fdb6-25a1-4e3d-bdd1-437c69af3a55 nodeName:}" failed. No retries permitted until 2026-03-20 08:33:53.143533284 +0000 UTC m=+40.393878839 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/3776fdb6-25a1-4e3d-bdd1-437c69af3a55-serving-cert") pod "cluster-version-operator-56d8475767-jtqd4" (UID: "3776fdb6-25a1-4e3d-bdd1-437c69af3a55") : secret "cluster-version-operator-serving-cert" not found Mar 20 08:33:52.644572 master-0 kubenswrapper[4041]: I0320 08:33:52.644516 4041 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Mar 20 08:33:52.645506 master-0 kubenswrapper[4041]: I0320 08:33:52.645408 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3776fdb6-25a1-4e3d-bdd1-437c69af3a55-service-ca\") pod \"cluster-version-operator-56d8475767-jtqd4\" (UID: \"3776fdb6-25a1-4e3d-bdd1-437c69af3a55\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-jtqd4" Mar 20 08:33:52.653296 master-0 kubenswrapper[4041]: I0320 08:33:52.653144 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9817d1ec-3d7c-49fb-8e41-26f5727ef9e8-metrics-tls\") pod \"network-operator-7bd846bfc4-x4w25\" (UID: \"9817d1ec-3d7c-49fb-8e41-26f5727ef9e8\") " pod="openshift-network-operator/network-operator-7bd846bfc4-x4w25" Mar 20 08:33:52.669765 master-0 kubenswrapper[4041]: I0320 08:33:52.669599 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swxwt\" (UniqueName: \"kubernetes.io/projected/9817d1ec-3d7c-49fb-8e41-26f5727ef9e8-kube-api-access-swxwt\") pod \"network-operator-7bd846bfc4-x4w25\" (UID: 
\"9817d1ec-3d7c-49fb-8e41-26f5727ef9e8\") " pod="openshift-network-operator/network-operator-7bd846bfc4-x4w25" Mar 20 08:33:52.677371 master-0 kubenswrapper[4041]: I0320 08:33:52.677252 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thdwl\" (UniqueName: \"kubernetes.io/projected/2a25b643-c08d-462f-80f4-8a4feb1e26e8-kube-api-access-thdwl\") pod \"assisted-installer-controller-j6hxl\" (UID: \"2a25b643-c08d-462f-80f4-8a4feb1e26e8\") " pod="assisted-installer/assisted-installer-controller-j6hxl" Mar 20 08:33:52.680130 master-0 kubenswrapper[4041]: I0320 08:33:52.680048 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3776fdb6-25a1-4e3d-bdd1-437c69af3a55-kube-api-access\") pod \"cluster-version-operator-56d8475767-jtqd4\" (UID: \"3776fdb6-25a1-4e3d-bdd1-437c69af3a55\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-jtqd4" Mar 20 08:33:52.713544 master-0 kubenswrapper[4041]: I0320 08:33:52.713418 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bd846bfc4-x4w25" Mar 20 08:33:52.758939 master-0 kubenswrapper[4041]: I0320 08:33:52.758820 4041 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="assisted-installer/assisted-installer-controller-j6hxl" Mar 20 08:33:52.772096 master-0 kubenswrapper[4041]: W0320 08:33:52.772016 4041 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a25b643_c08d_462f_80f4_8a4feb1e26e8.slice/crio-8e65cad05e20b0abbcb49f3fc98be5a4c3f6421a23b1da41d9039f8ff62b3093 WatchSource:0}: Error finding container 8e65cad05e20b0abbcb49f3fc98be5a4c3f6421a23b1da41d9039f8ff62b3093: Status 404 returned error can't find the container with id 8e65cad05e20b0abbcb49f3fc98be5a4c3f6421a23b1da41d9039f8ff62b3093 Mar 20 08:33:53.147942 master-0 kubenswrapper[4041]: I0320 08:33:53.147861 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3776fdb6-25a1-4e3d-bdd1-437c69af3a55-serving-cert\") pod \"cluster-version-operator-56d8475767-jtqd4\" (UID: \"3776fdb6-25a1-4e3d-bdd1-437c69af3a55\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-jtqd4" Mar 20 08:33:53.148150 master-0 kubenswrapper[4041]: E0320 08:33:53.148057 4041 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 20 08:33:53.148150 master-0 kubenswrapper[4041]: E0320 08:33:53.148134 4041 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3776fdb6-25a1-4e3d-bdd1-437c69af3a55-serving-cert podName:3776fdb6-25a1-4e3d-bdd1-437c69af3a55 nodeName:}" failed. No retries permitted until 2026-03-20 08:33:54.148109674 +0000 UTC m=+41.398455219 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/3776fdb6-25a1-4e3d-bdd1-437c69af3a55-serving-cert") pod "cluster-version-operator-56d8475767-jtqd4" (UID: "3776fdb6-25a1-4e3d-bdd1-437c69af3a55") : secret "cluster-version-operator-serving-cert" not found Mar 20 08:33:53.698957 master-0 kubenswrapper[4041]: I0320 08:33:53.698865 4041 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-j6hxl" event={"ID":"2a25b643-c08d-462f-80f4-8a4feb1e26e8","Type":"ContainerStarted","Data":"8e65cad05e20b0abbcb49f3fc98be5a4c3f6421a23b1da41d9039f8ff62b3093"} Mar 20 08:33:53.700473 master-0 kubenswrapper[4041]: I0320 08:33:53.700420 4041 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bd846bfc4-x4w25" event={"ID":"9817d1ec-3d7c-49fb-8e41-26f5727ef9e8","Type":"ContainerStarted","Data":"f50b5162b61414bd7ea44a7ec549d8b7fce7a639d564096b29f4d95c071c3604"} Mar 20 08:33:54.156795 master-0 kubenswrapper[4041]: I0320 08:33:54.156713 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3776fdb6-25a1-4e3d-bdd1-437c69af3a55-serving-cert\") pod \"cluster-version-operator-56d8475767-jtqd4\" (UID: \"3776fdb6-25a1-4e3d-bdd1-437c69af3a55\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-jtqd4" Mar 20 08:33:54.157110 master-0 kubenswrapper[4041]: E0320 08:33:54.156927 4041 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 20 08:33:54.157110 master-0 kubenswrapper[4041]: E0320 08:33:54.157011 4041 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3776fdb6-25a1-4e3d-bdd1-437c69af3a55-serving-cert podName:3776fdb6-25a1-4e3d-bdd1-437c69af3a55 nodeName:}" failed. 
No retries permitted until 2026-03-20 08:33:56.156983317 +0000 UTC m=+43.407328812 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/3776fdb6-25a1-4e3d-bdd1-437c69af3a55-serving-cert") pod "cluster-version-operator-56d8475767-jtqd4" (UID: "3776fdb6-25a1-4e3d-bdd1-437c69af3a55") : secret "cluster-version-operator-serving-cert" not found Mar 20 08:33:56.170711 master-0 kubenswrapper[4041]: I0320 08:33:56.170188 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3776fdb6-25a1-4e3d-bdd1-437c69af3a55-serving-cert\") pod \"cluster-version-operator-56d8475767-jtqd4\" (UID: \"3776fdb6-25a1-4e3d-bdd1-437c69af3a55\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-jtqd4" Mar 20 08:33:56.170711 master-0 kubenswrapper[4041]: E0320 08:33:56.170488 4041 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 20 08:33:56.171298 master-0 kubenswrapper[4041]: E0320 08:33:56.170802 4041 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3776fdb6-25a1-4e3d-bdd1-437c69af3a55-serving-cert podName:3776fdb6-25a1-4e3d-bdd1-437c69af3a55 nodeName:}" failed. No retries permitted until 2026-03-20 08:34:00.170773821 +0000 UTC m=+47.421119366 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/3776fdb6-25a1-4e3d-bdd1-437c69af3a55-serving-cert") pod "cluster-version-operator-56d8475767-jtqd4" (UID: "3776fdb6-25a1-4e3d-bdd1-437c69af3a55") : secret "cluster-version-operator-serving-cert" not found Mar 20 08:33:57.598918 master-0 kubenswrapper[4041]: I0320 08:33:57.598856 4041 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 20 08:33:59.718186 master-0 kubenswrapper[4041]: I0320 08:33:59.718105 4041 generic.go:334] "Generic (PLEG): container finished" podID="2a25b643-c08d-462f-80f4-8a4feb1e26e8" containerID="9c4160ccfce4a1ed7d4a8b39bc1968845b7b8a2ab8792b3e93cfa7765e5fa689" exitCode=0 Mar 20 08:33:59.733350 master-0 kubenswrapper[4041]: I0320 08:33:59.718178 4041 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-j6hxl" event={"ID":"2a25b643-c08d-462f-80f4-8a4feb1e26e8","Type":"ContainerDied","Data":"9c4160ccfce4a1ed7d4a8b39bc1968845b7b8a2ab8792b3e93cfa7765e5fa689"} Mar 20 08:33:59.733350 master-0 kubenswrapper[4041]: I0320 08:33:59.720193 4041 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bd846bfc4-x4w25" event={"ID":"9817d1ec-3d7c-49fb-8e41-26f5727ef9e8","Type":"ContainerStarted","Data":"51d5ff19316ba50d65a137d07edaf8d44d3c66d7ea87669b610c77e6e7a5026d"} Mar 20 08:34:00.042122 master-0 kubenswrapper[4041]: I0320 08:34:00.041945 4041 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-operator/network-operator-7bd846bfc4-x4w25" podStartSLOduration=4.028080875 podStartE2EDuration="10.041915827s" podCreationTimestamp="2026-03-20 08:33:50 +0000 UTC" firstStartedPulling="2026-03-20 08:33:52.733644324 +0000 UTC m=+39.983989849" lastFinishedPulling="2026-03-20 08:33:58.747479276 +0000 UTC m=+45.997824801" observedRunningTime="2026-03-20 08:34:00.041775014 +0000 UTC 
m=+47.292120609" watchObservedRunningTime="2026-03-20 08:34:00.041915827 +0000 UTC m=+47.292261372" Mar 20 08:34:00.223395 master-0 kubenswrapper[4041]: I0320 08:34:00.223282 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3776fdb6-25a1-4e3d-bdd1-437c69af3a55-serving-cert\") pod \"cluster-version-operator-56d8475767-jtqd4\" (UID: \"3776fdb6-25a1-4e3d-bdd1-437c69af3a55\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-jtqd4" Mar 20 08:34:00.223660 master-0 kubenswrapper[4041]: E0320 08:34:00.223423 4041 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 20 08:34:00.223660 master-0 kubenswrapper[4041]: E0320 08:34:00.223507 4041 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3776fdb6-25a1-4e3d-bdd1-437c69af3a55-serving-cert podName:3776fdb6-25a1-4e3d-bdd1-437c69af3a55 nodeName:}" failed. No retries permitted until 2026-03-20 08:34:08.223482134 +0000 UTC m=+55.473827659 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/3776fdb6-25a1-4e3d-bdd1-437c69af3a55-serving-cert") pod "cluster-version-operator-56d8475767-jtqd4" (UID: "3776fdb6-25a1-4e3d-bdd1-437c69af3a55") : secret "cluster-version-operator-serving-cert" not found Mar 20 08:34:00.746445 master-0 kubenswrapper[4041]: I0320 08:34:00.746376 4041 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="assisted-installer/assisted-installer-controller-j6hxl" Mar 20 08:34:00.933674 master-0 kubenswrapper[4041]: I0320 08:34:00.933520 4041 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/2a25b643-c08d-462f-80f4-8a4feb1e26e8-sno-bootstrap-files\") pod \"2a25b643-c08d-462f-80f4-8a4feb1e26e8\" (UID: \"2a25b643-c08d-462f-80f4-8a4feb1e26e8\") " Mar 20 08:34:00.933674 master-0 kubenswrapper[4041]: I0320 08:34:00.933598 4041 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thdwl\" (UniqueName: \"kubernetes.io/projected/2a25b643-c08d-462f-80f4-8a4feb1e26e8-kube-api-access-thdwl\") pod \"2a25b643-c08d-462f-80f4-8a4feb1e26e8\" (UID: \"2a25b643-c08d-462f-80f4-8a4feb1e26e8\") " Mar 20 08:34:00.933674 master-0 kubenswrapper[4041]: I0320 08:34:00.933635 4041 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/2a25b643-c08d-462f-80f4-8a4feb1e26e8-host-ca-bundle\") pod \"2a25b643-c08d-462f-80f4-8a4feb1e26e8\" (UID: \"2a25b643-c08d-462f-80f4-8a4feb1e26e8\") " Mar 20 08:34:00.933674 master-0 kubenswrapper[4041]: I0320 08:34:00.933672 4041 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/2a25b643-c08d-462f-80f4-8a4feb1e26e8-host-resolv-conf\") pod \"2a25b643-c08d-462f-80f4-8a4feb1e26e8\" (UID: \"2a25b643-c08d-462f-80f4-8a4feb1e26e8\") " Mar 20 08:34:00.934077 master-0 kubenswrapper[4041]: I0320 08:34:00.933683 4041 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a25b643-c08d-462f-80f4-8a4feb1e26e8-sno-bootstrap-files" (OuterVolumeSpecName: "sno-bootstrap-files") pod "2a25b643-c08d-462f-80f4-8a4feb1e26e8" (UID: "2a25b643-c08d-462f-80f4-8a4feb1e26e8"). InnerVolumeSpecName "sno-bootstrap-files". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:34:00.934077 master-0 kubenswrapper[4041]: I0320 08:34:00.933712 4041 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/2a25b643-c08d-462f-80f4-8a4feb1e26e8-host-var-run-resolv-conf\") pod \"2a25b643-c08d-462f-80f4-8a4feb1e26e8\" (UID: \"2a25b643-c08d-462f-80f4-8a4feb1e26e8\") " Mar 20 08:34:00.934077 master-0 kubenswrapper[4041]: I0320 08:34:00.933782 4041 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a25b643-c08d-462f-80f4-8a4feb1e26e8-host-var-run-resolv-conf" (OuterVolumeSpecName: "host-var-run-resolv-conf") pod "2a25b643-c08d-462f-80f4-8a4feb1e26e8" (UID: "2a25b643-c08d-462f-80f4-8a4feb1e26e8"). InnerVolumeSpecName "host-var-run-resolv-conf". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:34:00.934077 master-0 kubenswrapper[4041]: I0320 08:34:00.933810 4041 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a25b643-c08d-462f-80f4-8a4feb1e26e8-host-ca-bundle" (OuterVolumeSpecName: "host-ca-bundle") pod "2a25b643-c08d-462f-80f4-8a4feb1e26e8" (UID: "2a25b643-c08d-462f-80f4-8a4feb1e26e8"). InnerVolumeSpecName "host-ca-bundle". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:34:00.934077 master-0 kubenswrapper[4041]: I0320 08:34:00.933833 4041 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a25b643-c08d-462f-80f4-8a4feb1e26e8-host-resolv-conf" (OuterVolumeSpecName: "host-resolv-conf") pod "2a25b643-c08d-462f-80f4-8a4feb1e26e8" (UID: "2a25b643-c08d-462f-80f4-8a4feb1e26e8"). InnerVolumeSpecName "host-resolv-conf". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:34:00.934077 master-0 kubenswrapper[4041]: I0320 08:34:00.933895 4041 reconciler_common.go:293] "Volume detached for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/2a25b643-c08d-462f-80f4-8a4feb1e26e8-host-resolv-conf\") on node \"master-0\" DevicePath \"\"" Mar 20 08:34:00.934077 master-0 kubenswrapper[4041]: I0320 08:34:00.933922 4041 reconciler_common.go:293] "Volume detached for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/2a25b643-c08d-462f-80f4-8a4feb1e26e8-host-var-run-resolv-conf\") on node \"master-0\" DevicePath \"\"" Mar 20 08:34:00.934077 master-0 kubenswrapper[4041]: I0320 08:34:00.933949 4041 reconciler_common.go:293] "Volume detached for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/2a25b643-c08d-462f-80f4-8a4feb1e26e8-sno-bootstrap-files\") on node \"master-0\" DevicePath \"\"" Mar 20 08:34:00.934077 master-0 kubenswrapper[4041]: I0320 08:34:00.933966 4041 reconciler_common.go:293] "Volume detached for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/2a25b643-c08d-462f-80f4-8a4feb1e26e8-host-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 20 08:34:00.938721 master-0 kubenswrapper[4041]: I0320 08:34:00.938651 4041 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a25b643-c08d-462f-80f4-8a4feb1e26e8-kube-api-access-thdwl" (OuterVolumeSpecName: "kube-api-access-thdwl") pod "2a25b643-c08d-462f-80f4-8a4feb1e26e8" (UID: "2a25b643-c08d-462f-80f4-8a4feb1e26e8"). InnerVolumeSpecName "kube-api-access-thdwl". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:34:01.035440 master-0 kubenswrapper[4041]: I0320 08:34:01.035308 4041 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-thdwl\" (UniqueName: \"kubernetes.io/projected/2a25b643-c08d-462f-80f4-8a4feb1e26e8-kube-api-access-thdwl\") on node \"master-0\" DevicePath \"\""
Mar 20 08:34:01.709869 master-0 kubenswrapper[4041]: I0320 08:34:01.709778 4041 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/mtu-prober-6qnrt"]
Mar 20 08:34:01.710158 master-0 kubenswrapper[4041]: E0320 08:34:01.709925 4041 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a25b643-c08d-462f-80f4-8a4feb1e26e8" containerName="assisted-installer-controller"
Mar 20 08:34:01.710158 master-0 kubenswrapper[4041]: I0320 08:34:01.709950 4041 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a25b643-c08d-462f-80f4-8a4feb1e26e8" containerName="assisted-installer-controller"
Mar 20 08:34:01.710158 master-0 kubenswrapper[4041]: I0320 08:34:01.709997 4041 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a25b643-c08d-462f-80f4-8a4feb1e26e8" containerName="assisted-installer-controller"
Mar 20 08:34:01.710412 master-0 kubenswrapper[4041]: I0320 08:34:01.710335 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/mtu-prober-6qnrt"
Mar 20 08:34:01.728865 master-0 kubenswrapper[4041]: I0320 08:34:01.728781 4041 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-j6hxl" event={"ID":"2a25b643-c08d-462f-80f4-8a4feb1e26e8","Type":"ContainerDied","Data":"8e65cad05e20b0abbcb49f3fc98be5a4c3f6421a23b1da41d9039f8ff62b3093"}
Mar 20 08:34:01.728865 master-0 kubenswrapper[4041]: I0320 08:34:01.728843 4041 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e65cad05e20b0abbcb49f3fc98be5a4c3f6421a23b1da41d9039f8ff62b3093"
Mar 20 08:34:01.729154 master-0 kubenswrapper[4041]: I0320 08:34:01.728906 4041 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-j6hxl"
Mar 20 08:34:01.840945 master-0 kubenswrapper[4041]: I0320 08:34:01.840764 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6njm\" (UniqueName: \"kubernetes.io/projected/31e4700c-9389-427e-95ef-187f80c9e607-kube-api-access-k6njm\") pod \"mtu-prober-6qnrt\" (UID: \"31e4700c-9389-427e-95ef-187f80c9e607\") " pod="openshift-network-operator/mtu-prober-6qnrt"
Mar 20 08:34:01.941468 master-0 kubenswrapper[4041]: I0320 08:34:01.941249 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6njm\" (UniqueName: \"kubernetes.io/projected/31e4700c-9389-427e-95ef-187f80c9e607-kube-api-access-k6njm\") pod \"mtu-prober-6qnrt\" (UID: \"31e4700c-9389-427e-95ef-187f80c9e607\") " pod="openshift-network-operator/mtu-prober-6qnrt"
Mar 20 08:34:01.964635 master-0 kubenswrapper[4041]: I0320 08:34:01.964546 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6njm\" (UniqueName: \"kubernetes.io/projected/31e4700c-9389-427e-95ef-187f80c9e607-kube-api-access-k6njm\") pod \"mtu-prober-6qnrt\" (UID: \"31e4700c-9389-427e-95ef-187f80c9e607\") " pod="openshift-network-operator/mtu-prober-6qnrt"
Mar 20 08:34:02.032502 master-0 kubenswrapper[4041]: I0320 08:34:02.032393 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/mtu-prober-6qnrt"
Mar 20 08:34:02.052399 master-0 kubenswrapper[4041]: W0320 08:34:02.052346 4041 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod31e4700c_9389_427e_95ef_187f80c9e607.slice/crio-8e8b4228a54d1b12d0376b112666ad3f67faf746c66c4939b3583b4e7a339cfe WatchSource:0}: Error finding container 8e8b4228a54d1b12d0376b112666ad3f67faf746c66c4939b3583b4e7a339cfe: Status 404 returned error can't find the container with id 8e8b4228a54d1b12d0376b112666ad3f67faf746c66c4939b3583b4e7a339cfe
Mar 20 08:34:02.551458 master-0 kubenswrapper[4041]: I0320 08:34:02.551293 4041 scope.go:117] "RemoveContainer" containerID="309e4777b97bbe0d7fb41e63077d3bc7d068d36eee7b9e7931a0c0261bdb0bbf"
Mar 20 08:34:02.551749 master-0 kubenswrapper[4041]: I0320 08:34:02.551687 4041 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"]
Mar 20 08:34:02.734071 master-0 kubenswrapper[4041]: I0320 08:34:02.734020 4041 generic.go:334] "Generic (PLEG): container finished" podID="31e4700c-9389-427e-95ef-187f80c9e607" containerID="c7aa165c0986788c15e1247a68719a95f704ec935f16e843c43124bc75fd9639" exitCode=0
Mar 20 08:34:02.734161 master-0 kubenswrapper[4041]: I0320 08:34:02.734082 4041 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-6qnrt" event={"ID":"31e4700c-9389-427e-95ef-187f80c9e607","Type":"ContainerDied","Data":"c7aa165c0986788c15e1247a68719a95f704ec935f16e843c43124bc75fd9639"}
Mar 20 08:34:02.734161 master-0 kubenswrapper[4041]: I0320 08:34:02.734117 4041 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-6qnrt" event={"ID":"31e4700c-9389-427e-95ef-187f80c9e607","Type":"ContainerStarted","Data":"8e8b4228a54d1b12d0376b112666ad3f67faf746c66c4939b3583b4e7a339cfe"}
Mar 20 08:34:03.740362 master-0 kubenswrapper[4041]: I0320 08:34:03.740209 4041 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/2.log"
Mar 20 08:34:03.742181 master-0 kubenswrapper[4041]: I0320 08:34:03.742092 4041 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerStarted","Data":"7316fd0f0f8a186ef4fb758bcbe38162f541b908e7728b02280dc9e29c6d0538"}
Mar 20 08:34:03.759036 master-0 kubenswrapper[4041]: I0320 08:34:03.758964 4041 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/mtu-prober-6qnrt"
Mar 20 08:34:03.856334 master-0 kubenswrapper[4041]: I0320 08:34:03.856206 4041 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podStartSLOduration=1.8561758149999998 podStartE2EDuration="1.856175815s" podCreationTimestamp="2026-03-20 08:34:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:34:03.829505465 +0000 UTC m=+51.079851040" watchObservedRunningTime="2026-03-20 08:34:03.856175815 +0000 UTC m=+51.106521350"
Mar 20 08:34:03.958009 master-0 kubenswrapper[4041]: I0320 08:34:03.957901 4041 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6njm\" (UniqueName: \"kubernetes.io/projected/31e4700c-9389-427e-95ef-187f80c9e607-kube-api-access-k6njm\") pod \"31e4700c-9389-427e-95ef-187f80c9e607\" (UID: \"31e4700c-9389-427e-95ef-187f80c9e607\") "
Mar 20 08:34:03.963418 master-0 kubenswrapper[4041]: I0320 08:34:03.963342 4041 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31e4700c-9389-427e-95ef-187f80c9e607-kube-api-access-k6njm" (OuterVolumeSpecName: "kube-api-access-k6njm") pod "31e4700c-9389-427e-95ef-187f80c9e607" (UID: "31e4700c-9389-427e-95ef-187f80c9e607"). InnerVolumeSpecName "kube-api-access-k6njm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:34:04.058613 master-0 kubenswrapper[4041]: I0320 08:34:04.058432 4041 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6njm\" (UniqueName: \"kubernetes.io/projected/31e4700c-9389-427e-95ef-187f80c9e607-kube-api-access-k6njm\") on node \"master-0\" DevicePath \"\""
Mar 20 08:34:04.747822 master-0 kubenswrapper[4041]: I0320 08:34:04.747739 4041 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-6qnrt" event={"ID":"31e4700c-9389-427e-95ef-187f80c9e607","Type":"ContainerDied","Data":"8e8b4228a54d1b12d0376b112666ad3f67faf746c66c4939b3583b4e7a339cfe"}
Mar 20 08:34:04.747822 master-0 kubenswrapper[4041]: I0320 08:34:04.747771 4041 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/mtu-prober-6qnrt"
Mar 20 08:34:04.747822 master-0 kubenswrapper[4041]: I0320 08:34:04.747805 4041 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e8b4228a54d1b12d0376b112666ad3f67faf746c66c4939b3583b4e7a339cfe"
Mar 20 08:34:06.722235 master-0 kubenswrapper[4041]: I0320 08:34:06.722179 4041 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-network-operator/mtu-prober-6qnrt"]
Mar 20 08:34:06.728042 master-0 kubenswrapper[4041]: I0320 08:34:06.727980 4041 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-network-operator/mtu-prober-6qnrt"]
Mar 20 08:34:07.541054 master-0 kubenswrapper[4041]: I0320 08:34:07.540595 4041 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31e4700c-9389-427e-95ef-187f80c9e607" path="/var/lib/kubelet/pods/31e4700c-9389-427e-95ef-187f80c9e607/volumes"
Mar 20 08:34:08.285566 master-0 kubenswrapper[4041]: I0320 08:34:08.285487 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3776fdb6-25a1-4e3d-bdd1-437c69af3a55-serving-cert\") pod \"cluster-version-operator-56d8475767-jtqd4\" (UID: \"3776fdb6-25a1-4e3d-bdd1-437c69af3a55\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-jtqd4"
Mar 20 08:34:08.286670 master-0 kubenswrapper[4041]: E0320 08:34:08.285694 4041 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 20 08:34:08.286670 master-0 kubenswrapper[4041]: E0320 08:34:08.285796 4041 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3776fdb6-25a1-4e3d-bdd1-437c69af3a55-serving-cert podName:3776fdb6-25a1-4e3d-bdd1-437c69af3a55 nodeName:}" failed. No retries permitted until 2026-03-20 08:34:24.285763469 +0000 UTC m=+71.536109004 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/3776fdb6-25a1-4e3d-bdd1-437c69af3a55-serving-cert") pod "cluster-version-operator-56d8475767-jtqd4" (UID: "3776fdb6-25a1-4e3d-bdd1-437c69af3a55") : secret "cluster-version-operator-serving-cert" not found
Mar 20 08:34:11.608768 master-0 kubenswrapper[4041]: I0320 08:34:11.608627 4041 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-pxqwj"]
Mar 20 08:34:11.609861 master-0 kubenswrapper[4041]: E0320 08:34:11.608885 4041 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31e4700c-9389-427e-95ef-187f80c9e607" containerName="prober"
Mar 20 08:34:11.609861 master-0 kubenswrapper[4041]: I0320 08:34:11.608909 4041 state_mem.go:107] "Deleted CPUSet assignment" podUID="31e4700c-9389-427e-95ef-187f80c9e607" containerName="prober"
Mar 20 08:34:11.609861 master-0 kubenswrapper[4041]: I0320 08:34:11.608987 4041 memory_manager.go:354] "RemoveStaleState removing state" podUID="31e4700c-9389-427e-95ef-187f80c9e607" containerName="prober"
Mar 20 08:34:11.610312 master-0 kubenswrapper[4041]: I0320 08:34:11.610222 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-pxqwj"
Mar 20 08:34:11.614132 master-0 kubenswrapper[4041]: I0320 08:34:11.614090 4041 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Mar 20 08:34:11.614657 master-0 kubenswrapper[4041]: I0320 08:34:11.614634 4041 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Mar 20 08:34:11.616429 master-0 kubenswrapper[4041]: I0320 08:34:11.616371 4041 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Mar 20 08:34:11.616519 master-0 kubenswrapper[4041]: I0320 08:34:11.616443 4041 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Mar 20 08:34:11.715621 master-0 kubenswrapper[4041]: I0320 08:34:11.715469 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5hsj\" (UniqueName: \"kubernetes.io/projected/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-kube-api-access-j5hsj\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj"
Mar 20 08:34:11.715892 master-0 kubenswrapper[4041]: I0320 08:34:11.715696 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-multus-cni-dir\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj"
Mar 20 08:34:11.715941 master-0 kubenswrapper[4041]: I0320 08:34:11.715887 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-hostroot\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj"
Mar 20 08:34:11.715983 master-0 kubenswrapper[4041]: I0320 08:34:11.715947 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-cni-binary-copy\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj"
Mar 20 08:34:11.716025 master-0 kubenswrapper[4041]: I0320 08:34:11.715980 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-system-cni-dir\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj"
Mar 20 08:34:11.716133 master-0 kubenswrapper[4041]: I0320 08:34:11.716087 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-host-run-netns\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj"
Mar 20 08:34:11.716189 master-0 kubenswrapper[4041]: I0320 08:34:11.716136 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-host-var-lib-cni-bin\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj"
Mar 20 08:34:11.716189 master-0 kubenswrapper[4041]: I0320 08:34:11.716177 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-etc-kubernetes\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj"
Mar 20 08:34:11.716335 master-0 kubenswrapper[4041]: I0320 08:34:11.716210 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-host-var-lib-cni-multus\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj"
Mar 20 08:34:11.716392 master-0 kubenswrapper[4041]: I0320 08:34:11.716350 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-multus-conf-dir\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj"
Mar 20 08:34:11.716436 master-0 kubenswrapper[4041]: I0320 08:34:11.716387 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-host-run-k8s-cni-cncf-io\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj"
Mar 20 08:34:11.716436 master-0 kubenswrapper[4041]: I0320 08:34:11.716421 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-multus-daemon-config\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj"
Mar 20 08:34:11.716517 master-0 kubenswrapper[4041]: I0320 08:34:11.716476 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-cnibin\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj"
Mar 20 08:34:11.716694 master-0 kubenswrapper[4041]: I0320 08:34:11.716617 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-os-release\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj"
Mar 20 08:34:11.716743 master-0 kubenswrapper[4041]: I0320 08:34:11.716713 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-multus-socket-dir-parent\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj"
Mar 20 08:34:11.716787 master-0 kubenswrapper[4041]: I0320 08:34:11.716752 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-host-var-lib-kubelet\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj"
Mar 20 08:34:11.716834 master-0 kubenswrapper[4041]: I0320 08:34:11.716792 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-host-run-multus-certs\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj"
Mar 20 08:34:11.811399 master-0 kubenswrapper[4041]: I0320 08:34:11.811245 4041 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-x7vrg"]
Mar 20 08:34:11.812235 master-0 kubenswrapper[4041]: I0320 08:34:11.812174 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-x7vrg"
Mar 20 08:34:11.815593 master-0 kubenswrapper[4041]: I0320 08:34:11.815526 4041 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-flatfile-config"
Mar 20 08:34:11.815895 master-0 kubenswrapper[4041]: I0320 08:34:11.815526 4041 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Mar 20 08:34:11.817520 master-0 kubenswrapper[4041]: I0320 08:34:11.817423 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-host-run-multus-certs\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj"
Mar 20 08:34:11.817520 master-0 kubenswrapper[4041]: I0320 08:34:11.817509 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5hsj\" (UniqueName: \"kubernetes.io/projected/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-kube-api-access-j5hsj\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj"
Mar 20 08:34:11.817759 master-0 kubenswrapper[4041]: I0320 08:34:11.817533 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-host-run-multus-certs\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj"
Mar 20 08:34:11.817759 master-0 kubenswrapper[4041]: I0320 08:34:11.817587 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-hostroot\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj"
Mar 20 08:34:11.817759 master-0 kubenswrapper[4041]: I0320 08:34:11.817632 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/22ff82cf-0d7d-4955-9b7c-97757acbc021-system-cni-dir\") pod \"multus-additional-cni-plugins-x7vrg\" (UID: \"22ff82cf-0d7d-4955-9b7c-97757acbc021\") " pod="openshift-multus/multus-additional-cni-plugins-x7vrg"
Mar 20 08:34:11.817759 master-0 kubenswrapper[4041]: I0320 08:34:11.817672 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/22ff82cf-0d7d-4955-9b7c-97757acbc021-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-x7vrg\" (UID: \"22ff82cf-0d7d-4955-9b7c-97757acbc021\") " pod="openshift-multus/multus-additional-cni-plugins-x7vrg"
Mar 20 08:34:11.817759 master-0 kubenswrapper[4041]: I0320 08:34:11.817708 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-multus-cni-dir\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj"
Mar 20 08:34:11.818157 master-0 kubenswrapper[4041]: I0320 08:34:11.817863 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/22ff82cf-0d7d-4955-9b7c-97757acbc021-cnibin\") pod \"multus-additional-cni-plugins-x7vrg\" (UID: \"22ff82cf-0d7d-4955-9b7c-97757acbc021\") " pod="openshift-multus/multus-additional-cni-plugins-x7vrg"
Mar 20 08:34:11.818157 master-0 kubenswrapper[4041]: I0320 08:34:11.817904 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/22ff82cf-0d7d-4955-9b7c-97757acbc021-tuning-conf-dir\") pod \"multus-additional-cni-plugins-x7vrg\" (UID: \"22ff82cf-0d7d-4955-9b7c-97757acbc021\") " pod="openshift-multus/multus-additional-cni-plugins-x7vrg"
Mar 20 08:34:11.818157 master-0 kubenswrapper[4041]: I0320 08:34:11.817945 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sglvd\" (UniqueName: \"kubernetes.io/projected/22ff82cf-0d7d-4955-9b7c-97757acbc021-kube-api-access-sglvd\") pod \"multus-additional-cni-plugins-x7vrg\" (UID: \"22ff82cf-0d7d-4955-9b7c-97757acbc021\") " pod="openshift-multus/multus-additional-cni-plugins-x7vrg"
Mar 20 08:34:11.818157 master-0 kubenswrapper[4041]: I0320 08:34:11.817983 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-cni-binary-copy\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj"
Mar 20 08:34:11.818157 master-0 kubenswrapper[4041]: I0320 08:34:11.818085 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-multus-cni-dir\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj"
Mar 20 08:34:11.818593 master-0 kubenswrapper[4041]: I0320 08:34:11.818160 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-system-cni-dir\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj"
Mar 20 08:34:11.818593 master-0 kubenswrapper[4041]: I0320 08:34:11.818244 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-host-run-netns\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj"
Mar 20 08:34:11.818593 master-0 kubenswrapper[4041]: I0320 08:34:11.818327 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-host-var-lib-cni-bin\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj"
Mar 20 08:34:11.818593 master-0 kubenswrapper[4041]: I0320 08:34:11.818387 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/22ff82cf-0d7d-4955-9b7c-97757acbc021-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-x7vrg\" (UID: \"22ff82cf-0d7d-4955-9b7c-97757acbc021\") " pod="openshift-multus/multus-additional-cni-plugins-x7vrg"
Mar 20 08:34:11.818593 master-0 kubenswrapper[4041]: I0320 08:34:11.818438 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-host-var-lib-cni-multus\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj"
Mar 20 08:34:11.818593 master-0 kubenswrapper[4041]: I0320 08:34:11.818486 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-multus-conf-dir\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj"
Mar 20 08:34:11.818593 master-0 kubenswrapper[4041]: I0320 08:34:11.818529 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-etc-kubernetes\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj"
Mar 20 08:34:11.818593 master-0 kubenswrapper[4041]: I0320 08:34:11.818572 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-host-run-k8s-cni-cncf-io\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj"
Mar 20 08:34:11.819181 master-0 kubenswrapper[4041]: I0320 08:34:11.818617 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-multus-daemon-config\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj"
Mar 20 08:34:11.819181 master-0 kubenswrapper[4041]: I0320 08:34:11.818664 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/22ff82cf-0d7d-4955-9b7c-97757acbc021-cni-binary-copy\") pod \"multus-additional-cni-plugins-x7vrg\" (UID: \"22ff82cf-0d7d-4955-9b7c-97757acbc021\") " pod="openshift-multus/multus-additional-cni-plugins-x7vrg"
Mar 20 08:34:11.819181 master-0 kubenswrapper[4041]: I0320 08:34:11.818713 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-os-release\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj"
Mar 20 08:34:11.819181 master-0 kubenswrapper[4041]: I0320 08:34:11.818757 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-multus-socket-dir-parent\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj"
Mar 20 08:34:11.819181 master-0 kubenswrapper[4041]: I0320 08:34:11.818806 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-host-var-lib-kubelet\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj"
Mar 20 08:34:11.819181 master-0 kubenswrapper[4041]: I0320 08:34:11.818854 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/22ff82cf-0d7d-4955-9b7c-97757acbc021-os-release\") pod \"multus-additional-cni-plugins-x7vrg\" (UID: \"22ff82cf-0d7d-4955-9b7c-97757acbc021\") " pod="openshift-multus/multus-additional-cni-plugins-x7vrg"
Mar 20 08:34:11.819181 master-0 kubenswrapper[4041]: I0320 08:34:11.818903 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-cnibin\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj"
Mar 20 08:34:11.819181 master-0 kubenswrapper[4041]: I0320 08:34:11.819020 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-cnibin\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj"
Mar 20 08:34:11.819181 master-0 kubenswrapper[4041]: I0320 08:34:11.819087 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-hostroot\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj"
Mar 20 08:34:11.819181 master-0 kubenswrapper[4041]: I0320 08:34:11.819160 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-cni-binary-copy\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj"
Mar 20 08:34:11.819181 master-0 kubenswrapper[4041]: I0320 08:34:11.819187 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-system-cni-dir\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj"
Mar 20 08:34:11.821791 master-0 kubenswrapper[4041]: I0320 08:34:11.821403 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-multus-conf-dir\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj"
Mar 20 08:34:11.821791 master-0 kubenswrapper[4041]: I0320 08:34:11.821491 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-host-run-k8s-cni-cncf-io\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj"
Mar 20 08:34:11.821791 master-0 kubenswrapper[4041]: I0320 08:34:11.821605 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-multus-socket-dir-parent\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj"
Mar 20 08:34:11.821791 master-0 kubenswrapper[4041]: I0320 08:34:11.821729 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-os-release\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj"
Mar 20 08:34:11.822110 master-0 kubenswrapper[4041]: I0320 08:34:11.821840 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-host-run-netns\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj"
Mar 20 08:34:11.822110 master-0 kubenswrapper[4041]: I0320 08:34:11.821898 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-host-var-lib-cni-bin\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj"
Mar 20 08:34:11.822406 master-0 kubenswrapper[4041]: I0320 08:34:11.822322 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-etc-kubernetes\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj"
Mar 20 08:34:11.822406 master-0 kubenswrapper[4041]: I0320 08:34:11.822374 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-host-var-lib-cni-multus\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj"
Mar 20 08:34:11.823107 master-0 kubenswrapper[4041]: I0320 08:34:11.823034 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-multus-daemon-config\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj"
Mar 20 08:34:11.823231 master-0 kubenswrapper[4041]: I0320 08:34:11.821778 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-host-var-lib-kubelet\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj"
Mar 20 08:34:11.851809 master-0 kubenswrapper[4041]: I0320 08:34:11.851704 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5hsj\" (UniqueName: \"kubernetes.io/projected/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-kube-api-access-j5hsj\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj"
Mar 20 08:34:11.919972 master-0 kubenswrapper[4041]: I0320 08:34:11.919809 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/22ff82cf-0d7d-4955-9b7c-97757acbc021-tuning-conf-dir\") pod \"multus-additional-cni-plugins-x7vrg\" (UID: \"22ff82cf-0d7d-4955-9b7c-97757acbc021\") " pod="openshift-multus/multus-additional-cni-plugins-x7vrg"
Mar 20 08:34:11.919972 master-0 kubenswrapper[4041]: I0320 08:34:11.919885 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sglvd\" (UniqueName: \"kubernetes.io/projected/22ff82cf-0d7d-4955-9b7c-97757acbc021-kube-api-access-sglvd\") pod \"multus-additional-cni-plugins-x7vrg\" (UID: \"22ff82cf-0d7d-4955-9b7c-97757acbc021\") " pod="openshift-multus/multus-additional-cni-plugins-x7vrg"
Mar 20 08:34:11.920227 master-0 kubenswrapper[4041]: I0320 08:34:11.919962 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/22ff82cf-0d7d-4955-9b7c-97757acbc021-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-x7vrg\" (UID: \"22ff82cf-0d7d-4955-9b7c-97757acbc021\") " pod="openshift-multus/multus-additional-cni-plugins-x7vrg"
Mar 20 08:34:11.920227 master-0 kubenswrapper[4041]: I0320 08:34:11.920018 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/22ff82cf-0d7d-4955-9b7c-97757acbc021-cni-binary-copy\") pod \"multus-additional-cni-plugins-x7vrg\" (UID: \"22ff82cf-0d7d-4955-9b7c-97757acbc021\") " pod="openshift-multus/multus-additional-cni-plugins-x7vrg"
Mar 20 08:34:11.920227 master-0 kubenswrapper[4041]: I0320 08:34:11.920069 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/22ff82cf-0d7d-4955-9b7c-97757acbc021-os-release\") pod \"multus-additional-cni-plugins-x7vrg\" (UID: \"22ff82cf-0d7d-4955-9b7c-97757acbc021\") " pod="openshift-multus/multus-additional-cni-plugins-x7vrg"
Mar 20 08:34:11.920227 master-0 kubenswrapper[4041]: I0320 08:34:11.920125 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/22ff82cf-0d7d-4955-9b7c-97757acbc021-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-x7vrg\" (UID: \"22ff82cf-0d7d-4955-9b7c-97757acbc021\") " pod="openshift-multus/multus-additional-cni-plugins-x7vrg"
Mar 20 08:34:11.920227 master-0 kubenswrapper[4041]: I0320 08:34:11.920172 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/22ff82cf-0d7d-4955-9b7c-97757acbc021-system-cni-dir\") pod \"multus-additional-cni-plugins-x7vrg\" (UID: \"22ff82cf-0d7d-4955-9b7c-97757acbc021\") " 
pod="openshift-multus/multus-additional-cni-plugins-x7vrg" Mar 20 08:34:11.920227 master-0 kubenswrapper[4041]: I0320 08:34:11.920220 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/22ff82cf-0d7d-4955-9b7c-97757acbc021-cnibin\") pod \"multus-additional-cni-plugins-x7vrg\" (UID: \"22ff82cf-0d7d-4955-9b7c-97757acbc021\") " pod="openshift-multus/multus-additional-cni-plugins-x7vrg" Mar 20 08:34:11.920796 master-0 kubenswrapper[4041]: I0320 08:34:11.920380 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/22ff82cf-0d7d-4955-9b7c-97757acbc021-cnibin\") pod \"multus-additional-cni-plugins-x7vrg\" (UID: \"22ff82cf-0d7d-4955-9b7c-97757acbc021\") " pod="openshift-multus/multus-additional-cni-plugins-x7vrg" Mar 20 08:34:11.920796 master-0 kubenswrapper[4041]: I0320 08:34:11.920493 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/22ff82cf-0d7d-4955-9b7c-97757acbc021-os-release\") pod \"multus-additional-cni-plugins-x7vrg\" (UID: \"22ff82cf-0d7d-4955-9b7c-97757acbc021\") " pod="openshift-multus/multus-additional-cni-plugins-x7vrg" Mar 20 08:34:11.920796 master-0 kubenswrapper[4041]: I0320 08:34:11.920517 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/22ff82cf-0d7d-4955-9b7c-97757acbc021-system-cni-dir\") pod \"multus-additional-cni-plugins-x7vrg\" (UID: \"22ff82cf-0d7d-4955-9b7c-97757acbc021\") " pod="openshift-multus/multus-additional-cni-plugins-x7vrg" Mar 20 08:34:11.920796 master-0 kubenswrapper[4041]: I0320 08:34:11.920645 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/22ff82cf-0d7d-4955-9b7c-97757acbc021-tuning-conf-dir\") pod \"multus-additional-cni-plugins-x7vrg\" (UID: 
\"22ff82cf-0d7d-4955-9b7c-97757acbc021\") " pod="openshift-multus/multus-additional-cni-plugins-x7vrg" Mar 20 08:34:11.922051 master-0 kubenswrapper[4041]: I0320 08:34:11.921971 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/22ff82cf-0d7d-4955-9b7c-97757acbc021-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-x7vrg\" (UID: \"22ff82cf-0d7d-4955-9b7c-97757acbc021\") " pod="openshift-multus/multus-additional-cni-plugins-x7vrg" Mar 20 08:34:11.922194 master-0 kubenswrapper[4041]: I0320 08:34:11.922043 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/22ff82cf-0d7d-4955-9b7c-97757acbc021-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-x7vrg\" (UID: \"22ff82cf-0d7d-4955-9b7c-97757acbc021\") " pod="openshift-multus/multus-additional-cni-plugins-x7vrg" Mar 20 08:34:11.922194 master-0 kubenswrapper[4041]: I0320 08:34:11.922085 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/22ff82cf-0d7d-4955-9b7c-97757acbc021-cni-binary-copy\") pod \"multus-additional-cni-plugins-x7vrg\" (UID: \"22ff82cf-0d7d-4955-9b7c-97757acbc021\") " pod="openshift-multus/multus-additional-cni-plugins-x7vrg" Mar 20 08:34:11.935371 master-0 kubenswrapper[4041]: I0320 08:34:11.935319 4041 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-pxqwj" Mar 20 08:34:11.946144 master-0 kubenswrapper[4041]: I0320 08:34:11.946072 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sglvd\" (UniqueName: \"kubernetes.io/projected/22ff82cf-0d7d-4955-9b7c-97757acbc021-kube-api-access-sglvd\") pod \"multus-additional-cni-plugins-x7vrg\" (UID: \"22ff82cf-0d7d-4955-9b7c-97757acbc021\") " pod="openshift-multus/multus-additional-cni-plugins-x7vrg" Mar 20 08:34:11.954364 master-0 kubenswrapper[4041]: W0320 08:34:11.954294 4041 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7949621e_4da6_4e43_a1f3_2ef303bf6aa6.slice/crio-f1cd5ceb84540f7c9e7a009d076e0390ec979230bb207211f3a50905c2ec9f83 WatchSource:0}: Error finding container f1cd5ceb84540f7c9e7a009d076e0390ec979230bb207211f3a50905c2ec9f83: Status 404 returned error can't find the container with id f1cd5ceb84540f7c9e7a009d076e0390ec979230bb207211f3a50905c2ec9f83 Mar 20 08:34:12.132981 master-0 kubenswrapper[4041]: I0320 08:34:12.132840 4041 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-x7vrg" Mar 20 08:34:12.149390 master-0 kubenswrapper[4041]: W0320 08:34:12.149315 4041 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod22ff82cf_0d7d_4955_9b7c_97757acbc021.slice/crio-c47fa190606cd38023fc533f65cb7825afa7c8fefd6bf8e60afbd6d31f3e48e7 WatchSource:0}: Error finding container c47fa190606cd38023fc533f65cb7825afa7c8fefd6bf8e60afbd6d31f3e48e7: Status 404 returned error can't find the container with id c47fa190606cd38023fc533f65cb7825afa7c8fefd6bf8e60afbd6d31f3e48e7 Mar 20 08:34:12.589433 master-0 kubenswrapper[4041]: I0320 08:34:12.589106 4041 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-nfrth"] Mar 20 08:34:12.589721 master-0 kubenswrapper[4041]: I0320 08:34:12.589596 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nfrth" Mar 20 08:34:12.589829 master-0 kubenswrapper[4041]: E0320 08:34:12.589700 4041 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nfrth" podUID="00350ac7-b40a-4459-b94c-a37d7b613645" Mar 20 08:34:12.626300 master-0 kubenswrapper[4041]: I0320 08:34:12.626176 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/00350ac7-b40a-4459-b94c-a37d7b613645-metrics-certs\") pod \"network-metrics-daemon-nfrth\" (UID: \"00350ac7-b40a-4459-b94c-a37d7b613645\") " pod="openshift-multus/network-metrics-daemon-nfrth" Mar 20 08:34:12.626300 master-0 kubenswrapper[4041]: I0320 08:34:12.626293 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b67hn\" (UniqueName: \"kubernetes.io/projected/00350ac7-b40a-4459-b94c-a37d7b613645-kube-api-access-b67hn\") pod \"network-metrics-daemon-nfrth\" (UID: \"00350ac7-b40a-4459-b94c-a37d7b613645\") " pod="openshift-multus/network-metrics-daemon-nfrth" Mar 20 08:34:12.727158 master-0 kubenswrapper[4041]: I0320 08:34:12.727086 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/00350ac7-b40a-4459-b94c-a37d7b613645-metrics-certs\") pod \"network-metrics-daemon-nfrth\" (UID: \"00350ac7-b40a-4459-b94c-a37d7b613645\") " pod="openshift-multus/network-metrics-daemon-nfrth" Mar 20 08:34:12.727524 master-0 kubenswrapper[4041]: E0320 08:34:12.727336 4041 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 20 08:34:12.727524 master-0 kubenswrapper[4041]: E0320 08:34:12.727448 4041 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/00350ac7-b40a-4459-b94c-a37d7b613645-metrics-certs podName:00350ac7-b40a-4459-b94c-a37d7b613645 nodeName:}" failed. No retries permitted until 2026-03-20 08:34:13.22741504 +0000 UTC m=+60.477760575 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/00350ac7-b40a-4459-b94c-a37d7b613645-metrics-certs") pod "network-metrics-daemon-nfrth" (UID: "00350ac7-b40a-4459-b94c-a37d7b613645") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 20 08:34:12.727524 master-0 kubenswrapper[4041]: I0320 08:34:12.727342 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b67hn\" (UniqueName: \"kubernetes.io/projected/00350ac7-b40a-4459-b94c-a37d7b613645-kube-api-access-b67hn\") pod \"network-metrics-daemon-nfrth\" (UID: \"00350ac7-b40a-4459-b94c-a37d7b613645\") " pod="openshift-multus/network-metrics-daemon-nfrth" Mar 20 08:34:12.760303 master-0 kubenswrapper[4041]: I0320 08:34:12.760214 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b67hn\" (UniqueName: \"kubernetes.io/projected/00350ac7-b40a-4459-b94c-a37d7b613645-kube-api-access-b67hn\") pod \"network-metrics-daemon-nfrth\" (UID: \"00350ac7-b40a-4459-b94c-a37d7b613645\") " pod="openshift-multus/network-metrics-daemon-nfrth" Mar 20 08:34:12.770853 master-0 kubenswrapper[4041]: I0320 08:34:12.770771 4041 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-x7vrg" event={"ID":"22ff82cf-0d7d-4955-9b7c-97757acbc021","Type":"ContainerStarted","Data":"c47fa190606cd38023fc533f65cb7825afa7c8fefd6bf8e60afbd6d31f3e48e7"} Mar 20 08:34:12.773915 master-0 kubenswrapper[4041]: I0320 08:34:12.773848 4041 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-pxqwj" event={"ID":"7949621e-4da6-4e43-a1f3-2ef303bf6aa6","Type":"ContainerStarted","Data":"f1cd5ceb84540f7c9e7a009d076e0390ec979230bb207211f3a50905c2ec9f83"} Mar 20 08:34:13.231505 master-0 kubenswrapper[4041]: I0320 08:34:13.231415 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/00350ac7-b40a-4459-b94c-a37d7b613645-metrics-certs\") pod \"network-metrics-daemon-nfrth\" (UID: \"00350ac7-b40a-4459-b94c-a37d7b613645\") " pod="openshift-multus/network-metrics-daemon-nfrth" Mar 20 08:34:13.231801 master-0 kubenswrapper[4041]: E0320 08:34:13.231635 4041 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 20 08:34:13.231801 master-0 kubenswrapper[4041]: E0320 08:34:13.231757 4041 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/00350ac7-b40a-4459-b94c-a37d7b613645-metrics-certs podName:00350ac7-b40a-4459-b94c-a37d7b613645 nodeName:}" failed. No retries permitted until 2026-03-20 08:34:14.23172646 +0000 UTC m=+61.482072005 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/00350ac7-b40a-4459-b94c-a37d7b613645-metrics-certs") pod "network-metrics-daemon-nfrth" (UID: "00350ac7-b40a-4459-b94c-a37d7b613645") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 20 08:34:13.957131 master-0 kubenswrapper[4041]: I0320 08:34:13.957004 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nfrth" Mar 20 08:34:13.957809 master-0 kubenswrapper[4041]: E0320 08:34:13.957743 4041 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nfrth" podUID="00350ac7-b40a-4459-b94c-a37d7b613645" Mar 20 08:34:14.255595 master-0 kubenswrapper[4041]: I0320 08:34:14.255466 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/00350ac7-b40a-4459-b94c-a37d7b613645-metrics-certs\") pod \"network-metrics-daemon-nfrth\" (UID: \"00350ac7-b40a-4459-b94c-a37d7b613645\") " pod="openshift-multus/network-metrics-daemon-nfrth" Mar 20 08:34:14.255867 master-0 kubenswrapper[4041]: E0320 08:34:14.255603 4041 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 20 08:34:14.255867 master-0 kubenswrapper[4041]: E0320 08:34:14.255658 4041 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/00350ac7-b40a-4459-b94c-a37d7b613645-metrics-certs podName:00350ac7-b40a-4459-b94c-a37d7b613645 nodeName:}" failed. No retries permitted until 2026-03-20 08:34:16.255643798 +0000 UTC m=+63.505989303 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/00350ac7-b40a-4459-b94c-a37d7b613645-metrics-certs") pod "network-metrics-daemon-nfrth" (UID: "00350ac7-b40a-4459-b94c-a37d7b613645") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 20 08:34:15.535694 master-0 kubenswrapper[4041]: I0320 08:34:15.535653 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nfrth" Mar 20 08:34:15.536320 master-0 kubenswrapper[4041]: E0320 08:34:15.535777 4041 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nfrth" podUID="00350ac7-b40a-4459-b94c-a37d7b613645" Mar 20 08:34:16.273603 master-0 kubenswrapper[4041]: I0320 08:34:16.273566 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/00350ac7-b40a-4459-b94c-a37d7b613645-metrics-certs\") pod \"network-metrics-daemon-nfrth\" (UID: \"00350ac7-b40a-4459-b94c-a37d7b613645\") " pod="openshift-multus/network-metrics-daemon-nfrth" Mar 20 08:34:16.273760 master-0 kubenswrapper[4041]: E0320 08:34:16.273704 4041 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 20 08:34:16.273809 master-0 kubenswrapper[4041]: E0320 08:34:16.273767 4041 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/00350ac7-b40a-4459-b94c-a37d7b613645-metrics-certs podName:00350ac7-b40a-4459-b94c-a37d7b613645 nodeName:}" failed. No retries permitted until 2026-03-20 08:34:20.273749609 +0000 UTC m=+67.524095114 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/00350ac7-b40a-4459-b94c-a37d7b613645-metrics-certs") pod "network-metrics-daemon-nfrth" (UID: "00350ac7-b40a-4459-b94c-a37d7b613645") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 20 08:34:16.966289 master-0 kubenswrapper[4041]: I0320 08:34:16.966206 4041 generic.go:334] "Generic (PLEG): container finished" podID="22ff82cf-0d7d-4955-9b7c-97757acbc021" containerID="40ff7a57f1be617cf7f13a7b182aa09a2d94c4736efa61da1185a107268ed08d" exitCode=0 Mar 20 08:34:16.966289 master-0 kubenswrapper[4041]: I0320 08:34:16.966290 4041 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-x7vrg" event={"ID":"22ff82cf-0d7d-4955-9b7c-97757acbc021","Type":"ContainerDied","Data":"40ff7a57f1be617cf7f13a7b182aa09a2d94c4736efa61da1185a107268ed08d"} Mar 20 08:34:17.535557 master-0 kubenswrapper[4041]: I0320 08:34:17.535486 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nfrth" Mar 20 08:34:17.535927 master-0 kubenswrapper[4041]: E0320 08:34:17.535693 4041 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nfrth" podUID="00350ac7-b40a-4459-b94c-a37d7b613645" Mar 20 08:34:19.535788 master-0 kubenswrapper[4041]: I0320 08:34:19.535673 4041 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-nfrth" Mar 20 08:34:19.537750 master-0 kubenswrapper[4041]: E0320 08:34:19.535932 4041 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nfrth" podUID="00350ac7-b40a-4459-b94c-a37d7b613645" Mar 20 08:34:20.308554 master-0 kubenswrapper[4041]: I0320 08:34:20.308483 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/00350ac7-b40a-4459-b94c-a37d7b613645-metrics-certs\") pod \"network-metrics-daemon-nfrth\" (UID: \"00350ac7-b40a-4459-b94c-a37d7b613645\") " pod="openshift-multus/network-metrics-daemon-nfrth" Mar 20 08:34:20.309092 master-0 kubenswrapper[4041]: E0320 08:34:20.308682 4041 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 20 08:34:20.309092 master-0 kubenswrapper[4041]: E0320 08:34:20.308779 4041 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/00350ac7-b40a-4459-b94c-a37d7b613645-metrics-certs podName:00350ac7-b40a-4459-b94c-a37d7b613645 nodeName:}" failed. No retries permitted until 2026-03-20 08:34:28.30875555 +0000 UTC m=+75.559101065 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/00350ac7-b40a-4459-b94c-a37d7b613645-metrics-certs") pod "network-metrics-daemon-nfrth" (UID: "00350ac7-b40a-4459-b94c-a37d7b613645") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 20 08:34:21.534957 master-0 kubenswrapper[4041]: I0320 08:34:21.534858 4041 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-nfrth" Mar 20 08:34:21.535685 master-0 kubenswrapper[4041]: E0320 08:34:21.535015 4041 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nfrth" podUID="00350ac7-b40a-4459-b94c-a37d7b613645" Mar 20 08:34:23.535494 master-0 kubenswrapper[4041]: I0320 08:34:23.535392 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nfrth" Mar 20 08:34:23.536938 master-0 kubenswrapper[4041]: E0320 08:34:23.536811 4041 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nfrth" podUID="00350ac7-b40a-4459-b94c-a37d7b613645" Mar 20 08:34:24.001112 master-0 kubenswrapper[4041]: I0320 08:34:24.000861 4041 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-crrdk"] Mar 20 08:34:24.001518 master-0 kubenswrapper[4041]: I0320 08:34:24.001473 4041 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-crrdk" Mar 20 08:34:24.004583 master-0 kubenswrapper[4041]: I0320 08:34:24.004546 4041 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 20 08:34:24.004757 master-0 kubenswrapper[4041]: I0320 08:34:24.004728 4041 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 20 08:34:24.004864 master-0 kubenswrapper[4041]: I0320 08:34:24.004845 4041 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 20 08:34:24.006718 master-0 kubenswrapper[4041]: I0320 08:34:24.005291 4041 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 20 08:34:24.006718 master-0 kubenswrapper[4041]: I0320 08:34:24.005491 4041 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 20 08:34:24.134788 master-0 kubenswrapper[4041]: I0320 08:34:24.134726 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/210dd7f0-d1c0-407a-b89b-f11ef605e5df-ovnkube-config\") pod \"ovnkube-control-plane-57f769d897-crrdk\" (UID: \"210dd7f0-d1c0-407a-b89b-f11ef605e5df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-crrdk" Mar 20 08:34:24.134788 master-0 kubenswrapper[4041]: I0320 08:34:24.134780 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/210dd7f0-d1c0-407a-b89b-f11ef605e5df-env-overrides\") pod \"ovnkube-control-plane-57f769d897-crrdk\" (UID: \"210dd7f0-d1c0-407a-b89b-f11ef605e5df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-crrdk" Mar 20 08:34:24.134788 
master-0 kubenswrapper[4041]: I0320 08:34:24.134800 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4sfm\" (UniqueName: \"kubernetes.io/projected/210dd7f0-d1c0-407a-b89b-f11ef605e5df-kube-api-access-w4sfm\") pod \"ovnkube-control-plane-57f769d897-crrdk\" (UID: \"210dd7f0-d1c0-407a-b89b-f11ef605e5df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-crrdk" Mar 20 08:34:24.135146 master-0 kubenswrapper[4041]: I0320 08:34:24.134867 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/210dd7f0-d1c0-407a-b89b-f11ef605e5df-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57f769d897-crrdk\" (UID: \"210dd7f0-d1c0-407a-b89b-f11ef605e5df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-crrdk" Mar 20 08:34:24.201026 master-0 kubenswrapper[4041]: I0320 08:34:24.200728 4041 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-87r4t"] Mar 20 08:34:24.212030 master-0 kubenswrapper[4041]: I0320 08:34:24.211989 4041 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-87r4t" Mar 20 08:34:24.214731 master-0 kubenswrapper[4041]: I0320 08:34:24.214566 4041 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 20 08:34:24.214874 master-0 kubenswrapper[4041]: I0320 08:34:24.214833 4041 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 20 08:34:24.235899 master-0 kubenswrapper[4041]: I0320 08:34:24.235849 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/210dd7f0-d1c0-407a-b89b-f11ef605e5df-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57f769d897-crrdk\" (UID: \"210dd7f0-d1c0-407a-b89b-f11ef605e5df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-crrdk" Mar 20 08:34:24.236067 master-0 kubenswrapper[4041]: I0320 08:34:24.235916 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/210dd7f0-d1c0-407a-b89b-f11ef605e5df-ovnkube-config\") pod \"ovnkube-control-plane-57f769d897-crrdk\" (UID: \"210dd7f0-d1c0-407a-b89b-f11ef605e5df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-crrdk" Mar 20 08:34:24.236067 master-0 kubenswrapper[4041]: I0320 08:34:24.235934 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/210dd7f0-d1c0-407a-b89b-f11ef605e5df-env-overrides\") pod \"ovnkube-control-plane-57f769d897-crrdk\" (UID: \"210dd7f0-d1c0-407a-b89b-f11ef605e5df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-crrdk" Mar 20 08:34:24.236067 master-0 kubenswrapper[4041]: I0320 08:34:24.235950 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4sfm\" (UniqueName: 
\"kubernetes.io/projected/210dd7f0-d1c0-407a-b89b-f11ef605e5df-kube-api-access-w4sfm\") pod \"ovnkube-control-plane-57f769d897-crrdk\" (UID: \"210dd7f0-d1c0-407a-b89b-f11ef605e5df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-crrdk"
Mar 20 08:34:24.236695 master-0 kubenswrapper[4041]: I0320 08:34:24.236658 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/210dd7f0-d1c0-407a-b89b-f11ef605e5df-env-overrides\") pod \"ovnkube-control-plane-57f769d897-crrdk\" (UID: \"210dd7f0-d1c0-407a-b89b-f11ef605e5df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-crrdk"
Mar 20 08:34:24.238331 master-0 kubenswrapper[4041]: I0320 08:34:24.237536 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/210dd7f0-d1c0-407a-b89b-f11ef605e5df-ovnkube-config\") pod \"ovnkube-control-plane-57f769d897-crrdk\" (UID: \"210dd7f0-d1c0-407a-b89b-f11ef605e5df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-crrdk"
Mar 20 08:34:24.244850 master-0 kubenswrapper[4041]: I0320 08:34:24.244780 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/210dd7f0-d1c0-407a-b89b-f11ef605e5df-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57f769d897-crrdk\" (UID: \"210dd7f0-d1c0-407a-b89b-f11ef605e5df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-crrdk"
Mar 20 08:34:24.253512 master-0 kubenswrapper[4041]: I0320 08:34:24.253413 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4sfm\" (UniqueName: \"kubernetes.io/projected/210dd7f0-d1c0-407a-b89b-f11ef605e5df-kube-api-access-w4sfm\") pod \"ovnkube-control-plane-57f769d897-crrdk\" (UID: \"210dd7f0-d1c0-407a-b89b-f11ef605e5df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-crrdk"
Mar 20 08:34:24.318951 master-0 kubenswrapper[4041]: I0320 08:34:24.318862 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-crrdk"
Mar 20 08:34:24.337917 master-0 kubenswrapper[4041]: I0320 08:34:24.337819 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-host-run-netns\") pod \"ovnkube-node-87r4t\" (UID: \"669651d1-cadf-4a5c-bed8-6ff2107774f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-87r4t"
Mar 20 08:34:24.337917 master-0 kubenswrapper[4041]: I0320 08:34:24.337880 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/669651d1-cadf-4a5c-bed8-6ff2107774f8-ovnkube-script-lib\") pod \"ovnkube-node-87r4t\" (UID: \"669651d1-cadf-4a5c-bed8-6ff2107774f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-87r4t"
Mar 20 08:34:24.338025 master-0 kubenswrapper[4041]: I0320 08:34:24.337949 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-systemd-units\") pod \"ovnkube-node-87r4t\" (UID: \"669651d1-cadf-4a5c-bed8-6ff2107774f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-87r4t"
Mar 20 08:34:24.338065 master-0 kubenswrapper[4041]: I0320 08:34:24.337994 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-node-log\") pod \"ovnkube-node-87r4t\" (UID: \"669651d1-cadf-4a5c-bed8-6ff2107774f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-87r4t"
Mar 20 08:34:24.338065 master-0 kubenswrapper[4041]: I0320 08:34:24.338052 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-host-run-ovn-kubernetes\") pod \"ovnkube-node-87r4t\" (UID: \"669651d1-cadf-4a5c-bed8-6ff2107774f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-87r4t"
Mar 20 08:34:24.338120 master-0 kubenswrapper[4041]: I0320 08:34:24.338102 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3776fdb6-25a1-4e3d-bdd1-437c69af3a55-serving-cert\") pod \"cluster-version-operator-56d8475767-jtqd4\" (UID: \"3776fdb6-25a1-4e3d-bdd1-437c69af3a55\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-jtqd4"
Mar 20 08:34:24.338153 master-0 kubenswrapper[4041]: I0320 08:34:24.338133 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-87r4t\" (UID: \"669651d1-cadf-4a5c-bed8-6ff2107774f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-87r4t"
Mar 20 08:34:24.338195 master-0 kubenswrapper[4041]: I0320 08:34:24.338161 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-run-openvswitch\") pod \"ovnkube-node-87r4t\" (UID: \"669651d1-cadf-4a5c-bed8-6ff2107774f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-87r4t"
Mar 20 08:34:24.338228 master-0 kubenswrapper[4041]: I0320 08:34:24.338210 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-etc-openvswitch\") pod \"ovnkube-node-87r4t\" (UID: \"669651d1-cadf-4a5c-bed8-6ff2107774f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-87r4t"
Mar 20 08:34:24.338350 master-0 kubenswrapper[4041]: E0320 08:34:24.338312 4041 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 20 08:34:24.338409 master-0 kubenswrapper[4041]: I0320 08:34:24.338374 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/669651d1-cadf-4a5c-bed8-6ff2107774f8-env-overrides\") pod \"ovnkube-node-87r4t\" (UID: \"669651d1-cadf-4a5c-bed8-6ff2107774f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-87r4t"
Mar 20 08:34:24.338409 master-0 kubenswrapper[4041]: E0320 08:34:24.338391 4041 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3776fdb6-25a1-4e3d-bdd1-437c69af3a55-serving-cert podName:3776fdb6-25a1-4e3d-bdd1-437c69af3a55 nodeName:}" failed. No retries permitted until 2026-03-20 08:34:56.338369391 +0000 UTC m=+103.588714986 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/3776fdb6-25a1-4e3d-bdd1-437c69af3a55-serving-cert") pod "cluster-version-operator-56d8475767-jtqd4" (UID: "3776fdb6-25a1-4e3d-bdd1-437c69af3a55") : secret "cluster-version-operator-serving-cert" not found
Mar 20 08:34:24.338482 master-0 kubenswrapper[4041]: I0320 08:34:24.338413 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-host-slash\") pod \"ovnkube-node-87r4t\" (UID: \"669651d1-cadf-4a5c-bed8-6ff2107774f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-87r4t"
Mar 20 08:34:24.338482 master-0 kubenswrapper[4041]: I0320 08:34:24.338472 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-host-kubelet\") pod \"ovnkube-node-87r4t\" (UID: \"669651d1-cadf-4a5c-bed8-6ff2107774f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-87r4t"
Mar 20 08:34:24.338550 master-0 kubenswrapper[4041]: I0320 08:34:24.338498 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-host-cni-bin\") pod \"ovnkube-node-87r4t\" (UID: \"669651d1-cadf-4a5c-bed8-6ff2107774f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-87r4t"
Mar 20 08:34:24.338550 master-0 kubenswrapper[4041]: I0320 08:34:24.338528 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-run-systemd\") pod \"ovnkube-node-87r4t\" (UID: \"669651d1-cadf-4a5c-bed8-6ff2107774f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-87r4t"
Mar 20 08:34:24.338609 master-0 kubenswrapper[4041]: I0320 08:34:24.338552 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/669651d1-cadf-4a5c-bed8-6ff2107774f8-ovn-node-metrics-cert\") pod \"ovnkube-node-87r4t\" (UID: \"669651d1-cadf-4a5c-bed8-6ff2107774f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-87r4t"
Mar 20 08:34:24.338609 master-0 kubenswrapper[4041]: I0320 08:34:24.338591 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-run-ovn\") pod \"ovnkube-node-87r4t\" (UID: \"669651d1-cadf-4a5c-bed8-6ff2107774f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-87r4t"
Mar 20 08:34:24.338671 master-0 kubenswrapper[4041]: I0320 08:34:24.338610 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-host-cni-netd\") pod \"ovnkube-node-87r4t\" (UID: \"669651d1-cadf-4a5c-bed8-6ff2107774f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-87r4t"
Mar 20 08:34:24.338671 master-0 kubenswrapper[4041]: I0320 08:34:24.338631 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-var-lib-openvswitch\") pod \"ovnkube-node-87r4t\" (UID: \"669651d1-cadf-4a5c-bed8-6ff2107774f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-87r4t"
Mar 20 08:34:24.338671 master-0 kubenswrapper[4041]: I0320 08:34:24.338651 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-log-socket\") pod \"ovnkube-node-87r4t\" (UID: \"669651d1-cadf-4a5c-bed8-6ff2107774f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-87r4t"
Mar 20 08:34:24.338751 master-0 kubenswrapper[4041]: I0320 08:34:24.338675 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/669651d1-cadf-4a5c-bed8-6ff2107774f8-ovnkube-config\") pod \"ovnkube-node-87r4t\" (UID: \"669651d1-cadf-4a5c-bed8-6ff2107774f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-87r4t"
Mar 20 08:34:24.338751 master-0 kubenswrapper[4041]: I0320 08:34:24.338699 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ss8m\" (UniqueName: \"kubernetes.io/projected/669651d1-cadf-4a5c-bed8-6ff2107774f8-kube-api-access-6ss8m\") pod \"ovnkube-node-87r4t\" (UID: \"669651d1-cadf-4a5c-bed8-6ff2107774f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-87r4t"
Mar 20 08:34:24.439794 master-0 kubenswrapper[4041]: I0320 08:34:24.439252 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-node-log\") pod \"ovnkube-node-87r4t\" (UID: \"669651d1-cadf-4a5c-bed8-6ff2107774f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-87r4t"
Mar 20 08:34:24.439794 master-0 kubenswrapper[4041]: I0320 08:34:24.439355 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-node-log\") pod \"ovnkube-node-87r4t\" (UID: \"669651d1-cadf-4a5c-bed8-6ff2107774f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-87r4t"
Mar 20 08:34:24.439794 master-0 kubenswrapper[4041]: I0320 08:34:24.439403 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-host-run-ovn-kubernetes\") pod \"ovnkube-node-87r4t\" (UID: \"669651d1-cadf-4a5c-bed8-6ff2107774f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-87r4t"
Mar 20 08:34:24.439794 master-0 kubenswrapper[4041]: I0320 08:34:24.439429 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-systemd-units\") pod \"ovnkube-node-87r4t\" (UID: \"669651d1-cadf-4a5c-bed8-6ff2107774f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-87r4t"
Mar 20 08:34:24.439794 master-0 kubenswrapper[4041]: I0320 08:34:24.439467 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-run-openvswitch\") pod \"ovnkube-node-87r4t\" (UID: \"669651d1-cadf-4a5c-bed8-6ff2107774f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-87r4t"
Mar 20 08:34:24.439794 master-0 kubenswrapper[4041]: I0320 08:34:24.439511 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-systemd-units\") pod \"ovnkube-node-87r4t\" (UID: \"669651d1-cadf-4a5c-bed8-6ff2107774f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-87r4t"
Mar 20 08:34:24.439794 master-0 kubenswrapper[4041]: I0320 08:34:24.439535 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-87r4t\" (UID: \"669651d1-cadf-4a5c-bed8-6ff2107774f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-87r4t"
Mar 20 08:34:24.439794 master-0 kubenswrapper[4041]: I0320 08:34:24.439562 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-etc-openvswitch\") pod \"ovnkube-node-87r4t\" (UID: \"669651d1-cadf-4a5c-bed8-6ff2107774f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-87r4t"
Mar 20 08:34:24.439794 master-0 kubenswrapper[4041]: I0320 08:34:24.439603 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-etc-openvswitch\") pod \"ovnkube-node-87r4t\" (UID: \"669651d1-cadf-4a5c-bed8-6ff2107774f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-87r4t"
Mar 20 08:34:24.439794 master-0 kubenswrapper[4041]: I0320 08:34:24.439603 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-87r4t\" (UID: \"669651d1-cadf-4a5c-bed8-6ff2107774f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-87r4t"
Mar 20 08:34:24.439794 master-0 kubenswrapper[4041]: I0320 08:34:24.439620 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-run-openvswitch\") pod \"ovnkube-node-87r4t\" (UID: \"669651d1-cadf-4a5c-bed8-6ff2107774f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-87r4t"
Mar 20 08:34:24.439794 master-0 kubenswrapper[4041]: I0320 08:34:24.439636 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-host-slash\") pod \"ovnkube-node-87r4t\" (UID: \"669651d1-cadf-4a5c-bed8-6ff2107774f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-87r4t"
Mar 20 08:34:24.439794 master-0 kubenswrapper[4041]: I0320 08:34:24.439649 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-host-run-ovn-kubernetes\") pod \"ovnkube-node-87r4t\" (UID: \"669651d1-cadf-4a5c-bed8-6ff2107774f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-87r4t"
Mar 20 08:34:24.439794 master-0 kubenswrapper[4041]: I0320 08:34:24.439659 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/669651d1-cadf-4a5c-bed8-6ff2107774f8-env-overrides\") pod \"ovnkube-node-87r4t\" (UID: \"669651d1-cadf-4a5c-bed8-6ff2107774f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-87r4t"
Mar 20 08:34:24.439794 master-0 kubenswrapper[4041]: I0320 08:34:24.439680 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-host-slash\") pod \"ovnkube-node-87r4t\" (UID: \"669651d1-cadf-4a5c-bed8-6ff2107774f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-87r4t"
Mar 20 08:34:24.439794 master-0 kubenswrapper[4041]: I0320 08:34:24.439681 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-host-kubelet\") pod \"ovnkube-node-87r4t\" (UID: \"669651d1-cadf-4a5c-bed8-6ff2107774f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-87r4t"
Mar 20 08:34:24.439794 master-0 kubenswrapper[4041]: I0320 08:34:24.439704 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-host-kubelet\") pod \"ovnkube-node-87r4t\" (UID: \"669651d1-cadf-4a5c-bed8-6ff2107774f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-87r4t"
Mar 20 08:34:24.440644 master-0 kubenswrapper[4041]: I0320 08:34:24.439706 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-host-cni-bin\") pod \"ovnkube-node-87r4t\" (UID: \"669651d1-cadf-4a5c-bed8-6ff2107774f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-87r4t"
Mar 20 08:34:24.440644 master-0 kubenswrapper[4041]: I0320 08:34:24.439734 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-host-cni-bin\") pod \"ovnkube-node-87r4t\" (UID: \"669651d1-cadf-4a5c-bed8-6ff2107774f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-87r4t"
Mar 20 08:34:24.440644 master-0 kubenswrapper[4041]: I0320 08:34:24.439735 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-run-systemd\") pod \"ovnkube-node-87r4t\" (UID: \"669651d1-cadf-4a5c-bed8-6ff2107774f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-87r4t"
Mar 20 08:34:24.440644 master-0 kubenswrapper[4041]: I0320 08:34:24.439755 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-run-systemd\") pod \"ovnkube-node-87r4t\" (UID: \"669651d1-cadf-4a5c-bed8-6ff2107774f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-87r4t"
Mar 20 08:34:24.440644 master-0 kubenswrapper[4041]: I0320 08:34:24.439759 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/669651d1-cadf-4a5c-bed8-6ff2107774f8-ovn-node-metrics-cert\") pod \"ovnkube-node-87r4t\" (UID: \"669651d1-cadf-4a5c-bed8-6ff2107774f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-87r4t"
Mar 20 08:34:24.440644 master-0 kubenswrapper[4041]: I0320 08:34:24.440164 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-var-lib-openvswitch\") pod \"ovnkube-node-87r4t\" (UID: \"669651d1-cadf-4a5c-bed8-6ff2107774f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-87r4t"
Mar 20 08:34:24.440644 master-0 kubenswrapper[4041]: I0320 08:34:24.440240 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-run-ovn\") pod \"ovnkube-node-87r4t\" (UID: \"669651d1-cadf-4a5c-bed8-6ff2107774f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-87r4t"
Mar 20 08:34:24.440644 master-0 kubenswrapper[4041]: I0320 08:34:24.440292 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-host-cni-netd\") pod \"ovnkube-node-87r4t\" (UID: \"669651d1-cadf-4a5c-bed8-6ff2107774f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-87r4t"
Mar 20 08:34:24.440644 master-0 kubenswrapper[4041]: I0320 08:34:24.440316 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-log-socket\") pod \"ovnkube-node-87r4t\" (UID: \"669651d1-cadf-4a5c-bed8-6ff2107774f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-87r4t"
Mar 20 08:34:24.440644 master-0 kubenswrapper[4041]: I0320 08:34:24.440338 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ss8m\" (UniqueName: \"kubernetes.io/projected/669651d1-cadf-4a5c-bed8-6ff2107774f8-kube-api-access-6ss8m\") pod \"ovnkube-node-87r4t\" (UID: \"669651d1-cadf-4a5c-bed8-6ff2107774f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-87r4t"
Mar 20 08:34:24.440644 master-0 kubenswrapper[4041]: I0320 08:34:24.440364 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/669651d1-cadf-4a5c-bed8-6ff2107774f8-ovnkube-config\") pod \"ovnkube-node-87r4t\" (UID: \"669651d1-cadf-4a5c-bed8-6ff2107774f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-87r4t"
Mar 20 08:34:24.440644 master-0 kubenswrapper[4041]: I0320 08:34:24.440384 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/669651d1-cadf-4a5c-bed8-6ff2107774f8-ovnkube-script-lib\") pod \"ovnkube-node-87r4t\" (UID: \"669651d1-cadf-4a5c-bed8-6ff2107774f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-87r4t"
Mar 20 08:34:24.440644 master-0 kubenswrapper[4041]: I0320 08:34:24.440410 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-host-run-netns\") pod \"ovnkube-node-87r4t\" (UID: \"669651d1-cadf-4a5c-bed8-6ff2107774f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-87r4t"
Mar 20 08:34:24.440644 master-0 kubenswrapper[4041]: I0320 08:34:24.440451 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-host-cni-netd\") pod \"ovnkube-node-87r4t\" (UID: \"669651d1-cadf-4a5c-bed8-6ff2107774f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-87r4t"
Mar 20 08:34:24.440644 master-0 kubenswrapper[4041]: I0320 08:34:24.440546 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/669651d1-cadf-4a5c-bed8-6ff2107774f8-env-overrides\") pod \"ovnkube-node-87r4t\" (UID: \"669651d1-cadf-4a5c-bed8-6ff2107774f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-87r4t"
Mar 20 08:34:24.440644 master-0 kubenswrapper[4041]: I0320 08:34:24.440643 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-run-ovn\") pod \"ovnkube-node-87r4t\" (UID: \"669651d1-cadf-4a5c-bed8-6ff2107774f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-87r4t"
Mar 20 08:34:24.441444 master-0 kubenswrapper[4041]: I0320 08:34:24.441409 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-var-lib-openvswitch\") pod \"ovnkube-node-87r4t\" (UID: \"669651d1-cadf-4a5c-bed8-6ff2107774f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-87r4t"
Mar 20 08:34:24.441532 master-0 kubenswrapper[4041]: I0320 08:34:24.441441 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/669651d1-cadf-4a5c-bed8-6ff2107774f8-ovnkube-script-lib\") pod \"ovnkube-node-87r4t\" (UID: \"669651d1-cadf-4a5c-bed8-6ff2107774f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-87r4t"
Mar 20 08:34:24.441532 master-0 kubenswrapper[4041]: I0320 08:34:24.441456 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-host-run-netns\") pod \"ovnkube-node-87r4t\" (UID: \"669651d1-cadf-4a5c-bed8-6ff2107774f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-87r4t"
Mar 20 08:34:24.441532 master-0 kubenswrapper[4041]: I0320 08:34:24.441489 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-log-socket\") pod \"ovnkube-node-87r4t\" (UID: \"669651d1-cadf-4a5c-bed8-6ff2107774f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-87r4t"
Mar 20 08:34:24.441649 master-0 kubenswrapper[4041]: I0320 08:34:24.441565 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/669651d1-cadf-4a5c-bed8-6ff2107774f8-ovnkube-config\") pod \"ovnkube-node-87r4t\" (UID: \"669651d1-cadf-4a5c-bed8-6ff2107774f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-87r4t"
Mar 20 08:34:24.443224 master-0 kubenswrapper[4041]: I0320 08:34:24.443181 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/669651d1-cadf-4a5c-bed8-6ff2107774f8-ovn-node-metrics-cert\") pod \"ovnkube-node-87r4t\" (UID: \"669651d1-cadf-4a5c-bed8-6ff2107774f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-87r4t"
Mar 20 08:34:24.457635 master-0 kubenswrapper[4041]: I0320 08:34:24.457593 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ss8m\" (UniqueName: \"kubernetes.io/projected/669651d1-cadf-4a5c-bed8-6ff2107774f8-kube-api-access-6ss8m\") pod \"ovnkube-node-87r4t\" (UID: \"669651d1-cadf-4a5c-bed8-6ff2107774f8\") " pod="openshift-ovn-kubernetes/ovnkube-node-87r4t"
Mar 20 08:34:24.532372 master-0 kubenswrapper[4041]: I0320 08:34:24.532255 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-87r4t"
Mar 20 08:34:25.536009 master-0 kubenswrapper[4041]: I0320 08:34:25.535251 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nfrth"
Mar 20 08:34:25.536009 master-0 kubenswrapper[4041]: E0320 08:34:25.535472 4041 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nfrth" podUID="00350ac7-b40a-4459-b94c-a37d7b613645"
Mar 20 08:34:26.191563 master-0 kubenswrapper[4041]: W0320 08:34:26.191456 4041 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod669651d1_cadf_4a5c_bed8_6ff2107774f8.slice/crio-09c3dbf98f029b599be8467f2ba68d9a6d218a6934b14ce9b4906e51874a4088 WatchSource:0}: Error finding container 09c3dbf98f029b599be8467f2ba68d9a6d218a6934b14ce9b4906e51874a4088: Status 404 returned error can't find the container with id 09c3dbf98f029b599be8467f2ba68d9a6d218a6934b14ce9b4906e51874a4088
Mar 20 08:34:26.198574 master-0 kubenswrapper[4041]: W0320 08:34:26.198528 4041 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod210dd7f0_d1c0_407a_b89b_f11ef605e5df.slice/crio-2eb15c3da7104afd61e8e0a9cecb48e57f16366430abff29d1fcba72d53fd3a2 WatchSource:0}: Error finding container 2eb15c3da7104afd61e8e0a9cecb48e57f16366430abff29d1fcba72d53fd3a2: Status 404 returned error can't find the container with id 2eb15c3da7104afd61e8e0a9cecb48e57f16366430abff29d1fcba72d53fd3a2
Mar 20 08:34:27.000848 master-0 kubenswrapper[4041]: I0320 08:34:27.000520 4041 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87r4t" event={"ID":"669651d1-cadf-4a5c-bed8-6ff2107774f8","Type":"ContainerStarted","Data":"09c3dbf98f029b599be8467f2ba68d9a6d218a6934b14ce9b4906e51874a4088"}
Mar 20 08:34:27.002975 master-0 kubenswrapper[4041]: I0320 08:34:27.002800 4041 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-pxqwj" event={"ID":"7949621e-4da6-4e43-a1f3-2ef303bf6aa6","Type":"ContainerStarted","Data":"87687aad12c871b51f38e96592a82bdee6ee41cb3015da390a35f50e9ae27334"}
Mar 20 08:34:27.005192 master-0 kubenswrapper[4041]: I0320 08:34:27.005002 4041 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-crrdk" event={"ID":"210dd7f0-d1c0-407a-b89b-f11ef605e5df","Type":"ContainerStarted","Data":"a4954a5504413e2099df95d5fe0152972b5d1c0a055f8c70067df9606aba177c"}
Mar 20 08:34:27.005192 master-0 kubenswrapper[4041]: I0320 08:34:27.005058 4041 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-crrdk" event={"ID":"210dd7f0-d1c0-407a-b89b-f11ef605e5df","Type":"ContainerStarted","Data":"2eb15c3da7104afd61e8e0a9cecb48e57f16366430abff29d1fcba72d53fd3a2"}
Mar 20 08:34:27.010987 master-0 kubenswrapper[4041]: I0320 08:34:27.010894 4041 generic.go:334] "Generic (PLEG): container finished" podID="22ff82cf-0d7d-4955-9b7c-97757acbc021" containerID="e286a3213c5346d10ff0d6cbc953c4d1baa37806e4134a08a01aa0b21b03e73b" exitCode=0
Mar 20 08:34:27.010987 master-0 kubenswrapper[4041]: I0320 08:34:27.010938 4041 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-x7vrg" event={"ID":"22ff82cf-0d7d-4955-9b7c-97757acbc021","Type":"ContainerDied","Data":"e286a3213c5346d10ff0d6cbc953c4d1baa37806e4134a08a01aa0b21b03e73b"}
Mar 20 08:34:27.025112 master-0 kubenswrapper[4041]: I0320 08:34:27.024945 4041 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-pxqwj" podStartSLOduration=1.684655633 podStartE2EDuration="16.024832567s" podCreationTimestamp="2026-03-20 08:34:11 +0000 UTC" firstStartedPulling="2026-03-20 08:34:11.958036846 +0000 UTC m=+59.208382391" lastFinishedPulling="2026-03-20 08:34:26.29821378 +0000 UTC m=+73.548559325" observedRunningTime="2026-03-20 08:34:27.021788689 +0000 UTC m=+74.272134224" watchObservedRunningTime="2026-03-20 08:34:27.024832567 +0000 UTC m=+74.275178112"
Mar 20 08:34:27.186179 master-0 kubenswrapper[4041]: I0320 08:34:27.186121 4041 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-j9jjm"]
Mar 20 08:34:27.186502 master-0 kubenswrapper[4041]: I0320 08:34:27.186479 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-j9jjm"
Mar 20 08:34:27.186772 master-0 kubenswrapper[4041]: E0320 08:34:27.186533 4041 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-j9jjm" podUID="ca6e644f-c53b-41dd-a16f-9fb9997533dd"
Mar 20 08:34:27.271703 master-0 kubenswrapper[4041]: I0320 08:34:27.271553 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nf5kc\" (UniqueName: \"kubernetes.io/projected/ca6e644f-c53b-41dd-a16f-9fb9997533dd-kube-api-access-nf5kc\") pod \"network-check-target-j9jjm\" (UID: \"ca6e644f-c53b-41dd-a16f-9fb9997533dd\") " pod="openshift-network-diagnostics/network-check-target-j9jjm"
Mar 20 08:34:27.372054 master-0 kubenswrapper[4041]: I0320 08:34:27.371932 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nf5kc\" (UniqueName: \"kubernetes.io/projected/ca6e644f-c53b-41dd-a16f-9fb9997533dd-kube-api-access-nf5kc\") pod \"network-check-target-j9jjm\" (UID: \"ca6e644f-c53b-41dd-a16f-9fb9997533dd\") " pod="openshift-network-diagnostics/network-check-target-j9jjm"
Mar 20 08:34:27.383751 master-0 kubenswrapper[4041]: E0320 08:34:27.383700 4041 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 20 08:34:27.383751 master-0 kubenswrapper[4041]: E0320 08:34:27.383752 4041 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 20 08:34:27.383902 master-0 kubenswrapper[4041]: E0320 08:34:27.383771 4041 projected.go:194] Error preparing data for projected volume kube-api-access-nf5kc for pod openshift-network-diagnostics/network-check-target-j9jjm: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 20 08:34:27.383902 master-0 kubenswrapper[4041]: E0320 08:34:27.383841 4041 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ca6e644f-c53b-41dd-a16f-9fb9997533dd-kube-api-access-nf5kc podName:ca6e644f-c53b-41dd-a16f-9fb9997533dd nodeName:}" failed. No retries permitted until 2026-03-20 08:34:27.883821639 +0000 UTC m=+75.134167144 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-nf5kc" (UniqueName: "kubernetes.io/projected/ca6e644f-c53b-41dd-a16f-9fb9997533dd-kube-api-access-nf5kc") pod "network-check-target-j9jjm" (UID: "ca6e644f-c53b-41dd-a16f-9fb9997533dd") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 20 08:34:27.535738 master-0 kubenswrapper[4041]: I0320 08:34:27.535628 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nfrth"
Mar 20 08:34:27.535895 master-0 kubenswrapper[4041]: E0320 08:34:27.535779 4041 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nfrth" podUID="00350ac7-b40a-4459-b94c-a37d7b613645"
Mar 20 08:34:27.976940 master-0 kubenswrapper[4041]: I0320 08:34:27.976868 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nf5kc\" (UniqueName: \"kubernetes.io/projected/ca6e644f-c53b-41dd-a16f-9fb9997533dd-kube-api-access-nf5kc\") pod \"network-check-target-j9jjm\" (UID: \"ca6e644f-c53b-41dd-a16f-9fb9997533dd\") " pod="openshift-network-diagnostics/network-check-target-j9jjm"
Mar 20 08:34:27.977142 master-0 kubenswrapper[4041]: E0320 08:34:27.977084 4041 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 20 08:34:27.977142 master-0 kubenswrapper[4041]: E0320 08:34:27.977110 4041 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 20 08:34:27.977142 master-0 kubenswrapper[4041]: E0320 08:34:27.977127 4041 projected.go:194] Error preparing data for projected volume kube-api-access-nf5kc for pod openshift-network-diagnostics/network-check-target-j9jjm: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 20 08:34:27.977300 master-0 kubenswrapper[4041]: E0320 08:34:27.977250 4041 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ca6e644f-c53b-41dd-a16f-9fb9997533dd-kube-api-access-nf5kc podName:ca6e644f-c53b-41dd-a16f-9fb9997533dd nodeName:}" failed. No retries permitted until 2026-03-20 08:34:28.977179138 +0000 UTC m=+76.227524683 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-nf5kc" (UniqueName: "kubernetes.io/projected/ca6e644f-c53b-41dd-a16f-9fb9997533dd-kube-api-access-nf5kc") pod "network-check-target-j9jjm" (UID: "ca6e644f-c53b-41dd-a16f-9fb9997533dd") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 20 08:34:28.381137 master-0 kubenswrapper[4041]: I0320 08:34:28.381021 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/00350ac7-b40a-4459-b94c-a37d7b613645-metrics-certs\") pod \"network-metrics-daemon-nfrth\" (UID: \"00350ac7-b40a-4459-b94c-a37d7b613645\") " pod="openshift-multus/network-metrics-daemon-nfrth"
Mar 20 08:34:28.381692 master-0 kubenswrapper[4041]: E0320 08:34:28.381197 4041 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Mar 20 08:34:28.381692 master-0 kubenswrapper[4041]: E0320 08:34:28.381289 4041 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/00350ac7-b40a-4459-b94c-a37d7b613645-metrics-certs podName:00350ac7-b40a-4459-b94c-a37d7b613645 nodeName:}" failed. No retries permitted until 2026-03-20 08:34:44.381271987 +0000 UTC m=+91.631617482 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/00350ac7-b40a-4459-b94c-a37d7b613645-metrics-certs") pod "network-metrics-daemon-nfrth" (UID: "00350ac7-b40a-4459-b94c-a37d7b613645") : object "openshift-multus"/"metrics-daemon-secret" not registered
Mar 20 08:34:28.535300 master-0 kubenswrapper[4041]: I0320 08:34:28.535228 4041 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-j9jjm" Mar 20 08:34:28.535525 master-0 kubenswrapper[4041]: E0320 08:34:28.535415 4041 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-j9jjm" podUID="ca6e644f-c53b-41dd-a16f-9fb9997533dd" Mar 20 08:34:28.987959 master-0 kubenswrapper[4041]: I0320 08:34:28.987859 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nf5kc\" (UniqueName: \"kubernetes.io/projected/ca6e644f-c53b-41dd-a16f-9fb9997533dd-kube-api-access-nf5kc\") pod \"network-check-target-j9jjm\" (UID: \"ca6e644f-c53b-41dd-a16f-9fb9997533dd\") " pod="openshift-network-diagnostics/network-check-target-j9jjm" Mar 20 08:34:28.988256 master-0 kubenswrapper[4041]: E0320 08:34:28.988180 4041 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 20 08:34:28.988256 master-0 kubenswrapper[4041]: E0320 08:34:28.988235 4041 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 20 08:34:28.988256 master-0 kubenswrapper[4041]: E0320 08:34:28.988257 4041 projected.go:194] Error preparing data for projected volume kube-api-access-nf5kc for pod openshift-network-diagnostics/network-check-target-j9jjm: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 08:34:28.988502 master-0 kubenswrapper[4041]: E0320 08:34:28.988381 4041 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/projected/ca6e644f-c53b-41dd-a16f-9fb9997533dd-kube-api-access-nf5kc podName:ca6e644f-c53b-41dd-a16f-9fb9997533dd nodeName:}" failed. No retries permitted until 2026-03-20 08:34:30.988349871 +0000 UTC m=+78.238695416 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-nf5kc" (UniqueName: "kubernetes.io/projected/ca6e644f-c53b-41dd-a16f-9fb9997533dd-kube-api-access-nf5kc") pod "network-check-target-j9jjm" (UID: "ca6e644f-c53b-41dd-a16f-9fb9997533dd") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 08:34:31.007443 master-0 kubenswrapper[4041]: I0320 08:34:31.005580 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nf5kc\" (UniqueName: \"kubernetes.io/projected/ca6e644f-c53b-41dd-a16f-9fb9997533dd-kube-api-access-nf5kc\") pod \"network-check-target-j9jjm\" (UID: \"ca6e644f-c53b-41dd-a16f-9fb9997533dd\") " pod="openshift-network-diagnostics/network-check-target-j9jjm" Mar 20 08:34:31.007443 master-0 kubenswrapper[4041]: E0320 08:34:31.005763 4041 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 20 08:34:31.007443 master-0 kubenswrapper[4041]: E0320 08:34:31.005788 4041 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 20 08:34:31.007443 master-0 kubenswrapper[4041]: E0320 08:34:31.005804 4041 projected.go:194] Error preparing data for projected volume kube-api-access-nf5kc for pod openshift-network-diagnostics/network-check-target-j9jjm: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not 
registered] Mar 20 08:34:31.007443 master-0 kubenswrapper[4041]: E0320 08:34:31.005865 4041 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ca6e644f-c53b-41dd-a16f-9fb9997533dd-kube-api-access-nf5kc podName:ca6e644f-c53b-41dd-a16f-9fb9997533dd nodeName:}" failed. No retries permitted until 2026-03-20 08:34:35.005845482 +0000 UTC m=+82.256190987 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-nf5kc" (UniqueName: "kubernetes.io/projected/ca6e644f-c53b-41dd-a16f-9fb9997533dd-kube-api-access-nf5kc") pod "network-check-target-j9jjm" (UID: "ca6e644f-c53b-41dd-a16f-9fb9997533dd") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 08:34:31.007443 master-0 kubenswrapper[4041]: I0320 08:34:31.005968 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-j9jjm" Mar 20 08:34:31.007443 master-0 kubenswrapper[4041]: E0320 08:34:31.006061 4041 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-j9jjm" podUID="ca6e644f-c53b-41dd-a16f-9fb9997533dd" Mar 20 08:34:31.016193 master-0 kubenswrapper[4041]: I0320 08:34:31.015292 4041 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-nfrth" Mar 20 08:34:31.016193 master-0 kubenswrapper[4041]: E0320 08:34:31.015518 4041 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nfrth" podUID="00350ac7-b40a-4459-b94c-a37d7b613645" Mar 20 08:34:32.535081 master-0 kubenswrapper[4041]: I0320 08:34:32.534980 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nfrth" Mar 20 08:34:32.535081 master-0 kubenswrapper[4041]: I0320 08:34:32.535058 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-j9jjm" Mar 20 08:34:32.535979 master-0 kubenswrapper[4041]: E0320 08:34:32.535134 4041 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nfrth" podUID="00350ac7-b40a-4459-b94c-a37d7b613645" Mar 20 08:34:32.535979 master-0 kubenswrapper[4041]: E0320 08:34:32.535185 4041 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-j9jjm" podUID="ca6e644f-c53b-41dd-a16f-9fb9997533dd" Mar 20 08:34:34.123684 master-0 kubenswrapper[4041]: W0320 08:34:34.123633 4041 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), hostPort (container "etcd" uses hostPorts 2379, 2380), privileged (containers "etcdctl", "etcd" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers "etcdctl", "etcd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "etcdctl", "etcd" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "certs", "data-dir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "etcdctl", "etcd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "etcdctl", "etcd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Mar 20 08:34:34.125406 master-0 kubenswrapper[4041]: I0320 08:34:34.125341 4041 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0-master-0"] Mar 20 08:34:34.535111 master-0 kubenswrapper[4041]: I0320 08:34:34.534972 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nfrth" Mar 20 08:34:34.535290 master-0 kubenswrapper[4041]: I0320 08:34:34.534992 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-j9jjm" Mar 20 08:34:34.535290 master-0 kubenswrapper[4041]: E0320 08:34:34.535194 4041 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nfrth" podUID="00350ac7-b40a-4459-b94c-a37d7b613645" Mar 20 08:34:34.535412 master-0 kubenswrapper[4041]: E0320 08:34:34.535328 4041 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-j9jjm" podUID="ca6e644f-c53b-41dd-a16f-9fb9997533dd" Mar 20 08:34:35.052878 master-0 kubenswrapper[4041]: I0320 08:34:35.052823 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nf5kc\" (UniqueName: \"kubernetes.io/projected/ca6e644f-c53b-41dd-a16f-9fb9997533dd-kube-api-access-nf5kc\") pod \"network-check-target-j9jjm\" (UID: \"ca6e644f-c53b-41dd-a16f-9fb9997533dd\") " pod="openshift-network-diagnostics/network-check-target-j9jjm" Mar 20 08:34:35.053345 master-0 kubenswrapper[4041]: E0320 08:34:35.052995 4041 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 20 08:34:35.053345 master-0 kubenswrapper[4041]: E0320 08:34:35.053012 4041 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 20 08:34:35.053345 master-0 kubenswrapper[4041]: E0320 08:34:35.053321 4041 projected.go:194] Error preparing data for projected volume kube-api-access-nf5kc for pod openshift-network-diagnostics/network-check-target-j9jjm: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 08:34:35.053448 master-0 kubenswrapper[4041]: E0320 08:34:35.053376 4041 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ca6e644f-c53b-41dd-a16f-9fb9997533dd-kube-api-access-nf5kc podName:ca6e644f-c53b-41dd-a16f-9fb9997533dd nodeName:}" failed. No retries permitted until 2026-03-20 08:34:43.053363228 +0000 UTC m=+90.303708733 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-nf5kc" (UniqueName: "kubernetes.io/projected/ca6e644f-c53b-41dd-a16f-9fb9997533dd-kube-api-access-nf5kc") pod "network-check-target-j9jjm" (UID: "ca6e644f-c53b-41dd-a16f-9fb9997533dd") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 08:34:35.527350 master-0 kubenswrapper[4041]: I0320 08:34:35.525111 4041 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-dq29v"] Mar 20 08:34:35.527350 master-0 kubenswrapper[4041]: I0320 08:34:35.525807 4041 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dq29v" Mar 20 08:34:35.528437 master-0 kubenswrapper[4041]: I0320 08:34:35.528393 4041 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 20 08:34:35.528541 master-0 kubenswrapper[4041]: I0320 08:34:35.528516 4041 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 20 08:34:35.528620 master-0 kubenswrapper[4041]: I0320 08:34:35.528562 4041 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 20 08:34:35.528762 master-0 kubenswrapper[4041]: I0320 08:34:35.528714 4041 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 20 08:34:35.529103 master-0 kubenswrapper[4041]: I0320 08:34:35.529061 4041 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 20 08:34:35.557122 master-0 kubenswrapper[4041]: I0320 08:34:35.557070 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9d653bfa-7168-49fa-a838-aedb33c7e60f-webhook-cert\") pod \"network-node-identity-dq29v\" (UID: \"9d653bfa-7168-49fa-a838-aedb33c7e60f\") " pod="openshift-network-node-identity/network-node-identity-dq29v" Mar 20 08:34:35.557122 master-0 kubenswrapper[4041]: I0320 08:34:35.557110 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9d653bfa-7168-49fa-a838-aedb33c7e60f-env-overrides\") pod \"network-node-identity-dq29v\" (UID: \"9d653bfa-7168-49fa-a838-aedb33c7e60f\") " pod="openshift-network-node-identity/network-node-identity-dq29v" Mar 20 08:34:35.557247 master-0 
kubenswrapper[4041]: I0320 08:34:35.557129 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/9d653bfa-7168-49fa-a838-aedb33c7e60f-ovnkube-identity-cm\") pod \"network-node-identity-dq29v\" (UID: \"9d653bfa-7168-49fa-a838-aedb33c7e60f\") " pod="openshift-network-node-identity/network-node-identity-dq29v" Mar 20 08:34:35.557247 master-0 kubenswrapper[4041]: I0320 08:34:35.557145 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jmlf\" (UniqueName: \"kubernetes.io/projected/9d653bfa-7168-49fa-a838-aedb33c7e60f-kube-api-access-8jmlf\") pod \"network-node-identity-dq29v\" (UID: \"9d653bfa-7168-49fa-a838-aedb33c7e60f\") " pod="openshift-network-node-identity/network-node-identity-dq29v" Mar 20 08:34:35.658393 master-0 kubenswrapper[4041]: I0320 08:34:35.658171 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9d653bfa-7168-49fa-a838-aedb33c7e60f-webhook-cert\") pod \"network-node-identity-dq29v\" (UID: \"9d653bfa-7168-49fa-a838-aedb33c7e60f\") " pod="openshift-network-node-identity/network-node-identity-dq29v" Mar 20 08:34:35.658393 master-0 kubenswrapper[4041]: I0320 08:34:35.658244 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9d653bfa-7168-49fa-a838-aedb33c7e60f-env-overrides\") pod \"network-node-identity-dq29v\" (UID: \"9d653bfa-7168-49fa-a838-aedb33c7e60f\") " pod="openshift-network-node-identity/network-node-identity-dq29v" Mar 20 08:34:35.658393 master-0 kubenswrapper[4041]: I0320 08:34:35.658308 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/9d653bfa-7168-49fa-a838-aedb33c7e60f-ovnkube-identity-cm\") pod 
\"network-node-identity-dq29v\" (UID: \"9d653bfa-7168-49fa-a838-aedb33c7e60f\") " pod="openshift-network-node-identity/network-node-identity-dq29v" Mar 20 08:34:35.658997 master-0 kubenswrapper[4041]: I0320 08:34:35.658910 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jmlf\" (UniqueName: \"kubernetes.io/projected/9d653bfa-7168-49fa-a838-aedb33c7e60f-kube-api-access-8jmlf\") pod \"network-node-identity-dq29v\" (UID: \"9d653bfa-7168-49fa-a838-aedb33c7e60f\") " pod="openshift-network-node-identity/network-node-identity-dq29v" Mar 20 08:34:35.659594 master-0 kubenswrapper[4041]: I0320 08:34:35.659514 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9d653bfa-7168-49fa-a838-aedb33c7e60f-env-overrides\") pod \"network-node-identity-dq29v\" (UID: \"9d653bfa-7168-49fa-a838-aedb33c7e60f\") " pod="openshift-network-node-identity/network-node-identity-dq29v" Mar 20 08:34:35.660567 master-0 kubenswrapper[4041]: I0320 08:34:35.660510 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/9d653bfa-7168-49fa-a838-aedb33c7e60f-ovnkube-identity-cm\") pod \"network-node-identity-dq29v\" (UID: \"9d653bfa-7168-49fa-a838-aedb33c7e60f\") " pod="openshift-network-node-identity/network-node-identity-dq29v" Mar 20 08:34:35.663891 master-0 kubenswrapper[4041]: I0320 08:34:35.663805 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9d653bfa-7168-49fa-a838-aedb33c7e60f-webhook-cert\") pod \"network-node-identity-dq29v\" (UID: \"9d653bfa-7168-49fa-a838-aedb33c7e60f\") " pod="openshift-network-node-identity/network-node-identity-dq29v" Mar 20 08:34:36.236302 master-0 kubenswrapper[4041]: I0320 08:34:36.236150 4041 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-etcd/etcd-master-0-master-0" podStartSLOduration=3.236123272 podStartE2EDuration="3.236123272s" podCreationTimestamp="2026-03-20 08:34:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:34:36.229193303 +0000 UTC m=+83.479538838" watchObservedRunningTime="2026-03-20 08:34:36.236123272 +0000 UTC m=+83.486468787" Mar 20 08:34:36.243436 master-0 kubenswrapper[4041]: I0320 08:34:36.243357 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jmlf\" (UniqueName: \"kubernetes.io/projected/9d653bfa-7168-49fa-a838-aedb33c7e60f-kube-api-access-8jmlf\") pod \"network-node-identity-dq29v\" (UID: \"9d653bfa-7168-49fa-a838-aedb33c7e60f\") " pod="openshift-network-node-identity/network-node-identity-dq29v" Mar 20 08:34:36.441190 master-0 kubenswrapper[4041]: I0320 08:34:36.440652 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dq29v" Mar 20 08:34:36.534853 master-0 kubenswrapper[4041]: I0320 08:34:36.534737 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nfrth" Mar 20 08:34:36.535463 master-0 kubenswrapper[4041]: I0320 08:34:36.534862 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-j9jjm" Mar 20 08:34:36.535463 master-0 kubenswrapper[4041]: E0320 08:34:36.534940 4041 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nfrth" podUID="00350ac7-b40a-4459-b94c-a37d7b613645" Mar 20 08:34:36.535463 master-0 kubenswrapper[4041]: E0320 08:34:36.535043 4041 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-j9jjm" podUID="ca6e644f-c53b-41dd-a16f-9fb9997533dd" Mar 20 08:34:37.019866 master-0 kubenswrapper[4041]: I0320 08:34:37.019305 4041 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dq29v" event={"ID":"9d653bfa-7168-49fa-a838-aedb33c7e60f","Type":"ContainerStarted","Data":"31a46ba310ff197c87c66f84e5bd99a13a3ff1f8cbacfdf28d2bf427d9553306"} Mar 20 08:34:37.553763 master-0 kubenswrapper[4041]: I0320 08:34:37.553619 4041 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"] Mar 20 08:34:38.024695 master-0 kubenswrapper[4041]: I0320 08:34:38.024639 4041 generic.go:334] "Generic (PLEG): container finished" podID="22ff82cf-0d7d-4955-9b7c-97757acbc021" containerID="49a024c7c79250dd61c634f6e633e0edd247a3c463686f54208b638a2fd19ebb" exitCode=0 Mar 20 08:34:38.024919 master-0 kubenswrapper[4041]: I0320 08:34:38.024730 4041 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-x7vrg" event={"ID":"22ff82cf-0d7d-4955-9b7c-97757acbc021","Type":"ContainerDied","Data":"49a024c7c79250dd61c634f6e633e0edd247a3c463686f54208b638a2fd19ebb"} Mar 20 08:34:38.056521 master-0 kubenswrapper[4041]: I0320 08:34:38.056380 4041 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podStartSLOduration=1.056347857 
podStartE2EDuration="1.056347857s" podCreationTimestamp="2026-03-20 08:34:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:34:38.055843894 +0000 UTC m=+85.306189419" watchObservedRunningTime="2026-03-20 08:34:38.056347857 +0000 UTC m=+85.306693382" Mar 20 08:34:38.535218 master-0 kubenswrapper[4041]: I0320 08:34:38.535160 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nfrth" Mar 20 08:34:38.535451 master-0 kubenswrapper[4041]: I0320 08:34:38.535227 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-j9jjm" Mar 20 08:34:38.535451 master-0 kubenswrapper[4041]: E0320 08:34:38.535332 4041 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nfrth" podUID="00350ac7-b40a-4459-b94c-a37d7b613645" Mar 20 08:34:38.535451 master-0 kubenswrapper[4041]: E0320 08:34:38.535420 4041 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-j9jjm" podUID="ca6e644f-c53b-41dd-a16f-9fb9997533dd" Mar 20 08:34:40.035126 master-0 kubenswrapper[4041]: I0320 08:34:40.035031 4041 generic.go:334] "Generic (PLEG): container finished" podID="22ff82cf-0d7d-4955-9b7c-97757acbc021" containerID="633e246d0eb69524c4e825553d8b2a17d7166e97b618f96a41148d7625aa5ed0" exitCode=0 Mar 20 08:34:40.035126 master-0 kubenswrapper[4041]: I0320 08:34:40.035101 4041 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-x7vrg" event={"ID":"22ff82cf-0d7d-4955-9b7c-97757acbc021","Type":"ContainerDied","Data":"633e246d0eb69524c4e825553d8b2a17d7166e97b618f96a41148d7625aa5ed0"} Mar 20 08:34:40.535634 master-0 kubenswrapper[4041]: I0320 08:34:40.535563 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-j9jjm" Mar 20 08:34:40.535634 master-0 kubenswrapper[4041]: I0320 08:34:40.535565 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nfrth" Mar 20 08:34:40.535963 master-0 kubenswrapper[4041]: E0320 08:34:40.535698 4041 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-j9jjm" podUID="ca6e644f-c53b-41dd-a16f-9fb9997533dd" Mar 20 08:34:40.535963 master-0 kubenswrapper[4041]: E0320 08:34:40.535815 4041 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nfrth" podUID="00350ac7-b40a-4459-b94c-a37d7b613645" Mar 20 08:34:42.535531 master-0 kubenswrapper[4041]: I0320 08:34:42.535474 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-j9jjm" Mar 20 08:34:42.535531 master-0 kubenswrapper[4041]: I0320 08:34:42.535492 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nfrth" Mar 20 08:34:42.537185 master-0 kubenswrapper[4041]: E0320 08:34:42.535629 4041 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-j9jjm" podUID="ca6e644f-c53b-41dd-a16f-9fb9997533dd" Mar 20 08:34:42.537185 master-0 kubenswrapper[4041]: E0320 08:34:42.535995 4041 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nfrth" podUID="00350ac7-b40a-4459-b94c-a37d7b613645" Mar 20 08:34:43.123036 master-0 kubenswrapper[4041]: I0320 08:34:43.122931 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nf5kc\" (UniqueName: \"kubernetes.io/projected/ca6e644f-c53b-41dd-a16f-9fb9997533dd-kube-api-access-nf5kc\") pod \"network-check-target-j9jjm\" (UID: \"ca6e644f-c53b-41dd-a16f-9fb9997533dd\") " pod="openshift-network-diagnostics/network-check-target-j9jjm" Mar 20 08:34:43.123431 master-0 kubenswrapper[4041]: E0320 08:34:43.123153 4041 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 20 08:34:43.123431 master-0 kubenswrapper[4041]: E0320 08:34:43.123194 4041 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 20 08:34:43.123431 master-0 kubenswrapper[4041]: E0320 08:34:43.123211 4041 projected.go:194] Error preparing data for projected volume kube-api-access-nf5kc for pod openshift-network-diagnostics/network-check-target-j9jjm: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 08:34:43.123431 master-0 kubenswrapper[4041]: E0320 08:34:43.123304 4041 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ca6e644f-c53b-41dd-a16f-9fb9997533dd-kube-api-access-nf5kc podName:ca6e644f-c53b-41dd-a16f-9fb9997533dd nodeName:}" failed. No retries permitted until 2026-03-20 08:34:59.123283979 +0000 UTC m=+106.373629494 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-nf5kc" (UniqueName: "kubernetes.io/projected/ca6e644f-c53b-41dd-a16f-9fb9997533dd-kube-api-access-nf5kc") pod "network-check-target-j9jjm" (UID: "ca6e644f-c53b-41dd-a16f-9fb9997533dd") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 08:34:44.434764 master-0 kubenswrapper[4041]: I0320 08:34:44.434641 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/00350ac7-b40a-4459-b94c-a37d7b613645-metrics-certs\") pod \"network-metrics-daemon-nfrth\" (UID: \"00350ac7-b40a-4459-b94c-a37d7b613645\") " pod="openshift-multus/network-metrics-daemon-nfrth" Mar 20 08:34:44.435798 master-0 kubenswrapper[4041]: E0320 08:34:44.434825 4041 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 20 08:34:44.435798 master-0 kubenswrapper[4041]: E0320 08:34:44.434921 4041 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/00350ac7-b40a-4459-b94c-a37d7b613645-metrics-certs podName:00350ac7-b40a-4459-b94c-a37d7b613645 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:16.434899479 +0000 UTC m=+123.685244974 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/00350ac7-b40a-4459-b94c-a37d7b613645-metrics-certs") pod "network-metrics-daemon-nfrth" (UID: "00350ac7-b40a-4459-b94c-a37d7b613645") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 20 08:34:44.535858 master-0 kubenswrapper[4041]: I0320 08:34:44.534801 4041 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-j9jjm" Mar 20 08:34:44.535858 master-0 kubenswrapper[4041]: I0320 08:34:44.534845 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nfrth" Mar 20 08:34:44.535858 master-0 kubenswrapper[4041]: E0320 08:34:44.534905 4041 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-j9jjm" podUID="ca6e644f-c53b-41dd-a16f-9fb9997533dd" Mar 20 08:34:44.535858 master-0 kubenswrapper[4041]: E0320 08:34:44.535060 4041 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nfrth" podUID="00350ac7-b40a-4459-b94c-a37d7b613645" Mar 20 08:34:46.535285 master-0 kubenswrapper[4041]: I0320 08:34:46.535192 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nfrth" Mar 20 08:34:46.536219 master-0 kubenswrapper[4041]: E0320 08:34:46.535429 4041 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nfrth" podUID="00350ac7-b40a-4459-b94c-a37d7b613645" Mar 20 08:34:46.536219 master-0 kubenswrapper[4041]: I0320 08:34:46.535533 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-j9jjm" Mar 20 08:34:46.536219 master-0 kubenswrapper[4041]: E0320 08:34:46.535636 4041 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-j9jjm" podUID="ca6e644f-c53b-41dd-a16f-9fb9997533dd" Mar 20 08:34:47.554756 master-0 kubenswrapper[4041]: I0320 08:34:47.554449 4041 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"] Mar 20 08:34:48.058468 master-0 kubenswrapper[4041]: I0320 08:34:48.058339 4041 generic.go:334] "Generic (PLEG): container finished" podID="669651d1-cadf-4a5c-bed8-6ff2107774f8" containerID="9d8b5757186b0abb0356c7dc8c0950266458da6c9fd6de30efea532b963f1eee" exitCode=0 Mar 20 08:34:48.058468 master-0 kubenswrapper[4041]: I0320 08:34:48.058422 4041 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87r4t" event={"ID":"669651d1-cadf-4a5c-bed8-6ff2107774f8","Type":"ContainerDied","Data":"9d8b5757186b0abb0356c7dc8c0950266458da6c9fd6de30efea532b963f1eee"} Mar 20 08:34:48.060946 master-0 kubenswrapper[4041]: I0320 08:34:48.060781 4041 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-crrdk" event={"ID":"210dd7f0-d1c0-407a-b89b-f11ef605e5df","Type":"ContainerStarted","Data":"536065a4d8759d271003b36465db4bd4965a5a320e8baa9df238dec6c8adc25f"} Mar 20 08:34:48.120736 master-0 kubenswrapper[4041]: I0320 
08:34:48.120114 4041 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/bootstrap-kube-controller-manager-master-0" podStartSLOduration=1.120090977 podStartE2EDuration="1.120090977s" podCreationTimestamp="2026-03-20 08:34:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:34:48.090648485 +0000 UTC m=+95.340994010" watchObservedRunningTime="2026-03-20 08:34:48.120090977 +0000 UTC m=+95.370436482" Mar 20 08:34:48.163473 master-0 kubenswrapper[4041]: I0320 08:34:48.163365 4041 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-crrdk" podStartSLOduration=4.282914938 podStartE2EDuration="25.163330776s" podCreationTimestamp="2026-03-20 08:34:23 +0000 UTC" firstStartedPulling="2026-03-20 08:34:26.46009076 +0000 UTC m=+73.710436275" lastFinishedPulling="2026-03-20 08:34:47.340506608 +0000 UTC m=+94.590852113" observedRunningTime="2026-03-20 08:34:48.163248464 +0000 UTC m=+95.413593989" watchObservedRunningTime="2026-03-20 08:34:48.163330776 +0000 UTC m=+95.413676281" Mar 20 08:34:48.535734 master-0 kubenswrapper[4041]: I0320 08:34:48.535400 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nfrth" Mar 20 08:34:48.535990 master-0 kubenswrapper[4041]: E0320 08:34:48.535812 4041 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nfrth" podUID="00350ac7-b40a-4459-b94c-a37d7b613645" Mar 20 08:34:48.535990 master-0 kubenswrapper[4041]: I0320 08:34:48.535504 4041 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-j9jjm" Mar 20 08:34:48.536062 master-0 kubenswrapper[4041]: E0320 08:34:48.536009 4041 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-j9jjm" podUID="ca6e644f-c53b-41dd-a16f-9fb9997533dd" Mar 20 08:34:49.069752 master-0 kubenswrapper[4041]: I0320 08:34:49.069676 4041 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dq29v" event={"ID":"9d653bfa-7168-49fa-a838-aedb33c7e60f","Type":"ContainerStarted","Data":"b286a4895dfb5ad25ff94bbecb22ea3a5b89ba604a59910e8726e22ec7afd75a"} Mar 20 08:34:49.075899 master-0 kubenswrapper[4041]: I0320 08:34:49.074068 4041 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87r4t" event={"ID":"669651d1-cadf-4a5c-bed8-6ff2107774f8","Type":"ContainerStarted","Data":"6f2f0260b2d0745477681ebf3c6bf6ff162a5ba0fb12ca8c5656caa8d62aff19"} Mar 20 08:34:49.075899 master-0 kubenswrapper[4041]: I0320 08:34:49.074145 4041 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87r4t" event={"ID":"669651d1-cadf-4a5c-bed8-6ff2107774f8","Type":"ContainerStarted","Data":"4d65e10a1397dda90a3f09b3185de8aba8a60549930df2407ef2e9d35c63a506"} Mar 20 08:34:49.075899 master-0 kubenswrapper[4041]: I0320 08:34:49.074161 4041 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87r4t" event={"ID":"669651d1-cadf-4a5c-bed8-6ff2107774f8","Type":"ContainerStarted","Data":"1d774aa342629f7fdb30a968bab519fd61ee4233479481d4a4bafcb2dc9bf15b"} Mar 20 08:34:49.075899 master-0 kubenswrapper[4041]: I0320 08:34:49.074174 4041 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87r4t" event={"ID":"669651d1-cadf-4a5c-bed8-6ff2107774f8","Type":"ContainerStarted","Data":"ffcb70bf7040080fb961520354fdd652be8ba5f4894e64db3333169f16167de6"} Mar 20 08:34:49.075899 master-0 kubenswrapper[4041]: I0320 08:34:49.074191 4041 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87r4t" event={"ID":"669651d1-cadf-4a5c-bed8-6ff2107774f8","Type":"ContainerStarted","Data":"027ee159cf4c84c1249b8b3a8b23bb814d8ae496d4ab9df979f3560b39dc2d79"} Mar 20 08:34:49.075899 master-0 kubenswrapper[4041]: I0320 08:34:49.074204 4041 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87r4t" event={"ID":"669651d1-cadf-4a5c-bed8-6ff2107774f8","Type":"ContainerStarted","Data":"5bfcae3f91be7fe1ddfb8a13551bb3dff32c9a636de438a89b4efe379f02514f"} Mar 20 08:34:50.079168 master-0 kubenswrapper[4041]: I0320 08:34:50.079053 4041 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dq29v" event={"ID":"9d653bfa-7168-49fa-a838-aedb33c7e60f","Type":"ContainerStarted","Data":"c5f00c0d77211fa7340df0b5c9e4c67e0a0eeb68e81ac9de5effbf2d875c406e"} Mar 20 08:34:50.095901 master-0 kubenswrapper[4041]: I0320 08:34:50.095738 4041 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-node-identity/network-node-identity-dq29v" podStartSLOduration=4.013719184 podStartE2EDuration="16.095655562s" podCreationTimestamp="2026-03-20 08:34:34 +0000 UTC" firstStartedPulling="2026-03-20 08:34:36.818747673 +0000 UTC m=+84.069093178" lastFinishedPulling="2026-03-20 08:34:48.900684051 +0000 UTC m=+96.151029556" observedRunningTime="2026-03-20 08:34:50.09443912 +0000 UTC m=+97.344784635" watchObservedRunningTime="2026-03-20 08:34:50.095655562 +0000 UTC m=+97.346001097" Mar 20 08:34:50.535645 master-0 kubenswrapper[4041]: I0320 08:34:50.535446 4041 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nfrth" Mar 20 08:34:50.535645 master-0 kubenswrapper[4041]: I0320 08:34:50.535501 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-j9jjm" Mar 20 08:34:50.536610 master-0 kubenswrapper[4041]: E0320 08:34:50.535798 4041 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nfrth" podUID="00350ac7-b40a-4459-b94c-a37d7b613645" Mar 20 08:34:50.536610 master-0 kubenswrapper[4041]: E0320 08:34:50.536036 4041 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-j9jjm" podUID="ca6e644f-c53b-41dd-a16f-9fb9997533dd" Mar 20 08:34:52.090370 master-0 kubenswrapper[4041]: I0320 08:34:52.090011 4041 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87r4t" event={"ID":"669651d1-cadf-4a5c-bed8-6ff2107774f8","Type":"ContainerStarted","Data":"688d6f88f24a295dfffbab4712080c913cab447392ca97ab4f10a450098c9db7"} Mar 20 08:34:52.534978 master-0 kubenswrapper[4041]: I0320 08:34:52.534920 4041 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-nfrth" Mar 20 08:34:52.535208 master-0 kubenswrapper[4041]: E0320 08:34:52.535102 4041 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nfrth" podUID="00350ac7-b40a-4459-b94c-a37d7b613645" Mar 20 08:34:52.535857 master-0 kubenswrapper[4041]: I0320 08:34:52.535411 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-j9jjm" Mar 20 08:34:52.535857 master-0 kubenswrapper[4041]: E0320 08:34:52.535476 4041 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-j9jjm" podUID="ca6e644f-c53b-41dd-a16f-9fb9997533dd" Mar 20 08:34:54.107444 master-0 kubenswrapper[4041]: I0320 08:34:54.107391 4041 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87r4t" event={"ID":"669651d1-cadf-4a5c-bed8-6ff2107774f8","Type":"ContainerStarted","Data":"2eb2017a2994f831033d2232c2f0c55ec60ee11e7a5841d1158c8dc98a064f25"} Mar 20 08:34:54.193606 master-0 kubenswrapper[4041]: I0320 08:34:54.193386 4041 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-87r4t"] Mar 20 08:34:54.535230 master-0 kubenswrapper[4041]: I0320 08:34:54.534806 4041 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-j9jjm" Mar 20 08:34:54.535230 master-0 kubenswrapper[4041]: E0320 08:34:54.535203 4041 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-j9jjm" podUID="ca6e644f-c53b-41dd-a16f-9fb9997533dd" Mar 20 08:34:54.535541 master-0 kubenswrapper[4041]: I0320 08:34:54.534812 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nfrth" Mar 20 08:34:54.535605 master-0 kubenswrapper[4041]: E0320 08:34:54.535548 4041 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nfrth" podUID="00350ac7-b40a-4459-b94c-a37d7b613645" Mar 20 08:34:54.544931 master-0 kubenswrapper[4041]: I0320 08:34:54.544833 4041 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"] Mar 20 08:34:55.115020 master-0 kubenswrapper[4041]: I0320 08:34:55.114954 4041 generic.go:334] "Generic (PLEG): container finished" podID="22ff82cf-0d7d-4955-9b7c-97757acbc021" containerID="1ad464d19cae2361db03cbce68a3a46d3a3a7e57495ff1c59b795128f430f3c3" exitCode=0 Mar 20 08:34:55.116537 master-0 kubenswrapper[4041]: I0320 08:34:55.115088 4041 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-x7vrg" event={"ID":"22ff82cf-0d7d-4955-9b7c-97757acbc021","Type":"ContainerDied","Data":"1ad464d19cae2361db03cbce68a3a46d3a3a7e57495ff1c59b795128f430f3c3"} Mar 20 08:34:55.116537 master-0 kubenswrapper[4041]: I0320 08:34:55.115642 4041 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-87r4t" Mar 20 08:34:55.135441 master-0 kubenswrapper[4041]: I0320 08:34:55.135339 4041 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/bootstrap-kube-scheduler-master-0" podStartSLOduration=1.135249078 podStartE2EDuration="1.135249078s" podCreationTimestamp="2026-03-20 08:34:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:34:55.134709724 +0000 UTC m=+102.385055309" watchObservedRunningTime="2026-03-20 08:34:55.135249078 +0000 UTC m=+102.385594613" Mar 20 08:34:55.206320 master-0 kubenswrapper[4041]: I0320 08:34:55.206196 4041 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-87r4t" podStartSLOduration=10.094298634 podStartE2EDuration="31.206166453s" podCreationTimestamp="2026-03-20 08:34:24 +0000 UTC" 
firstStartedPulling="2026-03-20 08:34:26.197308208 +0000 UTC m=+73.447653753" lastFinishedPulling="2026-03-20 08:34:47.309176047 +0000 UTC m=+94.559521572" observedRunningTime="2026-03-20 08:34:55.205956078 +0000 UTC m=+102.456301623" watchObservedRunningTime="2026-03-20 08:34:55.206166453 +0000 UTC m=+102.456511998" Mar 20 08:34:56.130064 master-0 kubenswrapper[4041]: I0320 08:34:56.129976 4041 generic.go:334] "Generic (PLEG): container finished" podID="22ff82cf-0d7d-4955-9b7c-97757acbc021" containerID="66935bc88a172084ce89ee3474a8817878b895f87e27bbd9f994bbea54a28d58" exitCode=0 Mar 20 08:34:56.131051 master-0 kubenswrapper[4041]: I0320 08:34:56.130090 4041 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-x7vrg" event={"ID":"22ff82cf-0d7d-4955-9b7c-97757acbc021","Type":"ContainerDied","Data":"66935bc88a172084ce89ee3474a8817878b895f87e27bbd9f994bbea54a28d58"} Mar 20 08:34:56.131051 master-0 kubenswrapper[4041]: I0320 08:34:56.130734 4041 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-87r4t" podUID="669651d1-cadf-4a5c-bed8-6ff2107774f8" containerName="ovn-controller" containerID="cri-o://5bfcae3f91be7fe1ddfb8a13551bb3dff32c9a636de438a89b4efe379f02514f" gracePeriod=30 Mar 20 08:34:56.131249 master-0 kubenswrapper[4041]: I0320 08:34:56.131053 4041 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-87r4t" podUID="669651d1-cadf-4a5c-bed8-6ff2107774f8" containerName="northd" containerID="cri-o://4d65e10a1397dda90a3f09b3185de8aba8a60549930df2407ef2e9d35c63a506" gracePeriod=30 Mar 20 08:34:56.131249 master-0 kubenswrapper[4041]: I0320 08:34:56.131022 4041 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-87r4t" podUID="669651d1-cadf-4a5c-bed8-6ff2107774f8" containerName="nbdb" 
containerID="cri-o://6f2f0260b2d0745477681ebf3c6bf6ff162a5ba0fb12ca8c5656caa8d62aff19" gracePeriod=30 Mar 20 08:34:56.131249 master-0 kubenswrapper[4041]: I0320 08:34:56.131174 4041 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-87r4t" podUID="669651d1-cadf-4a5c-bed8-6ff2107774f8" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://1d774aa342629f7fdb30a968bab519fd61ee4233479481d4a4bafcb2dc9bf15b" gracePeriod=30 Mar 20 08:34:56.131249 master-0 kubenswrapper[4041]: I0320 08:34:56.131241 4041 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-87r4t" podUID="669651d1-cadf-4a5c-bed8-6ff2107774f8" containerName="kube-rbac-proxy-node" containerID="cri-o://ffcb70bf7040080fb961520354fdd652be8ba5f4894e64db3333169f16167de6" gracePeriod=30 Mar 20 08:34:56.131608 master-0 kubenswrapper[4041]: I0320 08:34:56.131406 4041 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-87r4t" podUID="669651d1-cadf-4a5c-bed8-6ff2107774f8" containerName="ovn-acl-logging" containerID="cri-o://027ee159cf4c84c1249b8b3a8b23bb814d8ae496d4ab9df979f3560b39dc2d79" gracePeriod=30 Mar 20 08:34:56.131608 master-0 kubenswrapper[4041]: I0320 08:34:56.131546 4041 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-87r4t" Mar 20 08:34:56.131608 master-0 kubenswrapper[4041]: I0320 08:34:56.131576 4041 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-87r4t" Mar 20 08:34:56.131889 master-0 kubenswrapper[4041]: I0320 08:34:56.131626 4041 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-87r4t" podUID="669651d1-cadf-4a5c-bed8-6ff2107774f8" containerName="sbdb" 
containerID="cri-o://688d6f88f24a295dfffbab4712080c913cab447392ca97ab4f10a450098c9db7" gracePeriod=30 Mar 20 08:34:56.139564 master-0 kubenswrapper[4041]: E0320 08:34:56.139490 4041 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="688d6f88f24a295dfffbab4712080c913cab447392ca97ab4f10a450098c9db7" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Mar 20 08:34:56.141588 master-0 kubenswrapper[4041]: E0320 08:34:56.141451 4041 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="688d6f88f24a295dfffbab4712080c913cab447392ca97ab4f10a450098c9db7" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Mar 20 08:34:56.145925 master-0 kubenswrapper[4041]: E0320 08:34:56.145752 4041 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="688d6f88f24a295dfffbab4712080c913cab447392ca97ab4f10a450098c9db7" cmd=["/bin/bash","-c","set -xeo pipefail\n. 
/ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"] Mar 20 08:34:56.145925 master-0 kubenswrapper[4041]: E0320 08:34:56.145893 4041 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-87r4t" podUID="669651d1-cadf-4a5c-bed8-6ff2107774f8" containerName="sbdb" Mar 20 08:34:56.155905 master-0 kubenswrapper[4041]: E0320 08:34:56.155578 4041 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6f2f0260b2d0745477681ebf3c6bf6ff162a5ba0fb12ca8c5656caa8d62aff19" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Mar 20 08:34:56.157325 master-0 kubenswrapper[4041]: E0320 08:34:56.157225 4041 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6f2f0260b2d0745477681ebf3c6bf6ff162a5ba0fb12ca8c5656caa8d62aff19" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Mar 20 08:34:56.158794 master-0 kubenswrapper[4041]: E0320 08:34:56.158740 4041 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6f2f0260b2d0745477681ebf3c6bf6ff162a5ba0fb12ca8c5656caa8d62aff19" cmd=["/bin/bash","-c","set -xeo pipefail\n. 
/ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"nb\"\n"] Mar 20 08:34:56.158867 master-0 kubenswrapper[4041]: E0320 08:34:56.158794 4041 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-87r4t" podUID="669651d1-cadf-4a5c-bed8-6ff2107774f8" containerName="nbdb" Mar 20 08:34:56.177039 master-0 kubenswrapper[4041]: I0320 08:34:56.176981 4041 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-87r4t" podUID="669651d1-cadf-4a5c-bed8-6ff2107774f8" containerName="ovnkube-controller" containerID="cri-o://2eb2017a2994f831033d2232c2f0c55ec60ee11e7a5841d1158c8dc98a064f25" gracePeriod=30 Mar 20 08:34:56.349021 master-0 kubenswrapper[4041]: I0320 08:34:56.348971 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3776fdb6-25a1-4e3d-bdd1-437c69af3a55-serving-cert\") pod \"cluster-version-operator-56d8475767-jtqd4\" (UID: \"3776fdb6-25a1-4e3d-bdd1-437c69af3a55\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-jtqd4" Mar 20 08:34:56.349187 master-0 kubenswrapper[4041]: E0320 08:34:56.349148 4041 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 20 08:34:56.349249 master-0 kubenswrapper[4041]: E0320 08:34:56.349231 4041 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3776fdb6-25a1-4e3d-bdd1-437c69af3a55-serving-cert podName:3776fdb6-25a1-4e3d-bdd1-437c69af3a55 nodeName:}" failed. No retries permitted until 2026-03-20 08:36:00.34920924 +0000 UTC m=+167.599554745 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/3776fdb6-25a1-4e3d-bdd1-437c69af3a55-serving-cert") pod "cluster-version-operator-56d8475767-jtqd4" (UID: "3776fdb6-25a1-4e3d-bdd1-437c69af3a55") : secret "cluster-version-operator-serving-cert" not found Mar 20 08:34:56.433379 master-0 kubenswrapper[4041]: I0320 08:34:56.433130 4041 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-87r4t_669651d1-cadf-4a5c-bed8-6ff2107774f8/ovnkube-controller/0.log" Mar 20 08:34:56.434954 master-0 kubenswrapper[4041]: I0320 08:34:56.434917 4041 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-87r4t_669651d1-cadf-4a5c-bed8-6ff2107774f8/kube-rbac-proxy-ovn-metrics/0.log" Mar 20 08:34:56.435519 master-0 kubenswrapper[4041]: I0320 08:34:56.435485 4041 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-87r4t_669651d1-cadf-4a5c-bed8-6ff2107774f8/kube-rbac-proxy-node/0.log" Mar 20 08:34:56.436092 master-0 kubenswrapper[4041]: I0320 08:34:56.436067 4041 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-87r4t_669651d1-cadf-4a5c-bed8-6ff2107774f8/ovn-acl-logging/0.log" Mar 20 08:34:56.436735 master-0 kubenswrapper[4041]: I0320 08:34:56.436695 4041 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-87r4t_669651d1-cadf-4a5c-bed8-6ff2107774f8/ovn-controller/0.log" Mar 20 08:34:56.437393 master-0 kubenswrapper[4041]: I0320 08:34:56.437355 4041 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-87r4t" Mar 20 08:34:56.449858 master-0 kubenswrapper[4041]: I0320 08:34:56.449814 4041 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-host-cni-bin\") pod \"669651d1-cadf-4a5c-bed8-6ff2107774f8\" (UID: \"669651d1-cadf-4a5c-bed8-6ff2107774f8\") " Mar 20 08:34:56.449858 master-0 kubenswrapper[4041]: I0320 08:34:56.449844 4041 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-var-lib-openvswitch\") pod \"669651d1-cadf-4a5c-bed8-6ff2107774f8\" (UID: \"669651d1-cadf-4a5c-bed8-6ff2107774f8\") " Mar 20 08:34:56.449858 master-0 kubenswrapper[4041]: I0320 08:34:56.449866 4041 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ss8m\" (UniqueName: \"kubernetes.io/projected/669651d1-cadf-4a5c-bed8-6ff2107774f8-kube-api-access-6ss8m\") pod \"669651d1-cadf-4a5c-bed8-6ff2107774f8\" (UID: \"669651d1-cadf-4a5c-bed8-6ff2107774f8\") " Mar 20 08:34:56.450305 master-0 kubenswrapper[4041]: I0320 08:34:56.449882 4041 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-node-log\") pod \"669651d1-cadf-4a5c-bed8-6ff2107774f8\" (UID: \"669651d1-cadf-4a5c-bed8-6ff2107774f8\") " Mar 20 08:34:56.450305 master-0 kubenswrapper[4041]: I0320 08:34:56.449897 4041 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-run-ovn\") pod \"669651d1-cadf-4a5c-bed8-6ff2107774f8\" (UID: \"669651d1-cadf-4a5c-bed8-6ff2107774f8\") " Mar 20 08:34:56.450305 master-0 kubenswrapper[4041]: I0320 08:34:56.449916 4041 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/669651d1-cadf-4a5c-bed8-6ff2107774f8-ovnkube-script-lib\") pod \"669651d1-cadf-4a5c-bed8-6ff2107774f8\" (UID: \"669651d1-cadf-4a5c-bed8-6ff2107774f8\") " Mar 20 08:34:56.450305 master-0 kubenswrapper[4041]: I0320 08:34:56.449939 4041 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-systemd-units\") pod \"669651d1-cadf-4a5c-bed8-6ff2107774f8\" (UID: \"669651d1-cadf-4a5c-bed8-6ff2107774f8\") " Mar 20 08:34:56.450305 master-0 kubenswrapper[4041]: I0320 08:34:56.449956 4041 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-run-openvswitch\") pod \"669651d1-cadf-4a5c-bed8-6ff2107774f8\" (UID: \"669651d1-cadf-4a5c-bed8-6ff2107774f8\") " Mar 20 08:34:56.450305 master-0 kubenswrapper[4041]: I0320 08:34:56.449972 4041 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-etc-openvswitch\") pod \"669651d1-cadf-4a5c-bed8-6ff2107774f8\" (UID: \"669651d1-cadf-4a5c-bed8-6ff2107774f8\") " Mar 20 08:34:56.450305 master-0 kubenswrapper[4041]: I0320 08:34:56.449901 4041 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "669651d1-cadf-4a5c-bed8-6ff2107774f8" (UID: "669651d1-cadf-4a5c-bed8-6ff2107774f8"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:34:56.450305 master-0 kubenswrapper[4041]: I0320 08:34:56.449945 4041 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-node-log" (OuterVolumeSpecName: "node-log") pod "669651d1-cadf-4a5c-bed8-6ff2107774f8" (UID: "669651d1-cadf-4a5c-bed8-6ff2107774f8"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:34:56.450305 master-0 kubenswrapper[4041]: I0320 08:34:56.449952 4041 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "669651d1-cadf-4a5c-bed8-6ff2107774f8" (UID: "669651d1-cadf-4a5c-bed8-6ff2107774f8"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:34:56.450305 master-0 kubenswrapper[4041]: I0320 08:34:56.450095 4041 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "669651d1-cadf-4a5c-bed8-6ff2107774f8" (UID: "669651d1-cadf-4a5c-bed8-6ff2107774f8"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:34:56.450305 master-0 kubenswrapper[4041]: I0320 08:34:56.449989 4041 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/669651d1-cadf-4a5c-bed8-6ff2107774f8-env-overrides\") pod \"669651d1-cadf-4a5c-bed8-6ff2107774f8\" (UID: \"669651d1-cadf-4a5c-bed8-6ff2107774f8\") " Mar 20 08:34:56.450305 master-0 kubenswrapper[4041]: I0320 08:34:56.450186 4041 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-host-slash\") pod \"669651d1-cadf-4a5c-bed8-6ff2107774f8\" (UID: \"669651d1-cadf-4a5c-bed8-6ff2107774f8\") " Mar 20 08:34:56.450305 master-0 kubenswrapper[4041]: I0320 08:34:56.450227 4041 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "669651d1-cadf-4a5c-bed8-6ff2107774f8" (UID: "669651d1-cadf-4a5c-bed8-6ff2107774f8"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:34:56.451546 master-0 kubenswrapper[4041]: I0320 08:34:56.450336 4041 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "669651d1-cadf-4a5c-bed8-6ff2107774f8" (UID: "669651d1-cadf-4a5c-bed8-6ff2107774f8"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:34:56.451546 master-0 kubenswrapper[4041]: I0320 08:34:56.450343 4041 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/669651d1-cadf-4a5c-bed8-6ff2107774f8-ovnkube-config\") pod \"669651d1-cadf-4a5c-bed8-6ff2107774f8\" (UID: \"669651d1-cadf-4a5c-bed8-6ff2107774f8\") " Mar 20 08:34:56.451546 master-0 kubenswrapper[4041]: I0320 08:34:56.450424 4041 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-host-run-netns\") pod \"669651d1-cadf-4a5c-bed8-6ff2107774f8\" (UID: \"669651d1-cadf-4a5c-bed8-6ff2107774f8\") " Mar 20 08:34:56.451546 master-0 kubenswrapper[4041]: I0320 08:34:56.450364 4041 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-host-slash" (OuterVolumeSpecName: "host-slash") pod "669651d1-cadf-4a5c-bed8-6ff2107774f8" (UID: "669651d1-cadf-4a5c-bed8-6ff2107774f8"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:34:56.451546 master-0 kubenswrapper[4041]: I0320 08:34:56.450383 4041 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/669651d1-cadf-4a5c-bed8-6ff2107774f8-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "669651d1-cadf-4a5c-bed8-6ff2107774f8" (UID: "669651d1-cadf-4a5c-bed8-6ff2107774f8"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:34:56.451546 master-0 kubenswrapper[4041]: I0320 08:34:56.450387 4041 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "669651d1-cadf-4a5c-bed8-6ff2107774f8" (UID: "669651d1-cadf-4a5c-bed8-6ff2107774f8"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:34:56.451546 master-0 kubenswrapper[4041]: I0320 08:34:56.450471 4041 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-run-systemd\") pod \"669651d1-cadf-4a5c-bed8-6ff2107774f8\" (UID: \"669651d1-cadf-4a5c-bed8-6ff2107774f8\") " Mar 20 08:34:56.451546 master-0 kubenswrapper[4041]: I0320 08:34:56.450552 4041 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-host-cni-netd\") pod \"669651d1-cadf-4a5c-bed8-6ff2107774f8\" (UID: \"669651d1-cadf-4a5c-bed8-6ff2107774f8\") " Mar 20 08:34:56.451546 master-0 kubenswrapper[4041]: I0320 08:34:56.450579 4041 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-log-socket\") pod \"669651d1-cadf-4a5c-bed8-6ff2107774f8\" (UID: \"669651d1-cadf-4a5c-bed8-6ff2107774f8\") " Mar 20 08:34:56.451546 master-0 kubenswrapper[4041]: I0320 08:34:56.450600 4041 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-host-run-ovn-kubernetes\") pod \"669651d1-cadf-4a5c-bed8-6ff2107774f8\" (UID: \"669651d1-cadf-4a5c-bed8-6ff2107774f8\") " Mar 20 
08:34:56.451546 master-0 kubenswrapper[4041]: I0320 08:34:56.450621 4041 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-host-var-lib-cni-networks-ovn-kubernetes\") pod \"669651d1-cadf-4a5c-bed8-6ff2107774f8\" (UID: \"669651d1-cadf-4a5c-bed8-6ff2107774f8\") " Mar 20 08:34:56.451546 master-0 kubenswrapper[4041]: I0320 08:34:56.450642 4041 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-host-kubelet\") pod \"669651d1-cadf-4a5c-bed8-6ff2107774f8\" (UID: \"669651d1-cadf-4a5c-bed8-6ff2107774f8\") " Mar 20 08:34:56.451546 master-0 kubenswrapper[4041]: I0320 08:34:56.450669 4041 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/669651d1-cadf-4a5c-bed8-6ff2107774f8-ovn-node-metrics-cert\") pod \"669651d1-cadf-4a5c-bed8-6ff2107774f8\" (UID: \"669651d1-cadf-4a5c-bed8-6ff2107774f8\") " Mar 20 08:34:56.451546 master-0 kubenswrapper[4041]: I0320 08:34:56.450809 4041 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-systemd-units\") on node \"master-0\" DevicePath \"\"" Mar 20 08:34:56.451546 master-0 kubenswrapper[4041]: I0320 08:34:56.450822 4041 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-etc-openvswitch\") on node \"master-0\" DevicePath \"\"" Mar 20 08:34:56.451546 master-0 kubenswrapper[4041]: I0320 08:34:56.450835 4041 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-run-openvswitch\") on node \"master-0\" 
DevicePath \"\"" Mar 20 08:34:56.451546 master-0 kubenswrapper[4041]: I0320 08:34:56.450846 4041 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/669651d1-cadf-4a5c-bed8-6ff2107774f8-env-overrides\") on node \"master-0\" DevicePath \"\"" Mar 20 08:34:56.451546 master-0 kubenswrapper[4041]: I0320 08:34:56.450858 4041 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-host-slash\") on node \"master-0\" DevicePath \"\"" Mar 20 08:34:56.451546 master-0 kubenswrapper[4041]: I0320 08:34:56.450861 4041 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/669651d1-cadf-4a5c-bed8-6ff2107774f8-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "669651d1-cadf-4a5c-bed8-6ff2107774f8" (UID: "669651d1-cadf-4a5c-bed8-6ff2107774f8"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:34:56.452708 master-0 kubenswrapper[4041]: I0320 08:34:56.450844 4041 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/669651d1-cadf-4a5c-bed8-6ff2107774f8-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "669651d1-cadf-4a5c-bed8-6ff2107774f8" (UID: "669651d1-cadf-4a5c-bed8-6ff2107774f8"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:34:56.452708 master-0 kubenswrapper[4041]: I0320 08:34:56.450903 4041 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "669651d1-cadf-4a5c-bed8-6ff2107774f8" (UID: "669651d1-cadf-4a5c-bed8-6ff2107774f8"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:34:56.452708 master-0 kubenswrapper[4041]: I0320 08:34:56.450870 4041 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-host-cni-bin\") on node \"master-0\" DevicePath \"\"" Mar 20 08:34:56.452708 master-0 kubenswrapper[4041]: I0320 08:34:56.450947 4041 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "669651d1-cadf-4a5c-bed8-6ff2107774f8" (UID: "669651d1-cadf-4a5c-bed8-6ff2107774f8"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:34:56.452708 master-0 kubenswrapper[4041]: I0320 08:34:56.450969 4041 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-log-socket" (OuterVolumeSpecName: "log-socket") pod "669651d1-cadf-4a5c-bed8-6ff2107774f8" (UID: "669651d1-cadf-4a5c-bed8-6ff2107774f8"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:34:56.452708 master-0 kubenswrapper[4041]: I0320 08:34:56.450987 4041 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "669651d1-cadf-4a5c-bed8-6ff2107774f8" (UID: "669651d1-cadf-4a5c-bed8-6ff2107774f8"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:34:56.452708 master-0 kubenswrapper[4041]: I0320 08:34:56.450980 4041 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-var-lib-openvswitch\") on node \"master-0\" DevicePath \"\"" Mar 20 08:34:56.452708 master-0 kubenswrapper[4041]: I0320 08:34:56.451013 4041 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-run-ovn\") on node \"master-0\" DevicePath \"\"" Mar 20 08:34:56.452708 master-0 kubenswrapper[4041]: I0320 08:34:56.451031 4041 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "669651d1-cadf-4a5c-bed8-6ff2107774f8" (UID: "669651d1-cadf-4a5c-bed8-6ff2107774f8"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:34:56.452708 master-0 kubenswrapper[4041]: I0320 08:34:56.451033 4041 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-node-log\") on node \"master-0\" DevicePath \"\"" Mar 20 08:34:56.452708 master-0 kubenswrapper[4041]: I0320 08:34:56.451069 4041 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "669651d1-cadf-4a5c-bed8-6ff2107774f8" (UID: "669651d1-cadf-4a5c-bed8-6ff2107774f8"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:34:56.456771 master-0 kubenswrapper[4041]: I0320 08:34:56.456416 4041 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/669651d1-cadf-4a5c-bed8-6ff2107774f8-kube-api-access-6ss8m" (OuterVolumeSpecName: "kube-api-access-6ss8m") pod "669651d1-cadf-4a5c-bed8-6ff2107774f8" (UID: "669651d1-cadf-4a5c-bed8-6ff2107774f8"). InnerVolumeSpecName "kube-api-access-6ss8m". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:34:56.457552 master-0 kubenswrapper[4041]: I0320 08:34:56.457465 4041 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/669651d1-cadf-4a5c-bed8-6ff2107774f8-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "669651d1-cadf-4a5c-bed8-6ff2107774f8" (UID: "669651d1-cadf-4a5c-bed8-6ff2107774f8"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:34:56.463463 master-0 kubenswrapper[4041]: I0320 08:34:56.460046 4041 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "669651d1-cadf-4a5c-bed8-6ff2107774f8" (UID: "669651d1-cadf-4a5c-bed8-6ff2107774f8"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:34:56.501796 master-0 kubenswrapper[4041]: I0320 08:34:56.500688 4041 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-bvndl"] Mar 20 08:34:56.501796 master-0 kubenswrapper[4041]: E0320 08:34:56.500800 4041 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="669651d1-cadf-4a5c-bed8-6ff2107774f8" containerName="kubecfg-setup" Mar 20 08:34:56.501796 master-0 kubenswrapper[4041]: I0320 08:34:56.500816 4041 state_mem.go:107] "Deleted CPUSet assignment" podUID="669651d1-cadf-4a5c-bed8-6ff2107774f8" containerName="kubecfg-setup" Mar 20 08:34:56.501796 master-0 kubenswrapper[4041]: E0320 08:34:56.500826 4041 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="669651d1-cadf-4a5c-bed8-6ff2107774f8" containerName="kube-rbac-proxy-ovn-metrics" Mar 20 08:34:56.501796 master-0 kubenswrapper[4041]: I0320 08:34:56.500835 4041 state_mem.go:107] "Deleted CPUSet assignment" podUID="669651d1-cadf-4a5c-bed8-6ff2107774f8" containerName="kube-rbac-proxy-ovn-metrics" Mar 20 08:34:56.501796 master-0 kubenswrapper[4041]: E0320 08:34:56.500844 4041 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="669651d1-cadf-4a5c-bed8-6ff2107774f8" containerName="ovnkube-controller" Mar 20 08:34:56.501796 master-0 kubenswrapper[4041]: I0320 08:34:56.500852 4041 state_mem.go:107] "Deleted CPUSet assignment" podUID="669651d1-cadf-4a5c-bed8-6ff2107774f8" containerName="ovnkube-controller" Mar 20 08:34:56.501796 master-0 kubenswrapper[4041]: E0320 08:34:56.500861 4041 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="669651d1-cadf-4a5c-bed8-6ff2107774f8" containerName="ovn-acl-logging" Mar 20 08:34:56.501796 master-0 kubenswrapper[4041]: I0320 08:34:56.500871 4041 state_mem.go:107] "Deleted CPUSet assignment" podUID="669651d1-cadf-4a5c-bed8-6ff2107774f8" containerName="ovn-acl-logging" Mar 20 08:34:56.501796 master-0 kubenswrapper[4041]: 
E0320 08:34:56.500880 4041 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="669651d1-cadf-4a5c-bed8-6ff2107774f8" containerName="nbdb" Mar 20 08:34:56.501796 master-0 kubenswrapper[4041]: I0320 08:34:56.500888 4041 state_mem.go:107] "Deleted CPUSet assignment" podUID="669651d1-cadf-4a5c-bed8-6ff2107774f8" containerName="nbdb" Mar 20 08:34:56.501796 master-0 kubenswrapper[4041]: E0320 08:34:56.500898 4041 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="669651d1-cadf-4a5c-bed8-6ff2107774f8" containerName="ovn-controller" Mar 20 08:34:56.501796 master-0 kubenswrapper[4041]: I0320 08:34:56.500906 4041 state_mem.go:107] "Deleted CPUSet assignment" podUID="669651d1-cadf-4a5c-bed8-6ff2107774f8" containerName="ovn-controller" Mar 20 08:34:56.501796 master-0 kubenswrapper[4041]: E0320 08:34:56.500914 4041 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="669651d1-cadf-4a5c-bed8-6ff2107774f8" containerName="kube-rbac-proxy-node" Mar 20 08:34:56.501796 master-0 kubenswrapper[4041]: I0320 08:34:56.500922 4041 state_mem.go:107] "Deleted CPUSet assignment" podUID="669651d1-cadf-4a5c-bed8-6ff2107774f8" containerName="kube-rbac-proxy-node" Mar 20 08:34:56.501796 master-0 kubenswrapper[4041]: E0320 08:34:56.500932 4041 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="669651d1-cadf-4a5c-bed8-6ff2107774f8" containerName="sbdb" Mar 20 08:34:56.501796 master-0 kubenswrapper[4041]: I0320 08:34:56.500940 4041 state_mem.go:107] "Deleted CPUSet assignment" podUID="669651d1-cadf-4a5c-bed8-6ff2107774f8" containerName="sbdb" Mar 20 08:34:56.501796 master-0 kubenswrapper[4041]: E0320 08:34:56.500949 4041 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="669651d1-cadf-4a5c-bed8-6ff2107774f8" containerName="northd" Mar 20 08:34:56.501796 master-0 kubenswrapper[4041]: I0320 08:34:56.500957 4041 state_mem.go:107] "Deleted CPUSet assignment" podUID="669651d1-cadf-4a5c-bed8-6ff2107774f8" containerName="northd" 
Mar 20 08:34:56.501796 master-0 kubenswrapper[4041]: I0320 08:34:56.501000 4041 memory_manager.go:354] "RemoveStaleState removing state" podUID="669651d1-cadf-4a5c-bed8-6ff2107774f8" containerName="nbdb" Mar 20 08:34:56.501796 master-0 kubenswrapper[4041]: I0320 08:34:56.501009 4041 memory_manager.go:354] "RemoveStaleState removing state" podUID="669651d1-cadf-4a5c-bed8-6ff2107774f8" containerName="sbdb" Mar 20 08:34:56.501796 master-0 kubenswrapper[4041]: I0320 08:34:56.501019 4041 memory_manager.go:354] "RemoveStaleState removing state" podUID="669651d1-cadf-4a5c-bed8-6ff2107774f8" containerName="kube-rbac-proxy-node" Mar 20 08:34:56.501796 master-0 kubenswrapper[4041]: I0320 08:34:56.501027 4041 memory_manager.go:354] "RemoveStaleState removing state" podUID="669651d1-cadf-4a5c-bed8-6ff2107774f8" containerName="kube-rbac-proxy-ovn-metrics" Mar 20 08:34:56.501796 master-0 kubenswrapper[4041]: I0320 08:34:56.501035 4041 memory_manager.go:354] "RemoveStaleState removing state" podUID="669651d1-cadf-4a5c-bed8-6ff2107774f8" containerName="ovnkube-controller" Mar 20 08:34:56.501796 master-0 kubenswrapper[4041]: I0320 08:34:56.501043 4041 memory_manager.go:354] "RemoveStaleState removing state" podUID="669651d1-cadf-4a5c-bed8-6ff2107774f8" containerName="ovn-controller" Mar 20 08:34:56.501796 master-0 kubenswrapper[4041]: I0320 08:34:56.501052 4041 memory_manager.go:354] "RemoveStaleState removing state" podUID="669651d1-cadf-4a5c-bed8-6ff2107774f8" containerName="ovn-acl-logging" Mar 20 08:34:56.501796 master-0 kubenswrapper[4041]: I0320 08:34:56.501060 4041 memory_manager.go:354] "RemoveStaleState removing state" podUID="669651d1-cadf-4a5c-bed8-6ff2107774f8" containerName="northd" Mar 20 08:34:56.501796 master-0 kubenswrapper[4041]: I0320 08:34:56.501828 4041 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:34:56.535100 master-0 kubenswrapper[4041]: I0320 08:34:56.534827 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nfrth" Mar 20 08:34:56.535100 master-0 kubenswrapper[4041]: E0320 08:34:56.534997 4041 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nfrth" podUID="00350ac7-b40a-4459-b94c-a37d7b613645" Mar 20 08:34:56.536626 master-0 kubenswrapper[4041]: I0320 08:34:56.536544 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-j9jjm" Mar 20 08:34:56.536864 master-0 kubenswrapper[4041]: E0320 08:34:56.536802 4041 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-j9jjm" podUID="ca6e644f-c53b-41dd-a16f-9fb9997533dd" Mar 20 08:34:56.552381 master-0 kubenswrapper[4041]: I0320 08:34:56.552283 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-systemd-units\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:34:56.552381 master-0 kubenswrapper[4041]: I0320 08:34:56.552371 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-node-log\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:34:56.552797 master-0 kubenswrapper[4041]: I0320 08:34:56.552431 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-env-overrides\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:34:56.552797 master-0 kubenswrapper[4041]: I0320 08:34:56.552475 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-ovnkube-config\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:34:56.552797 master-0 kubenswrapper[4041]: I0320 08:34:56.552630 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-ovn-node-metrics-cert\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:34:56.552797 master-0 kubenswrapper[4041]: I0320 08:34:56.552701 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-run-openvswitch\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:34:56.552797 master-0 kubenswrapper[4041]: I0320 08:34:56.552738 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-host-kubelet\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:34:56.552797 master-0 kubenswrapper[4041]: I0320 08:34:56.552771 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-host-cni-netd\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:34:56.553341 master-0 kubenswrapper[4041]: I0320 08:34:56.552870 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-run-ovn\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:34:56.553341 master-0 kubenswrapper[4041]: I0320 08:34:56.552906 4041 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-host-run-netns\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:34:56.553341 master-0 kubenswrapper[4041]: I0320 08:34:56.552940 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-var-lib-openvswitch\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:34:56.553341 master-0 kubenswrapper[4041]: I0320 08:34:56.552987 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-host-slash\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:34:56.553341 master-0 kubenswrapper[4041]: I0320 08:34:56.553016 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-log-socket\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:34:56.553341 master-0 kubenswrapper[4041]: I0320 08:34:56.553055 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-run-systemd\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:34:56.553341 master-0 
kubenswrapper[4041]: I0320 08:34:56.553158 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-ovnkube-script-lib\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:34:56.553341 master-0 kubenswrapper[4041]: I0320 08:34:56.553222 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2j6m\" (UniqueName: \"kubernetes.io/projected/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-kube-api-access-s2j6m\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:34:56.554073 master-0 kubenswrapper[4041]: I0320 08:34:56.553375 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-etc-openvswitch\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:34:56.554073 master-0 kubenswrapper[4041]: I0320 08:34:56.553433 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:34:56.554073 master-0 kubenswrapper[4041]: I0320 08:34:56.553504 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-host-run-ovn-kubernetes\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:34:56.554073 master-0 kubenswrapper[4041]: I0320 08:34:56.553566 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-host-cni-bin\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:34:56.554073 master-0 kubenswrapper[4041]: I0320 08:34:56.553662 4041 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-host-cni-netd\") on node \"master-0\" DevicePath \"\"" Mar 20 08:34:56.554073 master-0 kubenswrapper[4041]: I0320 08:34:56.553696 4041 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-log-socket\") on node \"master-0\" DevicePath \"\"" Mar 20 08:34:56.554073 master-0 kubenswrapper[4041]: I0320 08:34:56.553728 4041 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-host-var-lib-cni-networks-ovn-kubernetes\") on node \"master-0\" DevicePath \"\"" Mar 20 08:34:56.554073 master-0 kubenswrapper[4041]: I0320 08:34:56.553758 4041 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-host-kubelet\") on node \"master-0\" DevicePath \"\"" Mar 20 08:34:56.554073 master-0 kubenswrapper[4041]: I0320 08:34:56.553789 4041 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-host-run-ovn-kubernetes\") on node \"master-0\" DevicePath \"\"" Mar 20 08:34:56.554073 master-0 kubenswrapper[4041]: I0320 08:34:56.553817 4041 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/669651d1-cadf-4a5c-bed8-6ff2107774f8-ovn-node-metrics-cert\") on node \"master-0\" DevicePath \"\"" Mar 20 08:34:56.554073 master-0 kubenswrapper[4041]: I0320 08:34:56.553846 4041 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ss8m\" (UniqueName: \"kubernetes.io/projected/669651d1-cadf-4a5c-bed8-6ff2107774f8-kube-api-access-6ss8m\") on node \"master-0\" DevicePath \"\"" Mar 20 08:34:56.554073 master-0 kubenswrapper[4041]: I0320 08:34:56.553877 4041 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/669651d1-cadf-4a5c-bed8-6ff2107774f8-ovnkube-script-lib\") on node \"master-0\" DevicePath \"\"" Mar 20 08:34:56.554073 master-0 kubenswrapper[4041]: I0320 08:34:56.553910 4041 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/669651d1-cadf-4a5c-bed8-6ff2107774f8-ovnkube-config\") on node \"master-0\" DevicePath \"\"" Mar 20 08:34:56.554073 master-0 kubenswrapper[4041]: I0320 08:34:56.553936 4041 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-host-run-netns\") on node \"master-0\" DevicePath \"\"" Mar 20 08:34:56.554073 master-0 kubenswrapper[4041]: I0320 08:34:56.553961 4041 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/669651d1-cadf-4a5c-bed8-6ff2107774f8-run-systemd\") on node \"master-0\" DevicePath \"\"" Mar 20 08:34:56.654647 master-0 kubenswrapper[4041]: I0320 08:34:56.654470 4041 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-run-ovn\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:34:56.654647 master-0 kubenswrapper[4041]: I0320 08:34:56.654536 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-host-run-netns\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:34:56.654647 master-0 kubenswrapper[4041]: I0320 08:34:56.654572 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-var-lib-openvswitch\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:34:56.655028 master-0 kubenswrapper[4041]: I0320 08:34:56.654696 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-host-run-netns\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:34:56.655028 master-0 kubenswrapper[4041]: I0320 08:34:56.654750 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-log-socket\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:34:56.655028 master-0 kubenswrapper[4041]: I0320 08:34:56.654786 4041 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-host-slash\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:34:56.655028 master-0 kubenswrapper[4041]: I0320 08:34:56.654848 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-log-socket\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:34:56.655028 master-0 kubenswrapper[4041]: I0320 08:34:56.654907 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-var-lib-openvswitch\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:34:56.655028 master-0 kubenswrapper[4041]: I0320 08:34:56.654968 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-run-systemd\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:34:56.655440 master-0 kubenswrapper[4041]: I0320 08:34:56.655045 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-run-systemd\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:34:56.655440 master-0 kubenswrapper[4041]: I0320 08:34:56.654922 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-run-ovn\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:34:56.655440 master-0 kubenswrapper[4041]: I0320 08:34:56.655088 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2j6m\" (UniqueName: \"kubernetes.io/projected/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-kube-api-access-s2j6m\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:34:56.655440 master-0 kubenswrapper[4041]: I0320 08:34:56.655175 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-ovnkube-script-lib\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:34:56.655440 master-0 kubenswrapper[4041]: I0320 08:34:56.655090 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-host-slash\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:34:56.655440 master-0 kubenswrapper[4041]: I0320 08:34:56.655252 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-etc-openvswitch\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:34:56.655440 master-0 kubenswrapper[4041]: I0320 08:34:56.655339 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" 
(UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:34:56.655440 master-0 kubenswrapper[4041]: I0320 08:34:56.655384 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-host-cni-bin\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:34:56.655440 master-0 kubenswrapper[4041]: I0320 08:34:56.655434 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-host-run-ovn-kubernetes\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:34:56.656060 master-0 kubenswrapper[4041]: I0320 08:34:56.655480 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-node-log\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:34:56.656060 master-0 kubenswrapper[4041]: I0320 08:34:56.655529 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:34:56.656060 master-0 kubenswrapper[4041]: I0320 08:34:56.655525 4041 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-env-overrides\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:34:56.656060 master-0 kubenswrapper[4041]: I0320 08:34:56.655632 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-node-log\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:34:56.656060 master-0 kubenswrapper[4041]: I0320 08:34:56.655680 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-systemd-units\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:34:56.656060 master-0 kubenswrapper[4041]: I0320 08:34:56.655694 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-host-cni-bin\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:34:56.656060 master-0 kubenswrapper[4041]: I0320 08:34:56.655735 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-etc-openvswitch\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:34:56.656060 master-0 kubenswrapper[4041]: I0320 08:34:56.655705 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-host-run-ovn-kubernetes\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:34:56.656060 master-0 kubenswrapper[4041]: I0320 08:34:56.655750 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-ovnkube-config\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:34:56.656060 master-0 kubenswrapper[4041]: I0320 08:34:56.655856 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-ovn-node-metrics-cert\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:34:56.656060 master-0 kubenswrapper[4041]: I0320 08:34:56.655909 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-run-openvswitch\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:34:56.656060 master-0 kubenswrapper[4041]: I0320 08:34:56.655949 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-host-kubelet\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:34:56.656060 master-0 kubenswrapper[4041]: I0320 08:34:56.655982 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-host-cni-netd\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:34:56.656060 master-0 kubenswrapper[4041]: I0320 08:34:56.655984 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-systemd-units\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:34:56.656060 master-0 kubenswrapper[4041]: I0320 08:34:56.656056 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-host-cni-netd\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:34:56.657249 master-0 kubenswrapper[4041]: I0320 08:34:56.656132 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-run-openvswitch\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:34:56.657249 master-0 kubenswrapper[4041]: I0320 08:34:56.656356 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-host-kubelet\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:34:56.657249 master-0 kubenswrapper[4041]: I0320 08:34:56.656800 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-ovnkube-script-lib\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:34:56.657249 master-0 kubenswrapper[4041]: I0320 08:34:56.657161 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-ovnkube-config\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:34:56.657793 master-0 kubenswrapper[4041]: I0320 08:34:56.657432 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-env-overrides\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:34:56.664109 master-0 kubenswrapper[4041]: I0320 08:34:56.664037 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-ovn-node-metrics-cert\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:34:56.685025 master-0 kubenswrapper[4041]: I0320 08:34:56.684944 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2j6m\" (UniqueName: \"kubernetes.io/projected/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-kube-api-access-s2j6m\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:34:56.824574 master-0 kubenswrapper[4041]: I0320 08:34:56.824509 4041 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:34:57.145412 master-0 kubenswrapper[4041]: I0320 08:34:57.145298 4041 generic.go:334] "Generic (PLEG): container finished" podID="d26a4fce-8eed-44d0-96a3-40ffd0b336a6" containerID="c61822f24caad65a896a136b258da1c07b65503ea37e7992a32f53bc007f40ea" exitCode=0 Mar 20 08:34:57.145412 master-0 kubenswrapper[4041]: I0320 08:34:57.145341 4041 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" event={"ID":"d26a4fce-8eed-44d0-96a3-40ffd0b336a6","Type":"ContainerDied","Data":"c61822f24caad65a896a136b258da1c07b65503ea37e7992a32f53bc007f40ea"} Mar 20 08:34:57.149799 master-0 kubenswrapper[4041]: I0320 08:34:57.145446 4041 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" event={"ID":"d26a4fce-8eed-44d0-96a3-40ffd0b336a6","Type":"ContainerStarted","Data":"45bf1a9ecebdad7d1d939a42ed79f1d565faa93da259016b6c3e11a9010e1c03"} Mar 20 08:34:57.159169 master-0 kubenswrapper[4041]: I0320 08:34:57.159081 4041 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-x7vrg" event={"ID":"22ff82cf-0d7d-4955-9b7c-97757acbc021","Type":"ContainerStarted","Data":"5283e40c5cc77cdb39d96a842e1d4a3b90fa78d7cd6f57c6b779fa0e23ddfd45"} Mar 20 08:34:57.163324 master-0 kubenswrapper[4041]: I0320 08:34:57.163237 4041 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-87r4t_669651d1-cadf-4a5c-bed8-6ff2107774f8/ovnkube-controller/0.log" Mar 20 08:34:57.168538 master-0 kubenswrapper[4041]: I0320 08:34:57.166723 4041 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-87r4t_669651d1-cadf-4a5c-bed8-6ff2107774f8/kube-rbac-proxy-ovn-metrics/0.log" Mar 20 08:34:57.169885 master-0 kubenswrapper[4041]: I0320 08:34:57.169832 4041 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-87r4t_669651d1-cadf-4a5c-bed8-6ff2107774f8/kube-rbac-proxy-node/0.log" Mar 20 08:34:57.170829 master-0 kubenswrapper[4041]: I0320 08:34:57.170771 4041 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-87r4t_669651d1-cadf-4a5c-bed8-6ff2107774f8/ovn-acl-logging/0.log" Mar 20 08:34:57.171792 master-0 kubenswrapper[4041]: I0320 08:34:57.171737 4041 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-87r4t_669651d1-cadf-4a5c-bed8-6ff2107774f8/ovn-controller/0.log" Mar 20 08:34:57.172531 master-0 kubenswrapper[4041]: I0320 08:34:57.172418 4041 generic.go:334] "Generic (PLEG): container finished" podID="669651d1-cadf-4a5c-bed8-6ff2107774f8" containerID="2eb2017a2994f831033d2232c2f0c55ec60ee11e7a5841d1158c8dc98a064f25" exitCode=2 Mar 20 08:34:57.172531 master-0 kubenswrapper[4041]: I0320 08:34:57.172480 4041 generic.go:334] "Generic (PLEG): container finished" podID="669651d1-cadf-4a5c-bed8-6ff2107774f8" containerID="688d6f88f24a295dfffbab4712080c913cab447392ca97ab4f10a450098c9db7" exitCode=0 Mar 20 08:34:57.172531 master-0 kubenswrapper[4041]: I0320 08:34:57.172519 4041 generic.go:334] "Generic (PLEG): container finished" podID="669651d1-cadf-4a5c-bed8-6ff2107774f8" containerID="6f2f0260b2d0745477681ebf3c6bf6ff162a5ba0fb12ca8c5656caa8d62aff19" exitCode=0 Mar 20 08:34:57.172709 master-0 kubenswrapper[4041]: I0320 08:34:57.172542 4041 generic.go:334] "Generic (PLEG): container finished" podID="669651d1-cadf-4a5c-bed8-6ff2107774f8" containerID="4d65e10a1397dda90a3f09b3185de8aba8a60549930df2407ef2e9d35c63a506" exitCode=0 Mar 20 08:34:57.172709 master-0 kubenswrapper[4041]: I0320 08:34:57.172564 4041 generic.go:334] "Generic (PLEG): container finished" podID="669651d1-cadf-4a5c-bed8-6ff2107774f8" containerID="1d774aa342629f7fdb30a968bab519fd61ee4233479481d4a4bafcb2dc9bf15b" exitCode=143 Mar 20 08:34:57.172709 master-0 
kubenswrapper[4041]: I0320 08:34:57.172483 4041 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87r4t" event={"ID":"669651d1-cadf-4a5c-bed8-6ff2107774f8","Type":"ContainerDied","Data":"2eb2017a2994f831033d2232c2f0c55ec60ee11e7a5841d1158c8dc98a064f25"} Mar 20 08:34:57.172709 master-0 kubenswrapper[4041]: I0320 08:34:57.172621 4041 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-87r4t" Mar 20 08:34:57.172709 master-0 kubenswrapper[4041]: I0320 08:34:57.172651 4041 scope.go:117] "RemoveContainer" containerID="2eb2017a2994f831033d2232c2f0c55ec60ee11e7a5841d1158c8dc98a064f25" Mar 20 08:34:57.172709 master-0 kubenswrapper[4041]: I0320 08:34:57.172586 4041 generic.go:334] "Generic (PLEG): container finished" podID="669651d1-cadf-4a5c-bed8-6ff2107774f8" containerID="ffcb70bf7040080fb961520354fdd652be8ba5f4894e64db3333169f16167de6" exitCode=143 Mar 20 08:34:57.173077 master-0 kubenswrapper[4041]: I0320 08:34:57.172732 4041 generic.go:334] "Generic (PLEG): container finished" podID="669651d1-cadf-4a5c-bed8-6ff2107774f8" containerID="027ee159cf4c84c1249b8b3a8b23bb814d8ae496d4ab9df979f3560b39dc2d79" exitCode=143 Mar 20 08:34:57.173077 master-0 kubenswrapper[4041]: I0320 08:34:57.172755 4041 generic.go:334] "Generic (PLEG): container finished" podID="669651d1-cadf-4a5c-bed8-6ff2107774f8" containerID="5bfcae3f91be7fe1ddfb8a13551bb3dff32c9a636de438a89b4efe379f02514f" exitCode=143 Mar 20 08:34:57.173077 master-0 kubenswrapper[4041]: I0320 08:34:57.172628 4041 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87r4t" event={"ID":"669651d1-cadf-4a5c-bed8-6ff2107774f8","Type":"ContainerDied","Data":"688d6f88f24a295dfffbab4712080c913cab447392ca97ab4f10a450098c9db7"} Mar 20 08:34:57.173077 master-0 kubenswrapper[4041]: I0320 08:34:57.172919 4041 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-87r4t" event={"ID":"669651d1-cadf-4a5c-bed8-6ff2107774f8","Type":"ContainerDied","Data":"6f2f0260b2d0745477681ebf3c6bf6ff162a5ba0fb12ca8c5656caa8d62aff19"} Mar 20 08:34:57.173077 master-0 kubenswrapper[4041]: I0320 08:34:57.172960 4041 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87r4t" event={"ID":"669651d1-cadf-4a5c-bed8-6ff2107774f8","Type":"ContainerDied","Data":"4d65e10a1397dda90a3f09b3185de8aba8a60549930df2407ef2e9d35c63a506"} Mar 20 08:34:57.173077 master-0 kubenswrapper[4041]: I0320 08:34:57.172991 4041 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87r4t" event={"ID":"669651d1-cadf-4a5c-bed8-6ff2107774f8","Type":"ContainerDied","Data":"1d774aa342629f7fdb30a968bab519fd61ee4233479481d4a4bafcb2dc9bf15b"} Mar 20 08:34:57.173077 master-0 kubenswrapper[4041]: I0320 08:34:57.173017 4041 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87r4t" event={"ID":"669651d1-cadf-4a5c-bed8-6ff2107774f8","Type":"ContainerDied","Data":"ffcb70bf7040080fb961520354fdd652be8ba5f4894e64db3333169f16167de6"} Mar 20 08:34:57.173485 master-0 kubenswrapper[4041]: I0320 08:34:57.173044 4041 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"027ee159cf4c84c1249b8b3a8b23bb814d8ae496d4ab9df979f3560b39dc2d79"} Mar 20 08:34:57.173485 master-0 kubenswrapper[4041]: I0320 08:34:57.173387 4041 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5bfcae3f91be7fe1ddfb8a13551bb3dff32c9a636de438a89b4efe379f02514f"} Mar 20 08:34:57.173485 master-0 kubenswrapper[4041]: I0320 08:34:57.173410 4041 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9d8b5757186b0abb0356c7dc8c0950266458da6c9fd6de30efea532b963f1eee"} Mar 20 08:34:57.173485 
master-0 kubenswrapper[4041]: I0320 08:34:57.173435 4041 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87r4t" event={"ID":"669651d1-cadf-4a5c-bed8-6ff2107774f8","Type":"ContainerDied","Data":"027ee159cf4c84c1249b8b3a8b23bb814d8ae496d4ab9df979f3560b39dc2d79"} Mar 20 08:34:57.173485 master-0 kubenswrapper[4041]: I0320 08:34:57.173461 4041 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2eb2017a2994f831033d2232c2f0c55ec60ee11e7a5841d1158c8dc98a064f25"} Mar 20 08:34:57.173485 master-0 kubenswrapper[4041]: I0320 08:34:57.173482 4041 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"688d6f88f24a295dfffbab4712080c913cab447392ca97ab4f10a450098c9db7"} Mar 20 08:34:57.173829 master-0 kubenswrapper[4041]: I0320 08:34:57.173502 4041 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6f2f0260b2d0745477681ebf3c6bf6ff162a5ba0fb12ca8c5656caa8d62aff19"} Mar 20 08:34:57.173829 master-0 kubenswrapper[4041]: I0320 08:34:57.173518 4041 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4d65e10a1397dda90a3f09b3185de8aba8a60549930df2407ef2e9d35c63a506"} Mar 20 08:34:57.173829 master-0 kubenswrapper[4041]: I0320 08:34:57.173534 4041 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1d774aa342629f7fdb30a968bab519fd61ee4233479481d4a4bafcb2dc9bf15b"} Mar 20 08:34:57.173829 master-0 kubenswrapper[4041]: I0320 08:34:57.173549 4041 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ffcb70bf7040080fb961520354fdd652be8ba5f4894e64db3333169f16167de6"} Mar 20 08:34:57.173829 master-0 kubenswrapper[4041]: I0320 08:34:57.173565 4041 pod_container_deletor.go:114] "Failed to issue 
the request to remove container" containerID={"Type":"cri-o","ID":"027ee159cf4c84c1249b8b3a8b23bb814d8ae496d4ab9df979f3560b39dc2d79"} Mar 20 08:34:57.173829 master-0 kubenswrapper[4041]: I0320 08:34:57.173581 4041 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5bfcae3f91be7fe1ddfb8a13551bb3dff32c9a636de438a89b4efe379f02514f"} Mar 20 08:34:57.173829 master-0 kubenswrapper[4041]: I0320 08:34:57.173597 4041 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9d8b5757186b0abb0356c7dc8c0950266458da6c9fd6de30efea532b963f1eee"} Mar 20 08:34:57.173829 master-0 kubenswrapper[4041]: I0320 08:34:57.173618 4041 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87r4t" event={"ID":"669651d1-cadf-4a5c-bed8-6ff2107774f8","Type":"ContainerDied","Data":"5bfcae3f91be7fe1ddfb8a13551bb3dff32c9a636de438a89b4efe379f02514f"} Mar 20 08:34:57.173829 master-0 kubenswrapper[4041]: I0320 08:34:57.173644 4041 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2eb2017a2994f831033d2232c2f0c55ec60ee11e7a5841d1158c8dc98a064f25"} Mar 20 08:34:57.173829 master-0 kubenswrapper[4041]: I0320 08:34:57.173662 4041 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"688d6f88f24a295dfffbab4712080c913cab447392ca97ab4f10a450098c9db7"} Mar 20 08:34:57.173829 master-0 kubenswrapper[4041]: I0320 08:34:57.173678 4041 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6f2f0260b2d0745477681ebf3c6bf6ff162a5ba0fb12ca8c5656caa8d62aff19"} Mar 20 08:34:57.173829 master-0 kubenswrapper[4041]: I0320 08:34:57.173693 4041 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"4d65e10a1397dda90a3f09b3185de8aba8a60549930df2407ef2e9d35c63a506"} Mar 20 08:34:57.173829 master-0 kubenswrapper[4041]: I0320 08:34:57.173708 4041 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1d774aa342629f7fdb30a968bab519fd61ee4233479481d4a4bafcb2dc9bf15b"} Mar 20 08:34:57.173829 master-0 kubenswrapper[4041]: I0320 08:34:57.173723 4041 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ffcb70bf7040080fb961520354fdd652be8ba5f4894e64db3333169f16167de6"} Mar 20 08:34:57.173829 master-0 kubenswrapper[4041]: I0320 08:34:57.173739 4041 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"027ee159cf4c84c1249b8b3a8b23bb814d8ae496d4ab9df979f3560b39dc2d79"} Mar 20 08:34:57.173829 master-0 kubenswrapper[4041]: I0320 08:34:57.173755 4041 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5bfcae3f91be7fe1ddfb8a13551bb3dff32c9a636de438a89b4efe379f02514f"} Mar 20 08:34:57.173829 master-0 kubenswrapper[4041]: I0320 08:34:57.173770 4041 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9d8b5757186b0abb0356c7dc8c0950266458da6c9fd6de30efea532b963f1eee"} Mar 20 08:34:57.173829 master-0 kubenswrapper[4041]: I0320 08:34:57.173792 4041 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-87r4t" event={"ID":"669651d1-cadf-4a5c-bed8-6ff2107774f8","Type":"ContainerDied","Data":"09c3dbf98f029b599be8467f2ba68d9a6d218a6934b14ce9b4906e51874a4088"} Mar 20 08:34:57.173829 master-0 kubenswrapper[4041]: I0320 08:34:57.173816 4041 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2eb2017a2994f831033d2232c2f0c55ec60ee11e7a5841d1158c8dc98a064f25"} Mar 
20 08:34:57.173829 master-0 kubenswrapper[4041]: I0320 08:34:57.173837 4041 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"688d6f88f24a295dfffbab4712080c913cab447392ca97ab4f10a450098c9db7"} Mar 20 08:34:57.173829 master-0 kubenswrapper[4041]: I0320 08:34:57.173857 4041 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6f2f0260b2d0745477681ebf3c6bf6ff162a5ba0fb12ca8c5656caa8d62aff19"} Mar 20 08:34:57.173829 master-0 kubenswrapper[4041]: I0320 08:34:57.173875 4041 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4d65e10a1397dda90a3f09b3185de8aba8a60549930df2407ef2e9d35c63a506"} Mar 20 08:34:57.175016 master-0 kubenswrapper[4041]: I0320 08:34:57.173892 4041 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1d774aa342629f7fdb30a968bab519fd61ee4233479481d4a4bafcb2dc9bf15b"} Mar 20 08:34:57.175016 master-0 kubenswrapper[4041]: I0320 08:34:57.173910 4041 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ffcb70bf7040080fb961520354fdd652be8ba5f4894e64db3333169f16167de6"} Mar 20 08:34:57.175016 master-0 kubenswrapper[4041]: I0320 08:34:57.173927 4041 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"027ee159cf4c84c1249b8b3a8b23bb814d8ae496d4ab9df979f3560b39dc2d79"} Mar 20 08:34:57.175016 master-0 kubenswrapper[4041]: I0320 08:34:57.173942 4041 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5bfcae3f91be7fe1ddfb8a13551bb3dff32c9a636de438a89b4efe379f02514f"} Mar 20 08:34:57.175016 master-0 kubenswrapper[4041]: I0320 08:34:57.173957 4041 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"9d8b5757186b0abb0356c7dc8c0950266458da6c9fd6de30efea532b963f1eee"} Mar 20 08:34:57.205281 master-0 kubenswrapper[4041]: I0320 08:34:57.205209 4041 scope.go:117] "RemoveContainer" containerID="688d6f88f24a295dfffbab4712080c913cab447392ca97ab4f10a450098c9db7" Mar 20 08:34:57.216797 master-0 kubenswrapper[4041]: I0320 08:34:57.216694 4041 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-x7vrg" podStartSLOduration=4.341469426 podStartE2EDuration="46.216668854s" podCreationTimestamp="2026-03-20 08:34:11 +0000 UTC" firstStartedPulling="2026-03-20 08:34:12.153392754 +0000 UTC m=+59.403738299" lastFinishedPulling="2026-03-20 08:34:54.028592202 +0000 UTC m=+101.278937727" observedRunningTime="2026-03-20 08:34:57.214440746 +0000 UTC m=+104.464786331" watchObservedRunningTime="2026-03-20 08:34:57.216668854 +0000 UTC m=+104.467014369" Mar 20 08:34:57.233721 master-0 kubenswrapper[4041]: I0320 08:34:57.233670 4041 scope.go:117] "RemoveContainer" containerID="6f2f0260b2d0745477681ebf3c6bf6ff162a5ba0fb12ca8c5656caa8d62aff19" Mar 20 08:34:57.258493 master-0 kubenswrapper[4041]: I0320 08:34:57.258393 4041 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-87r4t"] Mar 20 08:34:57.258711 master-0 kubenswrapper[4041]: I0320 08:34:57.258680 4041 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-87r4t"] Mar 20 08:34:57.262605 master-0 kubenswrapper[4041]: I0320 08:34:57.262550 4041 scope.go:117] "RemoveContainer" containerID="4d65e10a1397dda90a3f09b3185de8aba8a60549930df2407ef2e9d35c63a506" Mar 20 08:34:57.275553 master-0 kubenswrapper[4041]: I0320 08:34:57.275505 4041 scope.go:117] "RemoveContainer" containerID="1d774aa342629f7fdb30a968bab519fd61ee4233479481d4a4bafcb2dc9bf15b" Mar 20 08:34:57.296331 master-0 kubenswrapper[4041]: I0320 08:34:57.296252 4041 scope.go:117] "RemoveContainer" 
containerID="ffcb70bf7040080fb961520354fdd652be8ba5f4894e64db3333169f16167de6" Mar 20 08:34:57.316483 master-0 kubenswrapper[4041]: I0320 08:34:57.316441 4041 scope.go:117] "RemoveContainer" containerID="027ee159cf4c84c1249b8b3a8b23bb814d8ae496d4ab9df979f3560b39dc2d79" Mar 20 08:34:57.331703 master-0 kubenswrapper[4041]: I0320 08:34:57.331668 4041 scope.go:117] "RemoveContainer" containerID="5bfcae3f91be7fe1ddfb8a13551bb3dff32c9a636de438a89b4efe379f02514f" Mar 20 08:34:57.351495 master-0 kubenswrapper[4041]: I0320 08:34:57.351168 4041 scope.go:117] "RemoveContainer" containerID="9d8b5757186b0abb0356c7dc8c0950266458da6c9fd6de30efea532b963f1eee" Mar 20 08:34:57.379322 master-0 kubenswrapper[4041]: I0320 08:34:57.379278 4041 scope.go:117] "RemoveContainer" containerID="2eb2017a2994f831033d2232c2f0c55ec60ee11e7a5841d1158c8dc98a064f25" Mar 20 08:34:57.379683 master-0 kubenswrapper[4041]: E0320 08:34:57.379649 4041 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2eb2017a2994f831033d2232c2f0c55ec60ee11e7a5841d1158c8dc98a064f25\": container with ID starting with 2eb2017a2994f831033d2232c2f0c55ec60ee11e7a5841d1158c8dc98a064f25 not found: ID does not exist" containerID="2eb2017a2994f831033d2232c2f0c55ec60ee11e7a5841d1158c8dc98a064f25" Mar 20 08:34:57.379753 master-0 kubenswrapper[4041]: I0320 08:34:57.379684 4041 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2eb2017a2994f831033d2232c2f0c55ec60ee11e7a5841d1158c8dc98a064f25"} err="failed to get container status \"2eb2017a2994f831033d2232c2f0c55ec60ee11e7a5841d1158c8dc98a064f25\": rpc error: code = NotFound desc = could not find container \"2eb2017a2994f831033d2232c2f0c55ec60ee11e7a5841d1158c8dc98a064f25\": container with ID starting with 2eb2017a2994f831033d2232c2f0c55ec60ee11e7a5841d1158c8dc98a064f25 not found: ID does not exist" Mar 20 08:34:57.379753 master-0 kubenswrapper[4041]: I0320 08:34:57.379707 
4041 scope.go:117] "RemoveContainer" containerID="688d6f88f24a295dfffbab4712080c913cab447392ca97ab4f10a450098c9db7" Mar 20 08:34:57.380070 master-0 kubenswrapper[4041]: E0320 08:34:57.380037 4041 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"688d6f88f24a295dfffbab4712080c913cab447392ca97ab4f10a450098c9db7\": container with ID starting with 688d6f88f24a295dfffbab4712080c913cab447392ca97ab4f10a450098c9db7 not found: ID does not exist" containerID="688d6f88f24a295dfffbab4712080c913cab447392ca97ab4f10a450098c9db7" Mar 20 08:34:57.380170 master-0 kubenswrapper[4041]: I0320 08:34:57.380069 4041 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"688d6f88f24a295dfffbab4712080c913cab447392ca97ab4f10a450098c9db7"} err="failed to get container status \"688d6f88f24a295dfffbab4712080c913cab447392ca97ab4f10a450098c9db7\": rpc error: code = NotFound desc = could not find container \"688d6f88f24a295dfffbab4712080c913cab447392ca97ab4f10a450098c9db7\": container with ID starting with 688d6f88f24a295dfffbab4712080c913cab447392ca97ab4f10a450098c9db7 not found: ID does not exist" Mar 20 08:34:57.380170 master-0 kubenswrapper[4041]: I0320 08:34:57.380090 4041 scope.go:117] "RemoveContainer" containerID="6f2f0260b2d0745477681ebf3c6bf6ff162a5ba0fb12ca8c5656caa8d62aff19" Mar 20 08:34:57.380491 master-0 kubenswrapper[4041]: E0320 08:34:57.380462 4041 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f2f0260b2d0745477681ebf3c6bf6ff162a5ba0fb12ca8c5656caa8d62aff19\": container with ID starting with 6f2f0260b2d0745477681ebf3c6bf6ff162a5ba0fb12ca8c5656caa8d62aff19 not found: ID does not exist" containerID="6f2f0260b2d0745477681ebf3c6bf6ff162a5ba0fb12ca8c5656caa8d62aff19" Mar 20 08:34:57.380491 master-0 kubenswrapper[4041]: I0320 08:34:57.380484 4041 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"6f2f0260b2d0745477681ebf3c6bf6ff162a5ba0fb12ca8c5656caa8d62aff19"} err="failed to get container status \"6f2f0260b2d0745477681ebf3c6bf6ff162a5ba0fb12ca8c5656caa8d62aff19\": rpc error: code = NotFound desc = could not find container \"6f2f0260b2d0745477681ebf3c6bf6ff162a5ba0fb12ca8c5656caa8d62aff19\": container with ID starting with 6f2f0260b2d0745477681ebf3c6bf6ff162a5ba0fb12ca8c5656caa8d62aff19 not found: ID does not exist" Mar 20 08:34:57.380604 master-0 kubenswrapper[4041]: I0320 08:34:57.380498 4041 scope.go:117] "RemoveContainer" containerID="4d65e10a1397dda90a3f09b3185de8aba8a60549930df2407ef2e9d35c63a506" Mar 20 08:34:57.380957 master-0 kubenswrapper[4041]: E0320 08:34:57.380911 4041 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d65e10a1397dda90a3f09b3185de8aba8a60549930df2407ef2e9d35c63a506\": container with ID starting with 4d65e10a1397dda90a3f09b3185de8aba8a60549930df2407ef2e9d35c63a506 not found: ID does not exist" containerID="4d65e10a1397dda90a3f09b3185de8aba8a60549930df2407ef2e9d35c63a506" Mar 20 08:34:57.381023 master-0 kubenswrapper[4041]: I0320 08:34:57.380980 4041 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d65e10a1397dda90a3f09b3185de8aba8a60549930df2407ef2e9d35c63a506"} err="failed to get container status \"4d65e10a1397dda90a3f09b3185de8aba8a60549930df2407ef2e9d35c63a506\": rpc error: code = NotFound desc = could not find container \"4d65e10a1397dda90a3f09b3185de8aba8a60549930df2407ef2e9d35c63a506\": container with ID starting with 4d65e10a1397dda90a3f09b3185de8aba8a60549930df2407ef2e9d35c63a506 not found: ID does not exist" Mar 20 08:34:57.381064 master-0 kubenswrapper[4041]: I0320 08:34:57.381034 4041 scope.go:117] "RemoveContainer" containerID="1d774aa342629f7fdb30a968bab519fd61ee4233479481d4a4bafcb2dc9bf15b" Mar 20 08:34:57.381597 master-0 kubenswrapper[4041]: 
E0320 08:34:57.381570 4041 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1d774aa342629f7fdb30a968bab519fd61ee4233479481d4a4bafcb2dc9bf15b\": container with ID starting with 1d774aa342629f7fdb30a968bab519fd61ee4233479481d4a4bafcb2dc9bf15b not found: ID does not exist" containerID="1d774aa342629f7fdb30a968bab519fd61ee4233479481d4a4bafcb2dc9bf15b" Mar 20 08:34:57.381647 master-0 kubenswrapper[4041]: I0320 08:34:57.381603 4041 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d774aa342629f7fdb30a968bab519fd61ee4233479481d4a4bafcb2dc9bf15b"} err="failed to get container status \"1d774aa342629f7fdb30a968bab519fd61ee4233479481d4a4bafcb2dc9bf15b\": rpc error: code = NotFound desc = could not find container \"1d774aa342629f7fdb30a968bab519fd61ee4233479481d4a4bafcb2dc9bf15b\": container with ID starting with 1d774aa342629f7fdb30a968bab519fd61ee4233479481d4a4bafcb2dc9bf15b not found: ID does not exist" Mar 20 08:34:57.381647 master-0 kubenswrapper[4041]: I0320 08:34:57.381625 4041 scope.go:117] "RemoveContainer" containerID="ffcb70bf7040080fb961520354fdd652be8ba5f4894e64db3333169f16167de6" Mar 20 08:34:57.381973 master-0 kubenswrapper[4041]: E0320 08:34:57.381949 4041 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffcb70bf7040080fb961520354fdd652be8ba5f4894e64db3333169f16167de6\": container with ID starting with ffcb70bf7040080fb961520354fdd652be8ba5f4894e64db3333169f16167de6 not found: ID does not exist" containerID="ffcb70bf7040080fb961520354fdd652be8ba5f4894e64db3333169f16167de6" Mar 20 08:34:57.382019 master-0 kubenswrapper[4041]: I0320 08:34:57.381979 4041 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffcb70bf7040080fb961520354fdd652be8ba5f4894e64db3333169f16167de6"} err="failed to get container status 
\"ffcb70bf7040080fb961520354fdd652be8ba5f4894e64db3333169f16167de6\": rpc error: code = NotFound desc = could not find container \"ffcb70bf7040080fb961520354fdd652be8ba5f4894e64db3333169f16167de6\": container with ID starting with ffcb70bf7040080fb961520354fdd652be8ba5f4894e64db3333169f16167de6 not found: ID does not exist" Mar 20 08:34:57.382019 master-0 kubenswrapper[4041]: I0320 08:34:57.382006 4041 scope.go:117] "RemoveContainer" containerID="027ee159cf4c84c1249b8b3a8b23bb814d8ae496d4ab9df979f3560b39dc2d79" Mar 20 08:34:57.382293 master-0 kubenswrapper[4041]: E0320 08:34:57.382240 4041 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"027ee159cf4c84c1249b8b3a8b23bb814d8ae496d4ab9df979f3560b39dc2d79\": container with ID starting with 027ee159cf4c84c1249b8b3a8b23bb814d8ae496d4ab9df979f3560b39dc2d79 not found: ID does not exist" containerID="027ee159cf4c84c1249b8b3a8b23bb814d8ae496d4ab9df979f3560b39dc2d79" Mar 20 08:34:57.382293 master-0 kubenswrapper[4041]: I0320 08:34:57.382269 4041 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"027ee159cf4c84c1249b8b3a8b23bb814d8ae496d4ab9df979f3560b39dc2d79"} err="failed to get container status \"027ee159cf4c84c1249b8b3a8b23bb814d8ae496d4ab9df979f3560b39dc2d79\": rpc error: code = NotFound desc = could not find container \"027ee159cf4c84c1249b8b3a8b23bb814d8ae496d4ab9df979f3560b39dc2d79\": container with ID starting with 027ee159cf4c84c1249b8b3a8b23bb814d8ae496d4ab9df979f3560b39dc2d79 not found: ID does not exist" Mar 20 08:34:57.382293 master-0 kubenswrapper[4041]: I0320 08:34:57.382283 4041 scope.go:117] "RemoveContainer" containerID="5bfcae3f91be7fe1ddfb8a13551bb3dff32c9a636de438a89b4efe379f02514f" Mar 20 08:34:57.382695 master-0 kubenswrapper[4041]: E0320 08:34:57.382648 4041 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"5bfcae3f91be7fe1ddfb8a13551bb3dff32c9a636de438a89b4efe379f02514f\": container with ID starting with 5bfcae3f91be7fe1ddfb8a13551bb3dff32c9a636de438a89b4efe379f02514f not found: ID does not exist" containerID="5bfcae3f91be7fe1ddfb8a13551bb3dff32c9a636de438a89b4efe379f02514f" Mar 20 08:34:57.382746 master-0 kubenswrapper[4041]: I0320 08:34:57.382687 4041 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bfcae3f91be7fe1ddfb8a13551bb3dff32c9a636de438a89b4efe379f02514f"} err="failed to get container status \"5bfcae3f91be7fe1ddfb8a13551bb3dff32c9a636de438a89b4efe379f02514f\": rpc error: code = NotFound desc = could not find container \"5bfcae3f91be7fe1ddfb8a13551bb3dff32c9a636de438a89b4efe379f02514f\": container with ID starting with 5bfcae3f91be7fe1ddfb8a13551bb3dff32c9a636de438a89b4efe379f02514f not found: ID does not exist" Mar 20 08:34:57.382746 master-0 kubenswrapper[4041]: I0320 08:34:57.382710 4041 scope.go:117] "RemoveContainer" containerID="9d8b5757186b0abb0356c7dc8c0950266458da6c9fd6de30efea532b963f1eee" Mar 20 08:34:57.383051 master-0 kubenswrapper[4041]: E0320 08:34:57.383022 4041 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d8b5757186b0abb0356c7dc8c0950266458da6c9fd6de30efea532b963f1eee\": container with ID starting with 9d8b5757186b0abb0356c7dc8c0950266458da6c9fd6de30efea532b963f1eee not found: ID does not exist" containerID="9d8b5757186b0abb0356c7dc8c0950266458da6c9fd6de30efea532b963f1eee" Mar 20 08:34:57.383101 master-0 kubenswrapper[4041]: I0320 08:34:57.383057 4041 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d8b5757186b0abb0356c7dc8c0950266458da6c9fd6de30efea532b963f1eee"} err="failed to get container status \"9d8b5757186b0abb0356c7dc8c0950266458da6c9fd6de30efea532b963f1eee\": rpc error: code = NotFound desc = could not find container 
\"9d8b5757186b0abb0356c7dc8c0950266458da6c9fd6de30efea532b963f1eee\": container with ID starting with 9d8b5757186b0abb0356c7dc8c0950266458da6c9fd6de30efea532b963f1eee not found: ID does not exist" Mar 20 08:34:57.383101 master-0 kubenswrapper[4041]: I0320 08:34:57.383081 4041 scope.go:117] "RemoveContainer" containerID="2eb2017a2994f831033d2232c2f0c55ec60ee11e7a5841d1158c8dc98a064f25" Mar 20 08:34:57.383748 master-0 kubenswrapper[4041]: I0320 08:34:57.383714 4041 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2eb2017a2994f831033d2232c2f0c55ec60ee11e7a5841d1158c8dc98a064f25"} err="failed to get container status \"2eb2017a2994f831033d2232c2f0c55ec60ee11e7a5841d1158c8dc98a064f25\": rpc error: code = NotFound desc = could not find container \"2eb2017a2994f831033d2232c2f0c55ec60ee11e7a5841d1158c8dc98a064f25\": container with ID starting with 2eb2017a2994f831033d2232c2f0c55ec60ee11e7a5841d1158c8dc98a064f25 not found: ID does not exist" Mar 20 08:34:57.383802 master-0 kubenswrapper[4041]: I0320 08:34:57.383748 4041 scope.go:117] "RemoveContainer" containerID="688d6f88f24a295dfffbab4712080c913cab447392ca97ab4f10a450098c9db7" Mar 20 08:34:57.384059 master-0 kubenswrapper[4041]: I0320 08:34:57.384037 4041 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"688d6f88f24a295dfffbab4712080c913cab447392ca97ab4f10a450098c9db7"} err="failed to get container status \"688d6f88f24a295dfffbab4712080c913cab447392ca97ab4f10a450098c9db7\": rpc error: code = NotFound desc = could not find container \"688d6f88f24a295dfffbab4712080c913cab447392ca97ab4f10a450098c9db7\": container with ID starting with 688d6f88f24a295dfffbab4712080c913cab447392ca97ab4f10a450098c9db7 not found: ID does not exist" Mar 20 08:34:57.384112 master-0 kubenswrapper[4041]: I0320 08:34:57.384060 4041 scope.go:117] "RemoveContainer" containerID="6f2f0260b2d0745477681ebf3c6bf6ff162a5ba0fb12ca8c5656caa8d62aff19" Mar 20 
08:34:57.384516 master-0 kubenswrapper[4041]: I0320 08:34:57.384495 4041 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f2f0260b2d0745477681ebf3c6bf6ff162a5ba0fb12ca8c5656caa8d62aff19"} err="failed to get container status \"6f2f0260b2d0745477681ebf3c6bf6ff162a5ba0fb12ca8c5656caa8d62aff19\": rpc error: code = NotFound desc = could not find container \"6f2f0260b2d0745477681ebf3c6bf6ff162a5ba0fb12ca8c5656caa8d62aff19\": container with ID starting with 6f2f0260b2d0745477681ebf3c6bf6ff162a5ba0fb12ca8c5656caa8d62aff19 not found: ID does not exist" Mar 20 08:34:57.384570 master-0 kubenswrapper[4041]: I0320 08:34:57.384516 4041 scope.go:117] "RemoveContainer" containerID="4d65e10a1397dda90a3f09b3185de8aba8a60549930df2407ef2e9d35c63a506" Mar 20 08:34:57.384841 master-0 kubenswrapper[4041]: I0320 08:34:57.384805 4041 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d65e10a1397dda90a3f09b3185de8aba8a60549930df2407ef2e9d35c63a506"} err="failed to get container status \"4d65e10a1397dda90a3f09b3185de8aba8a60549930df2407ef2e9d35c63a506\": rpc error: code = NotFound desc = could not find container \"4d65e10a1397dda90a3f09b3185de8aba8a60549930df2407ef2e9d35c63a506\": container with ID starting with 4d65e10a1397dda90a3f09b3185de8aba8a60549930df2407ef2e9d35c63a506 not found: ID does not exist" Mar 20 08:34:57.384841 master-0 kubenswrapper[4041]: I0320 08:34:57.384827 4041 scope.go:117] "RemoveContainer" containerID="1d774aa342629f7fdb30a968bab519fd61ee4233479481d4a4bafcb2dc9bf15b" Mar 20 08:34:57.385088 master-0 kubenswrapper[4041]: I0320 08:34:57.385065 4041 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d774aa342629f7fdb30a968bab519fd61ee4233479481d4a4bafcb2dc9bf15b"} err="failed to get container status \"1d774aa342629f7fdb30a968bab519fd61ee4233479481d4a4bafcb2dc9bf15b\": rpc error: code = NotFound desc = could not find container 
\"1d774aa342629f7fdb30a968bab519fd61ee4233479481d4a4bafcb2dc9bf15b\": container with ID starting with 1d774aa342629f7fdb30a968bab519fd61ee4233479481d4a4bafcb2dc9bf15b not found: ID does not exist" Mar 20 08:34:57.385088 master-0 kubenswrapper[4041]: I0320 08:34:57.385083 4041 scope.go:117] "RemoveContainer" containerID="ffcb70bf7040080fb961520354fdd652be8ba5f4894e64db3333169f16167de6" Mar 20 08:34:57.385411 master-0 kubenswrapper[4041]: I0320 08:34:57.385384 4041 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffcb70bf7040080fb961520354fdd652be8ba5f4894e64db3333169f16167de6"} err="failed to get container status \"ffcb70bf7040080fb961520354fdd652be8ba5f4894e64db3333169f16167de6\": rpc error: code = NotFound desc = could not find container \"ffcb70bf7040080fb961520354fdd652be8ba5f4894e64db3333169f16167de6\": container with ID starting with ffcb70bf7040080fb961520354fdd652be8ba5f4894e64db3333169f16167de6 not found: ID does not exist" Mar 20 08:34:57.385411 master-0 kubenswrapper[4041]: I0320 08:34:57.385406 4041 scope.go:117] "RemoveContainer" containerID="027ee159cf4c84c1249b8b3a8b23bb814d8ae496d4ab9df979f3560b39dc2d79" Mar 20 08:34:57.385807 master-0 kubenswrapper[4041]: I0320 08:34:57.385753 4041 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"027ee159cf4c84c1249b8b3a8b23bb814d8ae496d4ab9df979f3560b39dc2d79"} err="failed to get container status \"027ee159cf4c84c1249b8b3a8b23bb814d8ae496d4ab9df979f3560b39dc2d79\": rpc error: code = NotFound desc = could not find container \"027ee159cf4c84c1249b8b3a8b23bb814d8ae496d4ab9df979f3560b39dc2d79\": container with ID starting with 027ee159cf4c84c1249b8b3a8b23bb814d8ae496d4ab9df979f3560b39dc2d79 not found: ID does not exist" Mar 20 08:34:57.385854 master-0 kubenswrapper[4041]: I0320 08:34:57.385820 4041 scope.go:117] "RemoveContainer" containerID="5bfcae3f91be7fe1ddfb8a13551bb3dff32c9a636de438a89b4efe379f02514f" Mar 20 
08:34:57.386171 master-0 kubenswrapper[4041]: I0320 08:34:57.386147 4041 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bfcae3f91be7fe1ddfb8a13551bb3dff32c9a636de438a89b4efe379f02514f"} err="failed to get container status \"5bfcae3f91be7fe1ddfb8a13551bb3dff32c9a636de438a89b4efe379f02514f\": rpc error: code = NotFound desc = could not find container \"5bfcae3f91be7fe1ddfb8a13551bb3dff32c9a636de438a89b4efe379f02514f\": container with ID starting with 5bfcae3f91be7fe1ddfb8a13551bb3dff32c9a636de438a89b4efe379f02514f not found: ID does not exist" Mar 20 08:34:57.386216 master-0 kubenswrapper[4041]: I0320 08:34:57.386171 4041 scope.go:117] "RemoveContainer" containerID="9d8b5757186b0abb0356c7dc8c0950266458da6c9fd6de30efea532b963f1eee" Mar 20 08:34:57.386762 master-0 kubenswrapper[4041]: I0320 08:34:57.386692 4041 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d8b5757186b0abb0356c7dc8c0950266458da6c9fd6de30efea532b963f1eee"} err="failed to get container status \"9d8b5757186b0abb0356c7dc8c0950266458da6c9fd6de30efea532b963f1eee\": rpc error: code = NotFound desc = could not find container \"9d8b5757186b0abb0356c7dc8c0950266458da6c9fd6de30efea532b963f1eee\": container with ID starting with 9d8b5757186b0abb0356c7dc8c0950266458da6c9fd6de30efea532b963f1eee not found: ID does not exist" Mar 20 08:34:57.386821 master-0 kubenswrapper[4041]: I0320 08:34:57.386763 4041 scope.go:117] "RemoveContainer" containerID="2eb2017a2994f831033d2232c2f0c55ec60ee11e7a5841d1158c8dc98a064f25" Mar 20 08:34:57.387208 master-0 kubenswrapper[4041]: I0320 08:34:57.387178 4041 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2eb2017a2994f831033d2232c2f0c55ec60ee11e7a5841d1158c8dc98a064f25"} err="failed to get container status \"2eb2017a2994f831033d2232c2f0c55ec60ee11e7a5841d1158c8dc98a064f25\": rpc error: code = NotFound desc = could not find container 
\"2eb2017a2994f831033d2232c2f0c55ec60ee11e7a5841d1158c8dc98a064f25\": container with ID starting with 2eb2017a2994f831033d2232c2f0c55ec60ee11e7a5841d1158c8dc98a064f25 not found: ID does not exist" Mar 20 08:34:57.387241 master-0 kubenswrapper[4041]: I0320 08:34:57.387207 4041 scope.go:117] "RemoveContainer" containerID="688d6f88f24a295dfffbab4712080c913cab447392ca97ab4f10a450098c9db7" Mar 20 08:34:57.387714 master-0 kubenswrapper[4041]: I0320 08:34:57.387680 4041 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"688d6f88f24a295dfffbab4712080c913cab447392ca97ab4f10a450098c9db7"} err="failed to get container status \"688d6f88f24a295dfffbab4712080c913cab447392ca97ab4f10a450098c9db7\": rpc error: code = NotFound desc = could not find container \"688d6f88f24a295dfffbab4712080c913cab447392ca97ab4f10a450098c9db7\": container with ID starting with 688d6f88f24a295dfffbab4712080c913cab447392ca97ab4f10a450098c9db7 not found: ID does not exist" Mar 20 08:34:57.387714 master-0 kubenswrapper[4041]: I0320 08:34:57.387708 4041 scope.go:117] "RemoveContainer" containerID="6f2f0260b2d0745477681ebf3c6bf6ff162a5ba0fb12ca8c5656caa8d62aff19" Mar 20 08:34:57.388106 master-0 kubenswrapper[4041]: I0320 08:34:57.388081 4041 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f2f0260b2d0745477681ebf3c6bf6ff162a5ba0fb12ca8c5656caa8d62aff19"} err="failed to get container status \"6f2f0260b2d0745477681ebf3c6bf6ff162a5ba0fb12ca8c5656caa8d62aff19\": rpc error: code = NotFound desc = could not find container \"6f2f0260b2d0745477681ebf3c6bf6ff162a5ba0fb12ca8c5656caa8d62aff19\": container with ID starting with 6f2f0260b2d0745477681ebf3c6bf6ff162a5ba0fb12ca8c5656caa8d62aff19 not found: ID does not exist" Mar 20 08:34:57.388106 master-0 kubenswrapper[4041]: I0320 08:34:57.388101 4041 scope.go:117] "RemoveContainer" containerID="4d65e10a1397dda90a3f09b3185de8aba8a60549930df2407ef2e9d35c63a506" Mar 20 
08:34:57.388746 master-0 kubenswrapper[4041]: I0320 08:34:57.388712 4041 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d65e10a1397dda90a3f09b3185de8aba8a60549930df2407ef2e9d35c63a506"} err="failed to get container status \"4d65e10a1397dda90a3f09b3185de8aba8a60549930df2407ef2e9d35c63a506\": rpc error: code = NotFound desc = could not find container \"4d65e10a1397dda90a3f09b3185de8aba8a60549930df2407ef2e9d35c63a506\": container with ID starting with 4d65e10a1397dda90a3f09b3185de8aba8a60549930df2407ef2e9d35c63a506 not found: ID does not exist" Mar 20 08:34:57.388746 master-0 kubenswrapper[4041]: I0320 08:34:57.388738 4041 scope.go:117] "RemoveContainer" containerID="1d774aa342629f7fdb30a968bab519fd61ee4233479481d4a4bafcb2dc9bf15b" Mar 20 08:34:57.389096 master-0 kubenswrapper[4041]: I0320 08:34:57.389070 4041 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d774aa342629f7fdb30a968bab519fd61ee4233479481d4a4bafcb2dc9bf15b"} err="failed to get container status \"1d774aa342629f7fdb30a968bab519fd61ee4233479481d4a4bafcb2dc9bf15b\": rpc error: code = NotFound desc = could not find container \"1d774aa342629f7fdb30a968bab519fd61ee4233479481d4a4bafcb2dc9bf15b\": container with ID starting with 1d774aa342629f7fdb30a968bab519fd61ee4233479481d4a4bafcb2dc9bf15b not found: ID does not exist" Mar 20 08:34:57.389096 master-0 kubenswrapper[4041]: I0320 08:34:57.389090 4041 scope.go:117] "RemoveContainer" containerID="ffcb70bf7040080fb961520354fdd652be8ba5f4894e64db3333169f16167de6" Mar 20 08:34:57.389510 master-0 kubenswrapper[4041]: I0320 08:34:57.389466 4041 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffcb70bf7040080fb961520354fdd652be8ba5f4894e64db3333169f16167de6"} err="failed to get container status \"ffcb70bf7040080fb961520354fdd652be8ba5f4894e64db3333169f16167de6\": rpc error: code = NotFound desc = could not find container 
\"ffcb70bf7040080fb961520354fdd652be8ba5f4894e64db3333169f16167de6\": container with ID starting with ffcb70bf7040080fb961520354fdd652be8ba5f4894e64db3333169f16167de6 not found: ID does not exist" Mar 20 08:34:57.389510 master-0 kubenswrapper[4041]: I0320 08:34:57.389500 4041 scope.go:117] "RemoveContainer" containerID="027ee159cf4c84c1249b8b3a8b23bb814d8ae496d4ab9df979f3560b39dc2d79" Mar 20 08:34:57.389890 master-0 kubenswrapper[4041]: I0320 08:34:57.389861 4041 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"027ee159cf4c84c1249b8b3a8b23bb814d8ae496d4ab9df979f3560b39dc2d79"} err="failed to get container status \"027ee159cf4c84c1249b8b3a8b23bb814d8ae496d4ab9df979f3560b39dc2d79\": rpc error: code = NotFound desc = could not find container \"027ee159cf4c84c1249b8b3a8b23bb814d8ae496d4ab9df979f3560b39dc2d79\": container with ID starting with 027ee159cf4c84c1249b8b3a8b23bb814d8ae496d4ab9df979f3560b39dc2d79 not found: ID does not exist" Mar 20 08:34:57.389925 master-0 kubenswrapper[4041]: I0320 08:34:57.389888 4041 scope.go:117] "RemoveContainer" containerID="5bfcae3f91be7fe1ddfb8a13551bb3dff32c9a636de438a89b4efe379f02514f" Mar 20 08:34:57.390209 master-0 kubenswrapper[4041]: I0320 08:34:57.390187 4041 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bfcae3f91be7fe1ddfb8a13551bb3dff32c9a636de438a89b4efe379f02514f"} err="failed to get container status \"5bfcae3f91be7fe1ddfb8a13551bb3dff32c9a636de438a89b4efe379f02514f\": rpc error: code = NotFound desc = could not find container \"5bfcae3f91be7fe1ddfb8a13551bb3dff32c9a636de438a89b4efe379f02514f\": container with ID starting with 5bfcae3f91be7fe1ddfb8a13551bb3dff32c9a636de438a89b4efe379f02514f not found: ID does not exist" Mar 20 08:34:57.390239 master-0 kubenswrapper[4041]: I0320 08:34:57.390211 4041 scope.go:117] "RemoveContainer" containerID="9d8b5757186b0abb0356c7dc8c0950266458da6c9fd6de30efea532b963f1eee" Mar 20 
08:34:57.390874 master-0 kubenswrapper[4041]: I0320 08:34:57.390535 4041 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d8b5757186b0abb0356c7dc8c0950266458da6c9fd6de30efea532b963f1eee"} err="failed to get container status \"9d8b5757186b0abb0356c7dc8c0950266458da6c9fd6de30efea532b963f1eee\": rpc error: code = NotFound desc = could not find container \"9d8b5757186b0abb0356c7dc8c0950266458da6c9fd6de30efea532b963f1eee\": container with ID starting with 9d8b5757186b0abb0356c7dc8c0950266458da6c9fd6de30efea532b963f1eee not found: ID does not exist" Mar 20 08:34:57.390874 master-0 kubenswrapper[4041]: I0320 08:34:57.390560 4041 scope.go:117] "RemoveContainer" containerID="2eb2017a2994f831033d2232c2f0c55ec60ee11e7a5841d1158c8dc98a064f25" Mar 20 08:34:57.390988 master-0 kubenswrapper[4041]: I0320 08:34:57.390908 4041 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2eb2017a2994f831033d2232c2f0c55ec60ee11e7a5841d1158c8dc98a064f25"} err="failed to get container status \"2eb2017a2994f831033d2232c2f0c55ec60ee11e7a5841d1158c8dc98a064f25\": rpc error: code = NotFound desc = could not find container \"2eb2017a2994f831033d2232c2f0c55ec60ee11e7a5841d1158c8dc98a064f25\": container with ID starting with 2eb2017a2994f831033d2232c2f0c55ec60ee11e7a5841d1158c8dc98a064f25 not found: ID does not exist" Mar 20 08:34:57.390988 master-0 kubenswrapper[4041]: I0320 08:34:57.390949 4041 scope.go:117] "RemoveContainer" containerID="688d6f88f24a295dfffbab4712080c913cab447392ca97ab4f10a450098c9db7" Mar 20 08:34:57.391466 master-0 kubenswrapper[4041]: I0320 08:34:57.391431 4041 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"688d6f88f24a295dfffbab4712080c913cab447392ca97ab4f10a450098c9db7"} err="failed to get container status \"688d6f88f24a295dfffbab4712080c913cab447392ca97ab4f10a450098c9db7\": rpc error: code = NotFound desc = could not find container 
\"688d6f88f24a295dfffbab4712080c913cab447392ca97ab4f10a450098c9db7\": container with ID starting with 688d6f88f24a295dfffbab4712080c913cab447392ca97ab4f10a450098c9db7 not found: ID does not exist" Mar 20 08:34:57.391466 master-0 kubenswrapper[4041]: I0320 08:34:57.391462 4041 scope.go:117] "RemoveContainer" containerID="6f2f0260b2d0745477681ebf3c6bf6ff162a5ba0fb12ca8c5656caa8d62aff19" Mar 20 08:34:57.392093 master-0 kubenswrapper[4041]: I0320 08:34:57.392075 4041 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f2f0260b2d0745477681ebf3c6bf6ff162a5ba0fb12ca8c5656caa8d62aff19"} err="failed to get container status \"6f2f0260b2d0745477681ebf3c6bf6ff162a5ba0fb12ca8c5656caa8d62aff19\": rpc error: code = NotFound desc = could not find container \"6f2f0260b2d0745477681ebf3c6bf6ff162a5ba0fb12ca8c5656caa8d62aff19\": container with ID starting with 6f2f0260b2d0745477681ebf3c6bf6ff162a5ba0fb12ca8c5656caa8d62aff19 not found: ID does not exist" Mar 20 08:34:57.392149 master-0 kubenswrapper[4041]: I0320 08:34:57.392092 4041 scope.go:117] "RemoveContainer" containerID="4d65e10a1397dda90a3f09b3185de8aba8a60549930df2407ef2e9d35c63a506" Mar 20 08:34:57.392634 master-0 kubenswrapper[4041]: I0320 08:34:57.392572 4041 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d65e10a1397dda90a3f09b3185de8aba8a60549930df2407ef2e9d35c63a506"} err="failed to get container status \"4d65e10a1397dda90a3f09b3185de8aba8a60549930df2407ef2e9d35c63a506\": rpc error: code = NotFound desc = could not find container \"4d65e10a1397dda90a3f09b3185de8aba8a60549930df2407ef2e9d35c63a506\": container with ID starting with 4d65e10a1397dda90a3f09b3185de8aba8a60549930df2407ef2e9d35c63a506 not found: ID does not exist" Mar 20 08:34:57.392689 master-0 kubenswrapper[4041]: I0320 08:34:57.392631 4041 scope.go:117] "RemoveContainer" containerID="1d774aa342629f7fdb30a968bab519fd61ee4233479481d4a4bafcb2dc9bf15b" Mar 20 
08:34:57.393053 master-0 kubenswrapper[4041]: I0320 08:34:57.392991 4041 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d774aa342629f7fdb30a968bab519fd61ee4233479481d4a4bafcb2dc9bf15b"} err="failed to get container status \"1d774aa342629f7fdb30a968bab519fd61ee4233479481d4a4bafcb2dc9bf15b\": rpc error: code = NotFound desc = could not find container \"1d774aa342629f7fdb30a968bab519fd61ee4233479481d4a4bafcb2dc9bf15b\": container with ID starting with 1d774aa342629f7fdb30a968bab519fd61ee4233479481d4a4bafcb2dc9bf15b not found: ID does not exist" Mar 20 08:34:57.393053 master-0 kubenswrapper[4041]: I0320 08:34:57.393018 4041 scope.go:117] "RemoveContainer" containerID="ffcb70bf7040080fb961520354fdd652be8ba5f4894e64db3333169f16167de6" Mar 20 08:34:57.393461 master-0 kubenswrapper[4041]: I0320 08:34:57.393425 4041 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffcb70bf7040080fb961520354fdd652be8ba5f4894e64db3333169f16167de6"} err="failed to get container status \"ffcb70bf7040080fb961520354fdd652be8ba5f4894e64db3333169f16167de6\": rpc error: code = NotFound desc = could not find container \"ffcb70bf7040080fb961520354fdd652be8ba5f4894e64db3333169f16167de6\": container with ID starting with ffcb70bf7040080fb961520354fdd652be8ba5f4894e64db3333169f16167de6 not found: ID does not exist" Mar 20 08:34:57.393461 master-0 kubenswrapper[4041]: I0320 08:34:57.393454 4041 scope.go:117] "RemoveContainer" containerID="027ee159cf4c84c1249b8b3a8b23bb814d8ae496d4ab9df979f3560b39dc2d79" Mar 20 08:34:57.393879 master-0 kubenswrapper[4041]: I0320 08:34:57.393822 4041 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"027ee159cf4c84c1249b8b3a8b23bb814d8ae496d4ab9df979f3560b39dc2d79"} err="failed to get container status \"027ee159cf4c84c1249b8b3a8b23bb814d8ae496d4ab9df979f3560b39dc2d79\": rpc error: code = NotFound desc = could not find container 
\"027ee159cf4c84c1249b8b3a8b23bb814d8ae496d4ab9df979f3560b39dc2d79\": container with ID starting with 027ee159cf4c84c1249b8b3a8b23bb814d8ae496d4ab9df979f3560b39dc2d79 not found: ID does not exist" Mar 20 08:34:57.393950 master-0 kubenswrapper[4041]: I0320 08:34:57.393863 4041 scope.go:117] "RemoveContainer" containerID="5bfcae3f91be7fe1ddfb8a13551bb3dff32c9a636de438a89b4efe379f02514f" Mar 20 08:34:57.394246 master-0 kubenswrapper[4041]: I0320 08:34:57.394214 4041 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bfcae3f91be7fe1ddfb8a13551bb3dff32c9a636de438a89b4efe379f02514f"} err="failed to get container status \"5bfcae3f91be7fe1ddfb8a13551bb3dff32c9a636de438a89b4efe379f02514f\": rpc error: code = NotFound desc = could not find container \"5bfcae3f91be7fe1ddfb8a13551bb3dff32c9a636de438a89b4efe379f02514f\": container with ID starting with 5bfcae3f91be7fe1ddfb8a13551bb3dff32c9a636de438a89b4efe379f02514f not found: ID does not exist" Mar 20 08:34:57.394246 master-0 kubenswrapper[4041]: I0320 08:34:57.394235 4041 scope.go:117] "RemoveContainer" containerID="9d8b5757186b0abb0356c7dc8c0950266458da6c9fd6de30efea532b963f1eee" Mar 20 08:34:57.394885 master-0 kubenswrapper[4041]: I0320 08:34:57.394831 4041 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d8b5757186b0abb0356c7dc8c0950266458da6c9fd6de30efea532b963f1eee"} err="failed to get container status \"9d8b5757186b0abb0356c7dc8c0950266458da6c9fd6de30efea532b963f1eee\": rpc error: code = NotFound desc = could not find container \"9d8b5757186b0abb0356c7dc8c0950266458da6c9fd6de30efea532b963f1eee\": container with ID starting with 9d8b5757186b0abb0356c7dc8c0950266458da6c9fd6de30efea532b963f1eee not found: ID does not exist" Mar 20 08:34:57.394940 master-0 kubenswrapper[4041]: I0320 08:34:57.394897 4041 scope.go:117] "RemoveContainer" containerID="2eb2017a2994f831033d2232c2f0c55ec60ee11e7a5841d1158c8dc98a064f25" Mar 20 
08:34:57.395826 master-0 kubenswrapper[4041]: I0320 08:34:57.395786 4041 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2eb2017a2994f831033d2232c2f0c55ec60ee11e7a5841d1158c8dc98a064f25"} err="failed to get container status \"2eb2017a2994f831033d2232c2f0c55ec60ee11e7a5841d1158c8dc98a064f25\": rpc error: code = NotFound desc = could not find container \"2eb2017a2994f831033d2232c2f0c55ec60ee11e7a5841d1158c8dc98a064f25\": container with ID starting with 2eb2017a2994f831033d2232c2f0c55ec60ee11e7a5841d1158c8dc98a064f25 not found: ID does not exist" Mar 20 08:34:57.395860 master-0 kubenswrapper[4041]: I0320 08:34:57.395823 4041 scope.go:117] "RemoveContainer" containerID="688d6f88f24a295dfffbab4712080c913cab447392ca97ab4f10a450098c9db7" Mar 20 08:34:57.397039 master-0 kubenswrapper[4041]: I0320 08:34:57.396989 4041 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"688d6f88f24a295dfffbab4712080c913cab447392ca97ab4f10a450098c9db7"} err="failed to get container status \"688d6f88f24a295dfffbab4712080c913cab447392ca97ab4f10a450098c9db7\": rpc error: code = NotFound desc = could not find container \"688d6f88f24a295dfffbab4712080c913cab447392ca97ab4f10a450098c9db7\": container with ID starting with 688d6f88f24a295dfffbab4712080c913cab447392ca97ab4f10a450098c9db7 not found: ID does not exist" Mar 20 08:34:57.397094 master-0 kubenswrapper[4041]: I0320 08:34:57.397038 4041 scope.go:117] "RemoveContainer" containerID="6f2f0260b2d0745477681ebf3c6bf6ff162a5ba0fb12ca8c5656caa8d62aff19" Mar 20 08:34:57.397590 master-0 kubenswrapper[4041]: I0320 08:34:57.397549 4041 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f2f0260b2d0745477681ebf3c6bf6ff162a5ba0fb12ca8c5656caa8d62aff19"} err="failed to get container status \"6f2f0260b2d0745477681ebf3c6bf6ff162a5ba0fb12ca8c5656caa8d62aff19\": rpc error: code = NotFound desc = could not find container 
\"6f2f0260b2d0745477681ebf3c6bf6ff162a5ba0fb12ca8c5656caa8d62aff19\": container with ID starting with 6f2f0260b2d0745477681ebf3c6bf6ff162a5ba0fb12ca8c5656caa8d62aff19 not found: ID does not exist" Mar 20 08:34:57.397590 master-0 kubenswrapper[4041]: I0320 08:34:57.397580 4041 scope.go:117] "RemoveContainer" containerID="4d65e10a1397dda90a3f09b3185de8aba8a60549930df2407ef2e9d35c63a506" Mar 20 08:34:57.398354 master-0 kubenswrapper[4041]: I0320 08:34:57.398308 4041 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d65e10a1397dda90a3f09b3185de8aba8a60549930df2407ef2e9d35c63a506"} err="failed to get container status \"4d65e10a1397dda90a3f09b3185de8aba8a60549930df2407ef2e9d35c63a506\": rpc error: code = NotFound desc = could not find container \"4d65e10a1397dda90a3f09b3185de8aba8a60549930df2407ef2e9d35c63a506\": container with ID starting with 4d65e10a1397dda90a3f09b3185de8aba8a60549930df2407ef2e9d35c63a506 not found: ID does not exist" Mar 20 08:34:57.398404 master-0 kubenswrapper[4041]: I0320 08:34:57.398351 4041 scope.go:117] "RemoveContainer" containerID="1d774aa342629f7fdb30a968bab519fd61ee4233479481d4a4bafcb2dc9bf15b" Mar 20 08:34:57.398908 master-0 kubenswrapper[4041]: I0320 08:34:57.398866 4041 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1d774aa342629f7fdb30a968bab519fd61ee4233479481d4a4bafcb2dc9bf15b"} err="failed to get container status \"1d774aa342629f7fdb30a968bab519fd61ee4233479481d4a4bafcb2dc9bf15b\": rpc error: code = NotFound desc = could not find container \"1d774aa342629f7fdb30a968bab519fd61ee4233479481d4a4bafcb2dc9bf15b\": container with ID starting with 1d774aa342629f7fdb30a968bab519fd61ee4233479481d4a4bafcb2dc9bf15b not found: ID does not exist" Mar 20 08:34:57.398908 master-0 kubenswrapper[4041]: I0320 08:34:57.398896 4041 scope.go:117] "RemoveContainer" containerID="ffcb70bf7040080fb961520354fdd652be8ba5f4894e64db3333169f16167de6" Mar 20 
08:34:57.399313 master-0 kubenswrapper[4041]: I0320 08:34:57.399237 4041 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffcb70bf7040080fb961520354fdd652be8ba5f4894e64db3333169f16167de6"} err="failed to get container status \"ffcb70bf7040080fb961520354fdd652be8ba5f4894e64db3333169f16167de6\": rpc error: code = NotFound desc = could not find container \"ffcb70bf7040080fb961520354fdd652be8ba5f4894e64db3333169f16167de6\": container with ID starting with ffcb70bf7040080fb961520354fdd652be8ba5f4894e64db3333169f16167de6 not found: ID does not exist" Mar 20 08:34:57.542437 master-0 kubenswrapper[4041]: I0320 08:34:57.542339 4041 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="669651d1-cadf-4a5c-bed8-6ff2107774f8" path="/var/lib/kubelet/pods/669651d1-cadf-4a5c-bed8-6ff2107774f8/volumes" Mar 20 08:34:58.186563 master-0 kubenswrapper[4041]: I0320 08:34:58.186470 4041 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" event={"ID":"d26a4fce-8eed-44d0-96a3-40ffd0b336a6","Type":"ContainerStarted","Data":"1ce54a5590826875560432ea744f2460f27a35494ad527707d35fa0bc9c9518f"} Mar 20 08:34:58.187427 master-0 kubenswrapper[4041]: I0320 08:34:58.186574 4041 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" event={"ID":"d26a4fce-8eed-44d0-96a3-40ffd0b336a6","Type":"ContainerStarted","Data":"71492ac2213cc400e251902d25ef6b6543e9174f35a2747f77655dffa54c98ae"} Mar 20 08:34:58.187427 master-0 kubenswrapper[4041]: I0320 08:34:58.186598 4041 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" event={"ID":"d26a4fce-8eed-44d0-96a3-40ffd0b336a6","Type":"ContainerStarted","Data":"b8db8080b200536a9377078616125baf2af90c4794ebd829d7c5733866acceb3"} Mar 20 08:34:58.187427 master-0 kubenswrapper[4041]: I0320 08:34:58.186616 4041 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" event={"ID":"d26a4fce-8eed-44d0-96a3-40ffd0b336a6","Type":"ContainerStarted","Data":"ffad94ed7dd07d28c05d487c0a64bf6261be7c124b5aa2806f67a670c439c855"} Mar 20 08:34:58.187427 master-0 kubenswrapper[4041]: I0320 08:34:58.186633 4041 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" event={"ID":"d26a4fce-8eed-44d0-96a3-40ffd0b336a6","Type":"ContainerStarted","Data":"cbb0777d86fe8aef7b45d0b9716a093118e993114d1cf5dd7c366faf98e23cf2"} Mar 20 08:34:58.187427 master-0 kubenswrapper[4041]: I0320 08:34:58.186651 4041 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" event={"ID":"d26a4fce-8eed-44d0-96a3-40ffd0b336a6","Type":"ContainerStarted","Data":"8126274bfe0fc18cfc9cd1bb527f1c5098c2b15352a76b2b0bde84131edc6361"} Mar 20 08:34:58.535640 master-0 kubenswrapper[4041]: I0320 08:34:58.535423 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nfrth" Mar 20 08:34:58.535871 master-0 kubenswrapper[4041]: E0320 08:34:58.535653 4041 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nfrth" podUID="00350ac7-b40a-4459-b94c-a37d7b613645" Mar 20 08:34:58.535871 master-0 kubenswrapper[4041]: I0320 08:34:58.535716 4041 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-j9jjm" Mar 20 08:34:58.535987 master-0 kubenswrapper[4041]: E0320 08:34:58.535874 4041 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-j9jjm" podUID="ca6e644f-c53b-41dd-a16f-9fb9997533dd" Mar 20 08:34:59.184042 master-0 kubenswrapper[4041]: I0320 08:34:59.183903 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nf5kc\" (UniqueName: \"kubernetes.io/projected/ca6e644f-c53b-41dd-a16f-9fb9997533dd-kube-api-access-nf5kc\") pod \"network-check-target-j9jjm\" (UID: \"ca6e644f-c53b-41dd-a16f-9fb9997533dd\") " pod="openshift-network-diagnostics/network-check-target-j9jjm" Mar 20 08:34:59.184408 master-0 kubenswrapper[4041]: E0320 08:34:59.184181 4041 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 20 08:34:59.184408 master-0 kubenswrapper[4041]: E0320 08:34:59.184224 4041 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 20 08:34:59.184408 master-0 kubenswrapper[4041]: E0320 08:34:59.184248 4041 projected.go:194] Error preparing data for projected volume kube-api-access-nf5kc for pod openshift-network-diagnostics/network-check-target-j9jjm: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 08:34:59.184408 master-0 kubenswrapper[4041]: E0320 08:34:59.184382 4041 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/projected/ca6e644f-c53b-41dd-a16f-9fb9997533dd-kube-api-access-nf5kc podName:ca6e644f-c53b-41dd-a16f-9fb9997533dd nodeName:}" failed. No retries permitted until 2026-03-20 08:35:31.184353275 +0000 UTC m=+138.434698830 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-nf5kc" (UniqueName: "kubernetes.io/projected/ca6e644f-c53b-41dd-a16f-9fb9997533dd-kube-api-access-nf5kc") pod "network-check-target-j9jjm" (UID: "ca6e644f-c53b-41dd-a16f-9fb9997533dd") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 08:35:00.198562 master-0 kubenswrapper[4041]: I0320 08:35:00.198470 4041 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" event={"ID":"d26a4fce-8eed-44d0-96a3-40ffd0b336a6","Type":"ContainerStarted","Data":"5532c9fde716f197f1ceb62814f4dd124f4c022d390fb2e9bb6de856aae50715"} Mar 20 08:35:00.535735 master-0 kubenswrapper[4041]: I0320 08:35:00.535355 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-j9jjm" Mar 20 08:35:00.535945 master-0 kubenswrapper[4041]: I0320 08:35:00.535442 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nfrth" Mar 20 08:35:00.535945 master-0 kubenswrapper[4041]: E0320 08:35:00.535875 4041 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-j9jjm" podUID="ca6e644f-c53b-41dd-a16f-9fb9997533dd" Mar 20 08:35:00.536209 master-0 kubenswrapper[4041]: E0320 08:35:00.536156 4041 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nfrth" podUID="00350ac7-b40a-4459-b94c-a37d7b613645" Mar 20 08:35:02.535642 master-0 kubenswrapper[4041]: I0320 08:35:02.535569 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nfrth" Mar 20 08:35:02.535642 master-0 kubenswrapper[4041]: I0320 08:35:02.535620 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-j9jjm" Mar 20 08:35:02.536485 master-0 kubenswrapper[4041]: E0320 08:35:02.535720 4041 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nfrth" podUID="00350ac7-b40a-4459-b94c-a37d7b613645" Mar 20 08:35:02.536485 master-0 kubenswrapper[4041]: E0320 08:35:02.535801 4041 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-j9jjm" podUID="ca6e644f-c53b-41dd-a16f-9fb9997533dd" Mar 20 08:35:03.213033 master-0 kubenswrapper[4041]: I0320 08:35:03.212939 4041 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" event={"ID":"d26a4fce-8eed-44d0-96a3-40ffd0b336a6","Type":"ContainerStarted","Data":"cc2e4ad2be0fc6a3ab9cf5e8dc60f935e01ae59dcef65e15f3ad03bac2eff189"} Mar 20 08:35:03.213558 master-0 kubenswrapper[4041]: I0320 08:35:03.213461 4041 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:35:03.213558 master-0 kubenswrapper[4041]: I0320 08:35:03.213528 4041 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:35:03.213558 master-0 kubenswrapper[4041]: I0320 08:35:03.213542 4041 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:35:03.246008 master-0 kubenswrapper[4041]: I0320 08:35:03.245456 4041 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:35:03.246493 master-0 kubenswrapper[4041]: I0320 08:35:03.246448 4041 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:35:03.261342 master-0 kubenswrapper[4041]: I0320 08:35:03.260975 4041 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" podStartSLOduration=7.260952555 podStartE2EDuration="7.260952555s" podCreationTimestamp="2026-03-20 08:34:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:35:03.25342141 +0000 UTC m=+110.503766925" watchObservedRunningTime="2026-03-20 08:35:03.260952555 +0000 UTC 
m=+110.511298060" Mar 20 08:35:03.261553 master-0 kubenswrapper[4041]: I0320 08:35:03.261426 4041 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-j9jjm"] Mar 20 08:35:03.261553 master-0 kubenswrapper[4041]: I0320 08:35:03.261509 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-j9jjm" Mar 20 08:35:03.261630 master-0 kubenswrapper[4041]: E0320 08:35:03.261593 4041 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-j9jjm" podUID="ca6e644f-c53b-41dd-a16f-9fb9997533dd" Mar 20 08:35:03.267949 master-0 kubenswrapper[4041]: I0320 08:35:03.267889 4041 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-nfrth"] Mar 20 08:35:03.268168 master-0 kubenswrapper[4041]: I0320 08:35:03.268016 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nfrth" Mar 20 08:35:03.268168 master-0 kubenswrapper[4041]: E0320 08:35:03.268121 4041 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nfrth" podUID="00350ac7-b40a-4459-b94c-a37d7b613645" Mar 20 08:35:04.535250 master-0 kubenswrapper[4041]: I0320 08:35:04.535187 4041 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-nfrth" Mar 20 08:35:04.536062 master-0 kubenswrapper[4041]: E0320 08:35:04.535354 4041 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-nfrth" podUID="00350ac7-b40a-4459-b94c-a37d7b613645" Mar 20 08:35:05.535951 master-0 kubenswrapper[4041]: I0320 08:35:05.535578 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-j9jjm" Mar 20 08:35:05.536444 master-0 kubenswrapper[4041]: E0320 08:35:05.536033 4041 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-j9jjm" podUID="ca6e644f-c53b-41dd-a16f-9fb9997533dd" Mar 20 08:35:06.535602 master-0 kubenswrapper[4041]: I0320 08:35:06.535501 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nfrth" Mar 20 08:35:06.535790 master-0 kubenswrapper[4041]: E0320 08:35:06.535726 4041 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-nfrth" podUID="00350ac7-b40a-4459-b94c-a37d7b613645" Mar 20 08:35:07.534913 master-0 kubenswrapper[4041]: I0320 08:35:07.534824 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-j9jjm" Mar 20 08:35:07.535688 master-0 kubenswrapper[4041]: E0320 08:35:07.535030 4041 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-j9jjm" podUID="ca6e644f-c53b-41dd-a16f-9fb9997533dd" Mar 20 08:35:08.089192 master-0 kubenswrapper[4041]: I0320 08:35:08.089044 4041 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeReady" Mar 20 08:35:08.089484 master-0 kubenswrapper[4041]: I0320 08:35:08.089359 4041 kubelet_node_status.go:538] "Fast updating node status as it just became ready" Mar 20 08:35:08.134027 master-0 kubenswrapper[4041]: I0320 08:35:08.133966 4041 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-66b84d69b-dknxr"] Mar 20 08:35:08.134376 master-0 kubenswrapper[4041]: I0320 08:35:08.134334 4041 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-66b84d69b-dknxr" Mar 20 08:35:08.137991 master-0 kubenswrapper[4041]: I0320 08:35:08.137930 4041 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 20 08:35:08.138155 master-0 kubenswrapper[4041]: I0320 08:35:08.138054 4041 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-b865698dc-zxwrd"] Mar 20 08:35:08.138247 master-0 kubenswrapper[4041]: I0320 08:35:08.138188 4041 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-58845fbb57-6vgt6"] Mar 20 08:35:08.138409 master-0 kubenswrapper[4041]: I0320 08:35:08.138375 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-b865698dc-zxwrd" Mar 20 08:35:08.138506 master-0 kubenswrapper[4041]: I0320 08:35:08.138413 4041 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-6vgt6" Mar 20 08:35:08.138685 master-0 kubenswrapper[4041]: I0320 08:35:08.138649 4041 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 20 08:35:08.140582 master-0 kubenswrapper[4041]: I0320 08:35:08.140539 4041 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Mar 20 08:35:08.142316 master-0 kubenswrapper[4041]: I0320 08:35:08.142288 4041 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 20 08:35:08.144228 master-0 kubenswrapper[4041]: I0320 08:35:08.144183 4041 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Mar 20 08:35:08.144502 master-0 kubenswrapper[4041]: I0320 08:35:08.144468 4041 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 20 08:35:08.144658 master-0 kubenswrapper[4041]: I0320 08:35:08.144593 4041 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 20 08:35:08.144771 master-0 kubenswrapper[4041]: I0320 08:35:08.144735 4041 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Mar 20 08:35:08.145042 master-0 kubenswrapper[4041]: I0320 08:35:08.144984 4041 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Mar 20 08:35:08.146684 master-0 kubenswrapper[4041]: I0320 08:35:08.146648 4041 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 20 08:35:08.151640 master-0 kubenswrapper[4041]: I0320 08:35:08.151589 4041 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hdw98"] Mar 20 08:35:08.151888 master-0 kubenswrapper[4041]: I0320 08:35:08.151838 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hdw98" Mar 20 08:35:08.152442 master-0 kubenswrapper[4041]: I0320 08:35:08.152406 4041 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-ntdqc"] Mar 20 08:35:08.152635 master-0 kubenswrapper[4041]: I0320 08:35:08.152615 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-ntdqc" Mar 20 08:35:08.154944 master-0 kubenswrapper[4041]: I0320 08:35:08.153326 4041 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-b5lg6"] Mar 20 08:35:08.154944 master-0 kubenswrapper[4041]: I0320 08:35:08.153998 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-b5lg6" Mar 20 08:35:08.154944 master-0 kubenswrapper[4041]: I0320 08:35:08.154444 4041 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-cgc9q"] Mar 20 08:35:08.154944 master-0 kubenswrapper[4041]: I0320 08:35:08.154733 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-cgc9q" Mar 20 08:35:08.154944 master-0 kubenswrapper[4041]: I0320 08:35:08.154887 4041 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-6f69995874-b25f2"] Mar 20 08:35:08.155506 master-0 kubenswrapper[4041]: I0320 08:35:08.155456 4041 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-b25f2" Mar 20 08:35:08.167315 master-0 kubenswrapper[4041]: I0320 08:35:08.164052 4041 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zxgdk"] Mar 20 08:35:08.167315 master-0 kubenswrapper[4041]: I0320 08:35:08.165394 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zxgdk" Mar 20 08:35:08.167315 master-0 kubenswrapper[4041]: I0320 08:35:08.166544 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dkgv\" (UniqueName: \"kubernetes.io/projected/5707066a-bd66-41bc-8cea-cff1630ab5ee-kube-api-access-2dkgv\") pod \"cluster-monitoring-operator-58845fbb57-6vgt6\" (UID: \"5707066a-bd66-41bc-8cea-cff1630ab5ee\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-6vgt6" Mar 20 08:35:08.167315 master-0 kubenswrapper[4041]: I0320 08:35:08.166621 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qqcw\" (UniqueName: \"kubernetes.io/projected/22f85e98-eb36-46b2-ab5d-7c21e060cba5-kube-api-access-8qqcw\") pod \"ingress-operator-66b84d69b-dknxr\" (UID: \"22f85e98-eb36-46b2-ab5d-7c21e060cba5\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-dknxr" Mar 20 08:35:08.167315 master-0 kubenswrapper[4041]: I0320 08:35:08.166692 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xv94\" (UniqueName: \"kubernetes.io/projected/09a5682c-4f13-4b8c-8179-3e6dfa8f98db-kube-api-access-8xv94\") pod \"service-ca-operator-b865698dc-zxwrd\" (UID: \"09a5682c-4f13-4b8c-8179-3e6dfa8f98db\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-zxwrd" Mar 20 08:35:08.167315 master-0 
kubenswrapper[4041]: I0320 08:35:08.166743 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09a5682c-4f13-4b8c-8179-3e6dfa8f98db-serving-cert\") pod \"service-ca-operator-b865698dc-zxwrd\" (UID: \"09a5682c-4f13-4b8c-8179-3e6dfa8f98db\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-zxwrd" Mar 20 08:35:08.167315 master-0 kubenswrapper[4041]: I0320 08:35:08.166786 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/22f85e98-eb36-46b2-ab5d-7c21e060cba5-metrics-tls\") pod \"ingress-operator-66b84d69b-dknxr\" (UID: \"22f85e98-eb36-46b2-ab5d-7c21e060cba5\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-dknxr" Mar 20 08:35:08.167315 master-0 kubenswrapper[4041]: I0320 08:35:08.166832 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/22f85e98-eb36-46b2-ab5d-7c21e060cba5-trusted-ca\") pod \"ingress-operator-66b84d69b-dknxr\" (UID: \"22f85e98-eb36-46b2-ab5d-7c21e060cba5\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-dknxr" Mar 20 08:35:08.167315 master-0 kubenswrapper[4041]: I0320 08:35:08.166897 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/22f85e98-eb36-46b2-ab5d-7c21e060cba5-bound-sa-token\") pod \"ingress-operator-66b84d69b-dknxr\" (UID: \"22f85e98-eb36-46b2-ab5d-7c21e060cba5\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-dknxr" Mar 20 08:35:08.167315 master-0 kubenswrapper[4041]: I0320 08:35:08.166946 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/5707066a-bd66-41bc-8cea-cff1630ab5ee-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-6vgt6\" (UID: \"5707066a-bd66-41bc-8cea-cff1630ab5ee\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-6vgt6" Mar 20 08:35:08.167315 master-0 kubenswrapper[4041]: I0320 08:35:08.167037 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/5707066a-bd66-41bc-8cea-cff1630ab5ee-telemetry-config\") pod \"cluster-monitoring-operator-58845fbb57-6vgt6\" (UID: \"5707066a-bd66-41bc-8cea-cff1630ab5ee\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-6vgt6" Mar 20 08:35:08.167315 master-0 kubenswrapper[4041]: I0320 08:35:08.167089 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09a5682c-4f13-4b8c-8179-3e6dfa8f98db-config\") pod \"service-ca-operator-b865698dc-zxwrd\" (UID: \"09a5682c-4f13-4b8c-8179-3e6dfa8f98db\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-zxwrd" Mar 20 08:35:08.170257 master-0 kubenswrapper[4041]: I0320 08:35:08.170203 4041 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt" Mar 20 08:35:08.170690 master-0 kubenswrapper[4041]: I0320 08:35:08.170626 4041 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 20 08:35:08.170799 master-0 kubenswrapper[4041]: I0320 08:35:08.170763 4041 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 20 08:35:08.170958 master-0 kubenswrapper[4041]: I0320 08:35:08.170913 4041 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-authentication-operator/authentication-operator-5885bfd7f4-tdpfq"] Mar 20 08:35:08.171065 master-0 kubenswrapper[4041]: I0320 08:35:08.170973 4041 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 20 08:35:08.171218 master-0 kubenswrapper[4041]: I0320 08:35:08.171178 4041 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 20 08:35:08.171621 master-0 kubenswrapper[4041]: I0320 08:35:08.171570 4041 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 20 08:35:08.185978 master-0 kubenswrapper[4041]: I0320 08:35:08.185923 4041 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images" Mar 20 08:35:08.186433 master-0 kubenswrapper[4041]: I0320 08:35:08.186402 4041 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt" Mar 20 08:35:08.186620 master-0 kubenswrapper[4041]: I0320 08:35:08.186592 4041 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 20 08:35:08.187110 master-0 kubenswrapper[4041]: I0320 08:35:08.187061 4041 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-gwtrl"] Mar 20 08:35:08.187867 master-0 kubenswrapper[4041]: I0320 08:35:08.187839 4041 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-8544cbcf9c-7x9vq"] Mar 20 08:35:08.188361 master-0 kubenswrapper[4041]: I0320 08:35:08.188334 4041 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-89ccd998f-j84r8"] Mar 20 08:35:08.188841 master-0 kubenswrapper[4041]: I0320 08:35:08.188815 4041 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-9c5679d8f-xfns6"] Mar 20 08:35:08.189573 master-0 kubenswrapper[4041]: I0320 08:35:08.189546 4041 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5c9796789-t926t"] Mar 20 08:35:08.190030 master-0 kubenswrapper[4041]: I0320 08:35:08.188359 4041 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 20 08:35:08.190133 master-0 kubenswrapper[4041]: I0320 08:35:08.190092 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-gwtrl" Mar 20 08:35:08.190279 master-0 kubenswrapper[4041]: I0320 08:35:08.190229 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-7x9vq" Mar 20 08:35:08.190372 master-0 kubenswrapper[4041]: I0320 08:35:08.188496 4041 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert" Mar 20 08:35:08.190424 master-0 kubenswrapper[4041]: I0320 08:35:08.189897 4041 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 20 08:35:08.190484 master-0 kubenswrapper[4041]: I0320 08:35:08.189988 4041 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Mar 20 08:35:08.190577 master-0 kubenswrapper[4041]: I0320 08:35:08.190559 4041 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-9c5679d8f-xfns6" Mar 20 08:35:08.190658 master-0 kubenswrapper[4041]: I0320 08:35:08.190630 4041 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-5549dc66cb-cg8qr"] Mar 20 08:35:08.190724 master-0 kubenswrapper[4041]: I0320 08:35:08.190573 4041 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 20 08:35:08.191325 master-0 kubenswrapper[4041]: I0320 08:35:08.191305 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-89ccd998f-j84r8" Mar 20 08:35:08.191478 master-0 kubenswrapper[4041]: I0320 08:35:08.191459 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-tdpfq" Mar 20 08:35:08.195533 master-0 kubenswrapper[4041]: I0320 08:35:08.194844 4041 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 20 08:35:08.195533 master-0 kubenswrapper[4041]: I0320 08:35:08.194980 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-t926t" Mar 20 08:35:08.195792 master-0 kubenswrapper[4041]: I0320 08:35:08.195744 4041 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-cg8qr" Mar 20 08:35:08.196124 master-0 kubenswrapper[4041]: I0320 08:35:08.196089 4041 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 20 08:35:08.196124 master-0 kubenswrapper[4041]: I0320 08:35:08.196110 4041 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Mar 20 08:35:08.196255 master-0 kubenswrapper[4041]: I0320 08:35:08.196203 4041 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Mar 20 08:35:08.196255 master-0 kubenswrapper[4041]: I0320 08:35:08.196221 4041 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Mar 20 08:35:08.196384 master-0 kubenswrapper[4041]: I0320 08:35:08.196313 4041 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Mar 20 08:35:08.196384 master-0 kubenswrapper[4041]: I0320 08:35:08.196334 4041 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 20 08:35:08.196476 master-0 kubenswrapper[4041]: I0320 08:35:08.196404 4041 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Mar 20 08:35:08.196476 master-0 kubenswrapper[4041]: I0320 08:35:08.196411 4041 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Mar 20 08:35:08.196476 master-0 kubenswrapper[4041]: I0320 08:35:08.196474 4041 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy" Mar 20 08:35:08.196697 master-0 
kubenswrapper[4041]: I0320 08:35:08.196488 4041 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls" Mar 20 08:35:08.197564 master-0 kubenswrapper[4041]: I0320 08:35:08.197538 4041 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 20 08:35:08.197839 master-0 kubenswrapper[4041]: I0320 08:35:08.197813 4041 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-w8c24"] Mar 20 08:35:08.199323 master-0 kubenswrapper[4041]: I0320 08:35:08.198439 4041 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-vmwqt"] Mar 20 08:35:08.199826 master-0 kubenswrapper[4041]: I0320 08:35:08.199800 4041 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-95bf4f4d-25jrp"] Mar 20 08:35:08.200422 master-0 kubenswrapper[4041]: I0320 08:35:08.200380 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-w8c24" Mar 20 08:35:08.200667 master-0 kubenswrapper[4041]: I0320 08:35:08.200643 4041 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-wbfrm"] Mar 20 08:35:08.200828 master-0 kubenswrapper[4041]: I0320 08:35:08.200799 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-vmwqt" Mar 20 08:35:08.201207 master-0 kubenswrapper[4041]: I0320 08:35:08.201178 4041 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-25jrp" Mar 20 08:35:08.201502 master-0 kubenswrapper[4041]: I0320 08:35:08.201477 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-wbfrm" Mar 20 08:35:08.202214 master-0 kubenswrapper[4041]: I0320 08:35:08.202193 4041 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-kxxrs"] Mar 20 08:35:08.202692 master-0 kubenswrapper[4041]: I0320 08:35:08.202669 4041 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-xwkzx"] Mar 20 08:35:08.208824 master-0 kubenswrapper[4041]: I0320 08:35:08.203192 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-xwkzx" Mar 20 08:35:08.209904 master-0 kubenswrapper[4041]: I0320 08:35:08.203198 4041 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-5dbbb8b86f-vhrdf"] Mar 20 08:35:08.210811 master-0 kubenswrapper[4041]: I0320 08:35:08.210778 4041 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-66b84d69b-dknxr"] Mar 20 08:35:08.210949 master-0 kubenswrapper[4041]: I0320 08:35:08.210930 4041 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-58845fbb57-6vgt6"] Mar 20 08:35:08.211091 master-0 kubenswrapper[4041]: I0320 08:35:08.211071 4041 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-b865698dc-zxwrd"] Mar 20 08:35:08.211201 master-0 kubenswrapper[4041]: I0320 08:35:08.211182 4041 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hdw98"] Mar 20 08:35:08.211446 master-0 kubenswrapper[4041]: I0320 08:35:08.205218 4041 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Mar 20 08:35:08.211940 master-0 kubenswrapper[4041]: I0320 08:35:08.211904 4041 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 20 08:35:08.212126 master-0 kubenswrapper[4041]: I0320 08:35:08.208038 4041 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 20 08:35:08.212205 master-0 kubenswrapper[4041]: I0320 08:35:08.203245 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-kxxrs" Mar 20 08:35:08.212466 master-0 kubenswrapper[4041]: I0320 08:35:08.212436 4041 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-vhrdf" Mar 20 08:35:08.212638 master-0 kubenswrapper[4041]: I0320 08:35:08.208081 4041 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 20 08:35:08.212838 master-0 kubenswrapper[4041]: I0320 08:35:08.208201 4041 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 20 08:35:08.212915 master-0 kubenswrapper[4041]: I0320 08:35:08.208305 4041 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Mar 20 08:35:08.213290 master-0 kubenswrapper[4041]: I0320 08:35:08.213235 4041 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 20 08:35:08.213601 master-0 kubenswrapper[4041]: I0320 08:35:08.213572 4041 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 20 08:35:08.213681 master-0 kubenswrapper[4041]: I0320 08:35:08.213668 4041 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 20 08:35:08.213776 master-0 kubenswrapper[4041]: I0320 08:35:08.213749 4041 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 20 08:35:08.213837 master-0 kubenswrapper[4041]: I0320 08:35:08.213777 4041 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 20 08:35:08.213837 master-0 kubenswrapper[4041]: I0320 08:35:08.213830 4041 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Mar 20 08:35:08.213932 master-0 kubenswrapper[4041]: I0320 08:35:08.213883 4041 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 20 08:35:08.216219 master-0 kubenswrapper[4041]: I0320 08:35:08.216165 4041 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Mar 20 08:35:08.216372 master-0 kubenswrapper[4041]: I0320 08:35:08.216225 4041 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Mar 20 08:35:08.216372 master-0 kubenswrapper[4041]: I0320 08:35:08.216306 4041 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Mar 20 08:35:08.216510 master-0 kubenswrapper[4041]: I0320 08:35:08.216387 4041 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 20 08:35:08.216816 master-0 kubenswrapper[4041]: I0320 08:35:08.216784 4041 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 20 08:35:08.216911 master-0 kubenswrapper[4041]: I0320 08:35:08.216895 4041 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 20 08:35:08.217152 master-0 kubenswrapper[4041]: I0320 08:35:08.217128 4041 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 20 08:35:08.217405 master-0 kubenswrapper[4041]: I0320 08:35:08.217379 4041 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 20 08:35:08.217525 master-0 kubenswrapper[4041]: I0320 08:35:08.217501 4041 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 20 08:35:08.217626 
master-0 kubenswrapper[4041]: I0320 08:35:08.217603 4041 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 20 08:35:08.217722 master-0 kubenswrapper[4041]: I0320 08:35:08.217697 4041 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 20 08:35:08.217781 master-0 kubenswrapper[4041]: I0320 08:35:08.217741 4041 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 20 08:35:08.217899 master-0 kubenswrapper[4041]: I0320 08:35:08.217875 4041 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 20 08:35:08.217960 master-0 kubenswrapper[4041]: I0320 08:35:08.217948 4041 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Mar 20 08:35:08.218012 master-0 kubenswrapper[4041]: I0320 08:35:08.217974 4041 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 20 08:35:08.218075 master-0 kubenswrapper[4041]: I0320 08:35:08.218026 4041 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 20 08:35:08.218075 master-0 kubenswrapper[4041]: I0320 08:35:08.218058 4041 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Mar 20 08:35:08.218075 master-0 kubenswrapper[4041]: I0320 08:35:08.217706 4041 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 20 08:35:08.218221 master-0 kubenswrapper[4041]: I0320 08:35:08.218179 4041 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"image-registry-operator-tls" Mar 20 08:35:08.218293 master-0 kubenswrapper[4041]: I0320 08:35:08.218252 4041 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Mar 20 08:35:08.218350 master-0 kubenswrapper[4041]: I0320 08:35:08.218307 4041 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-b5lg6"] Mar 20 08:35:08.218350 master-0 kubenswrapper[4041]: I0320 08:35:08.218340 4041 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-89ccd998f-j84r8"] Mar 20 08:35:08.218461 master-0 kubenswrapper[4041]: I0320 08:35:08.218353 4041 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-cgc9q"] Mar 20 08:35:08.218639 master-0 kubenswrapper[4041]: I0320 08:35:08.218614 4041 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 20 08:35:08.228531 master-0 kubenswrapper[4041]: I0320 08:35:08.228485 4041 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 20 08:35:08.229036 master-0 kubenswrapper[4041]: I0320 08:35:08.229013 4041 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 20 08:35:08.230442 master-0 kubenswrapper[4041]: I0320 08:35:08.230418 4041 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 20 08:35:08.230771 master-0 kubenswrapper[4041]: I0320 08:35:08.230751 4041 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 20 08:35:08.231116 master-0 kubenswrapper[4041]: 
I0320 08:35:08.231096 4041 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 20 08:35:08.232368 master-0 kubenswrapper[4041]: I0320 08:35:08.232330 4041 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 20 08:35:08.233312 master-0 kubenswrapper[4041]: I0320 08:35:08.233283 4041 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 20 08:35:08.235110 master-0 kubenswrapper[4041]: I0320 08:35:08.234049 4041 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 20 08:35:08.235280 master-0 kubenswrapper[4041]: I0320 08:35:08.228680 4041 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 20 08:35:08.238777 master-0 kubenswrapper[4041]: I0320 08:35:08.238711 4041 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-5549dc66cb-cg8qr"] Mar 20 08:35:08.238777 master-0 kubenswrapper[4041]: I0320 08:35:08.238749 4041 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5c9796789-t926t"] Mar 20 08:35:08.240943 master-0 kubenswrapper[4041]: I0320 08:35:08.240833 4041 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-5885bfd7f4-tdpfq"] Mar 20 08:35:08.243480 master-0 kubenswrapper[4041]: I0320 08:35:08.243305 4041 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-9xlf2"] Mar 20 08:35:08.248169 master-0 kubenswrapper[4041]: I0320 08:35:08.248136 4041 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-ntdqc"] Mar 20 08:35:08.248169 master-0 kubenswrapper[4041]: I0320 08:35:08.248167 4041 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-vmwqt"] Mar 20 08:35:08.248393 master-0 kubenswrapper[4041]: I0320 08:35:08.248240 4041 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 20 08:35:08.248393 master-0 kubenswrapper[4041]: I0320 08:35:08.248247 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-9xlf2" Mar 20 08:35:08.250595 master-0 kubenswrapper[4041]: I0320 08:35:08.249762 4041 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zxgdk"] Mar 20 08:35:08.250595 master-0 kubenswrapper[4041]: I0320 08:35:08.249991 4041 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 20 08:35:08.251027 master-0 kubenswrapper[4041]: I0320 08:35:08.251004 4041 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-95bf4f4d-25jrp"] Mar 20 08:35:08.251824 master-0 kubenswrapper[4041]: I0320 08:35:08.251793 4041 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-5dbbb8b86f-vhrdf"] Mar 20 08:35:08.253655 master-0 kubenswrapper[4041]: I0320 08:35:08.253624 4041 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Mar 20 08:35:08.254131 master-0 kubenswrapper[4041]: I0320 08:35:08.254103 4041 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-xwkzx"] Mar 20 08:35:08.254179 master-0 kubenswrapper[4041]: I0320 
08:35:08.254136 4041 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-6f69995874-b25f2"] Mar 20 08:35:08.255310 master-0 kubenswrapper[4041]: I0320 08:35:08.255276 4041 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-9c5679d8f-xfns6"] Mar 20 08:35:08.255972 master-0 kubenswrapper[4041]: I0320 08:35:08.255942 4041 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-w8c24"] Mar 20 08:35:08.256037 master-0 kubenswrapper[4041]: I0320 08:35:08.255996 4041 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 20 08:35:08.258824 master-0 kubenswrapper[4041]: I0320 08:35:08.258789 4041 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-8544cbcf9c-7x9vq"] Mar 20 08:35:08.259351 master-0 kubenswrapper[4041]: I0320 08:35:08.259324 4041 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-gwtrl"] Mar 20 08:35:08.261609 master-0 kubenswrapper[4041]: I0320 08:35:08.261570 4041 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-wbfrm"] Mar 20 08:35:08.263598 master-0 kubenswrapper[4041]: I0320 08:35:08.263564 4041 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-kxxrs"] Mar 20 08:35:08.267651 master-0 kubenswrapper[4041]: I0320 08:35:08.267606 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/e9425526-9f51-4302-a19d-a8107f56c582-operand-assets\") pod \"cluster-olm-operator-67dcd4998-gwtrl\" (UID: 
\"e9425526-9f51-4302-a19d-a8107f56c582\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-gwtrl" Mar 20 08:35:08.267651 master-0 kubenswrapper[4041]: I0320 08:35:08.267647 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ff2dfe9d-2834-43cb-b093-0831b2b87131-metrics-tls\") pod \"dns-operator-9c5679d8f-xfns6\" (UID: \"ff2dfe9d-2834-43cb-b093-0831b2b87131\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-xfns6" Mar 20 08:35:08.267823 master-0 kubenswrapper[4041]: I0320 08:35:08.267677 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzprw\" (UniqueName: \"kubernetes.io/projected/61ab4d32-c732-4be5-aa85-a2e1dd21cb60-kube-api-access-lzprw\") pod \"openshift-controller-manager-operator-8c94f4649-w8c24\" (UID: \"61ab4d32-c732-4be5-aa85-a2e1dd21cb60\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-w8c24" Mar 20 08:35:08.267823 master-0 kubenswrapper[4041]: I0320 08:35:08.267702 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/f202273a-b111-46ce-b404-7e481d2c7ff9-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-b25f2\" (UID: \"f202273a-b111-46ce-b404-7e481d2c7ff9\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-b25f2" Mar 20 08:35:08.267823 master-0 kubenswrapper[4041]: I0320 08:35:08.267739 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/6d26f719-43b9-4c1c-9a54-ff800177db68-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-zxgdk\" (UID: \"6d26f719-43b9-4c1c-9a54-ff800177db68\") " 
pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zxgdk" Mar 20 08:35:08.267823 master-0 kubenswrapper[4041]: I0320 08:35:08.267765 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71ca96e8-5108-455c-bb3c-17977d38e912-config\") pod \"kube-controller-manager-operator-ff989d6cc-wbfrm\" (UID: \"71ca96e8-5108-455c-bb3c-17977d38e912\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-wbfrm" Mar 20 08:35:08.267823 master-0 kubenswrapper[4041]: I0320 08:35:08.267788 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fec3170d-3f3e-42f5-b20a-da53721c0dac-serving-cert\") pod \"etcd-operator-8544cbcf9c-7x9vq\" (UID: \"fec3170d-3f3e-42f5-b20a-da53721c0dac\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-7x9vq" Mar 20 08:35:08.267823 master-0 kubenswrapper[4041]: I0320 08:35:08.267812 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdsv9\" (UniqueName: \"kubernetes.io/projected/0e79950f-50a5-46ec-b836-7a35dcce2851-kube-api-access-rdsv9\") pod \"package-server-manager-7b95f86987-cgc9q\" (UID: \"0e79950f-50a5-46ec-b836-7a35dcce2851\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-cgc9q" Mar 20 08:35:08.268039 master-0 kubenswrapper[4041]: I0320 08:35:08.267835 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/e9425526-9f51-4302-a19d-a8107f56c582-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-67dcd4998-gwtrl\" (UID: \"e9425526-9f51-4302-a19d-a8107f56c582\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-gwtrl" Mar 20 08:35:08.268039 
master-0 kubenswrapper[4041]: I0320 08:35:08.267857 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/57189f7c-5987-457d-a299-0a6b9bcb3e24-trusted-ca\") pod \"cluster-image-registry-operator-5549dc66cb-cg8qr\" (UID: \"57189f7c-5987-457d-a299-0a6b9bcb3e24\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-cg8qr"
Mar 20 08:35:08.268039 master-0 kubenswrapper[4041]: I0320 08:35:08.267886 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09a5682c-4f13-4b8c-8179-3e6dfa8f98db-config\") pod \"service-ca-operator-b865698dc-zxwrd\" (UID: \"09a5682c-4f13-4b8c-8179-3e6dfa8f98db\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-zxwrd"
Mar 20 08:35:08.268039 master-0 kubenswrapper[4041]: I0320 08:35:08.267910 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/fec3170d-3f3e-42f5-b20a-da53721c0dac-etcd-service-ca\") pod \"etcd-operator-8544cbcf9c-7x9vq\" (UID: \"fec3170d-3f3e-42f5-b20a-da53721c0dac\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-7x9vq"
Mar 20 08:35:08.268039 master-0 kubenswrapper[4041]: I0320 08:35:08.267935 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/23003a2f-2053-47cc-8133-23eb886d4da0-marketplace-trusted-ca\") pod \"marketplace-operator-89ccd998f-j84r8\" (UID: \"23003a2f-2053-47cc-8133-23eb886d4da0\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-j84r8"
Mar 20 08:35:08.268039 master-0 kubenswrapper[4041]: I0320 08:35:08.267959 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/23003a2f-2053-47cc-8133-23eb886d4da0-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-j84r8\" (UID: \"23003a2f-2053-47cc-8133-23eb886d4da0\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-j84r8"
Mar 20 08:35:08.268039 master-0 kubenswrapper[4041]: I0320 08:35:08.267982 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20ff930f-ec0d-40ed-a879-1546691f685d-serving-cert\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-kxxrs\" (UID: \"20ff930f-ec0d-40ed-a879-1546691f685d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-kxxrs"
Mar 20 08:35:08.268039 master-0 kubenswrapper[4041]: I0320 08:35:08.268005 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072-service-ca-bundle\") pod \"authentication-operator-5885bfd7f4-tdpfq\" (UID: \"8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-tdpfq"
Mar 20 08:35:08.268039 master-0 kubenswrapper[4041]: I0320 08:35:08.268031 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6d26f719-43b9-4c1c-9a54-ff800177db68-trusted-ca\") pod \"cluster-node-tuning-operator-598fbc5f8f-zxgdk\" (UID: \"6d26f719-43b9-4c1c-9a54-ff800177db68\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zxgdk"
Mar 20 08:35:08.268378 master-0 kubenswrapper[4041]: I0320 08:35:08.268055 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61ab4d32-c732-4be5-aa85-a2e1dd21cb60-config\") pod \"openshift-controller-manager-operator-8c94f4649-w8c24\" (UID: \"61ab4d32-c732-4be5-aa85-a2e1dd21cb60\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-w8c24"
Mar 20 08:35:08.268378 master-0 kubenswrapper[4041]: I0320 08:35:08.268082 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/74bebf0b-6727-4959-8239-a9389e630524-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-vhrdf\" (UID: \"74bebf0b-6727-4959-8239-a9389e630524\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-vhrdf"
Mar 20 08:35:08.268378 master-0 kubenswrapper[4041]: I0320 08:35:08.268105 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f92mb\" (UniqueName: \"kubernetes.io/projected/74bebf0b-6727-4959-8239-a9389e630524-kube-api-access-f92mb\") pod \"multus-admission-controller-5dbbb8b86f-vhrdf\" (UID: \"74bebf0b-6727-4959-8239-a9389e630524\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-vhrdf"
Mar 20 08:35:08.268378 master-0 kubenswrapper[4041]: I0320 08:35:08.268129 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61ab4d32-c732-4be5-aa85-a2e1dd21cb60-serving-cert\") pod \"openshift-controller-manager-operator-8c94f4649-w8c24\" (UID: \"61ab4d32-c732-4be5-aa85-a2e1dd21cb60\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-w8c24"
Mar 20 08:35:08.268378 master-0 kubenswrapper[4041]: I0320 08:35:08.268151 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/fec3170d-3f3e-42f5-b20a-da53721c0dac-etcd-ca\") pod \"etcd-operator-8544cbcf9c-7x9vq\" (UID: \"fec3170d-3f3e-42f5-b20a-da53721c0dac\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-7x9vq"
Mar 20 08:35:08.268378 master-0 kubenswrapper[4041]: I0320 08:35:08.268175 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65157a9b-3df7-4cc1-a85a-a5dfa59921ad-serving-cert\") pod \"openshift-kube-scheduler-operator-dddff6458-vmwqt\" (UID: \"65157a9b-3df7-4cc1-a85a-a5dfa59921ad\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-vmwqt"
Mar 20 08:35:08.268378 master-0 kubenswrapper[4041]: I0320 08:35:08.268199 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/3065e4b4-4493-41ce-b9d2-89315475f74f-available-featuregates\") pod \"openshift-config-operator-95bf4f4d-25jrp\" (UID: \"3065e4b4-4493-41ce-b9d2-89315475f74f\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-25jrp"
Mar 20 08:35:08.268378 master-0 kubenswrapper[4041]: I0320 08:35:08.268221 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2faf85a2-29bb-4275-a12b-0ef1663a4f0d-serving-cert\") pod \"kube-apiserver-operator-8b68b9d9b-xwkzx\" (UID: \"2faf85a2-29bb-4275-a12b-0ef1663a4f0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-xwkzx"
Mar 20 08:35:08.268378 master-0 kubenswrapper[4041]: I0320 08:35:08.268245 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072-trusted-ca-bundle\") pod \"authentication-operator-5885bfd7f4-tdpfq\" (UID: \"8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-tdpfq"
Mar 20 08:35:08.268378 master-0 kubenswrapper[4041]: I0320 08:35:08.268288 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2faf85a2-29bb-4275-a12b-0ef1663a4f0d-kube-api-access\") pod \"kube-apiserver-operator-8b68b9d9b-xwkzx\" (UID: \"2faf85a2-29bb-4275-a12b-0ef1663a4f0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-xwkzx"
Mar 20 08:35:08.268378 master-0 kubenswrapper[4041]: I0320 08:35:08.268315 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09a5682c-4f13-4b8c-8179-3e6dfa8f98db-serving-cert\") pod \"service-ca-operator-b865698dc-zxwrd\" (UID: \"09a5682c-4f13-4b8c-8179-3e6dfa8f98db\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-zxwrd"
Mar 20 08:35:08.268378 master-0 kubenswrapper[4041]: I0320 08:35:08.268337 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/22f85e98-eb36-46b2-ab5d-7c21e060cba5-metrics-tls\") pod \"ingress-operator-66b84d69b-dknxr\" (UID: \"22f85e98-eb36-46b2-ab5d-7c21e060cba5\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-dknxr"
Mar 20 08:35:08.268378 master-0 kubenswrapper[4041]: I0320 08:35:08.268364 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/b097596e-79e1-44d1-be8a-96340042a041-iptables-alerter-script\") pod \"iptables-alerter-9xlf2\" (UID: \"b097596e-79e1-44d1-be8a-96340042a041\") " pod="openshift-network-operator/iptables-alerter-9xlf2"
Mar 20 08:35:08.268849 master-0 kubenswrapper[4041]: I0320 08:35:08.268391 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ef691ec-d1f0-4c97-97e4-4aa7a6c0a86c-config\") pod \"openshift-apiserver-operator-d65958b8-ntdqc\" (UID: \"2ef691ec-d1f0-4c97-97e4-4aa7a6c0a86c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-ntdqc"
Mar 20 08:35:08.268849 master-0 kubenswrapper[4041]: I0320 08:35:08.268607 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgl8m\" (UniqueName: \"kubernetes.io/projected/2ef691ec-d1f0-4c97-97e4-4aa7a6c0a86c-kube-api-access-rgl8m\") pod \"openshift-apiserver-operator-d65958b8-ntdqc\" (UID: \"2ef691ec-d1f0-4c97-97e4-4aa7a6c0a86c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-ntdqc"
Mar 20 08:35:08.268849 master-0 kubenswrapper[4041]: E0320 08:35:08.268635 4041 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Mar 20 08:35:08.268849 master-0 kubenswrapper[4041]: I0320 08:35:08.268686 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f202273a-b111-46ce-b404-7e481d2c7ff9-config\") pod \"cluster-baremetal-operator-6f69995874-b25f2\" (UID: \"f202273a-b111-46ce-b404-7e481d2c7ff9\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-b25f2"
Mar 20 08:35:08.268849 master-0 kubenswrapper[4041]: E0320 08:35:08.268722 4041 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/22f85e98-eb36-46b2-ab5d-7c21e060cba5-metrics-tls podName:22f85e98-eb36-46b2-ab5d-7c21e060cba5 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:08.768702207 +0000 UTC m=+116.019047802 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/22f85e98-eb36-46b2-ab5d-7c21e060cba5-metrics-tls") pod "ingress-operator-66b84d69b-dknxr" (UID: "22f85e98-eb36-46b2-ab5d-7c21e060cba5") : secret "metrics-tls" not found
Mar 20 08:35:08.268849 master-0 kubenswrapper[4041]: I0320 08:35:08.268758 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/5707066a-bd66-41bc-8cea-cff1630ab5ee-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-6vgt6\" (UID: \"5707066a-bd66-41bc-8cea-cff1630ab5ee\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-6vgt6"
Mar 20 08:35:08.268849 master-0 kubenswrapper[4041]: I0320 08:35:08.268785 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v86j8\" (UniqueName: \"kubernetes.io/projected/6d26f719-43b9-4c1c-9a54-ff800177db68-kube-api-access-v86j8\") pod \"cluster-node-tuning-operator-598fbc5f8f-zxgdk\" (UID: \"6d26f719-43b9-4c1c-9a54-ff800177db68\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zxgdk"
Mar 20 08:35:08.269112 master-0 kubenswrapper[4041]: I0320 08:35:08.269069 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/57189f7c-5987-457d-a299-0a6b9bcb3e24-bound-sa-token\") pod \"cluster-image-registry-operator-5549dc66cb-cg8qr\" (UID: \"57189f7c-5987-457d-a299-0a6b9bcb3e24\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-cg8qr"
Mar 20 08:35:08.269154 master-0 kubenswrapper[4041]: I0320 08:35:08.269141 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcgqr\" (UniqueName: \"kubernetes.io/projected/acbaba45-12d9-40b9-818c-4b091d7929b1-kube-api-access-kcgqr\") pod \"csi-snapshot-controller-operator-5f5d689c6b-b5lg6\" (UID: \"acbaba45-12d9-40b9-818c-4b091d7929b1\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-b5lg6"
Mar 20 08:35:08.269227 master-0 kubenswrapper[4041]: E0320 08:35:08.269190 4041 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Mar 20 08:35:08.269227 master-0 kubenswrapper[4041]: I0320 08:35:08.269200 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6d26f719-43b9-4c1c-9a54-ff800177db68-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-zxgdk\" (UID: \"6d26f719-43b9-4c1c-9a54-ff800177db68\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zxgdk"
Mar 20 08:35:08.269428 master-0 kubenswrapper[4041]: I0320 08:35:08.269401 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/0e79950f-50a5-46ec-b836-7a35dcce2851-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-cgc9q\" (UID: \"0e79950f-50a5-46ec-b836-7a35dcce2851\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-cgc9q"
Mar 20 08:35:08.269503 master-0 kubenswrapper[4041]: E0320 08:35:08.269485 4041 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5707066a-bd66-41bc-8cea-cff1630ab5ee-cluster-monitoring-operator-tls podName:5707066a-bd66-41bc-8cea-cff1630ab5ee nodeName:}" failed. No retries permitted until 2026-03-20 08:35:08.769397494 +0000 UTC m=+116.019743079 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/5707066a-bd66-41bc-8cea-cff1630ab5ee-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-6vgt6" (UID: "5707066a-bd66-41bc-8cea-cff1630ab5ee") : secret "cluster-monitoring-operator-tls" not found
Mar 20 08:35:08.269598 master-0 kubenswrapper[4041]: I0320 08:35:08.269579 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r8zt\" (UniqueName: \"kubernetes.io/projected/57189f7c-5987-457d-a299-0a6b9bcb3e24-kube-api-access-5r8zt\") pod \"cluster-image-registry-operator-5549dc66cb-cg8qr\" (UID: \"57189f7c-5987-457d-a299-0a6b9bcb3e24\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-cg8qr"
Mar 20 08:35:08.269703 master-0 kubenswrapper[4041]: I0320 08:35:08.269684 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/5707066a-bd66-41bc-8cea-cff1630ab5ee-telemetry-config\") pod \"cluster-monitoring-operator-58845fbb57-6vgt6\" (UID: \"5707066a-bd66-41bc-8cea-cff1630ab5ee\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-6vgt6"
Mar 20 08:35:08.269810 master-0 kubenswrapper[4041]: I0320 08:35:08.269795 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65157a9b-3df7-4cc1-a85a-a5dfa59921ad-config\") pod \"openshift-kube-scheduler-operator-dddff6458-vmwqt\" (UID: \"65157a9b-3df7-4cc1-a85a-a5dfa59921ad\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-vmwqt"
Mar 20 08:35:08.269898 master-0 kubenswrapper[4041]: I0320 08:35:08.269883 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7ab32efc-7cc5-4e36-9c1c-05efb19914e2-srv-cert\") pod \"olm-operator-5c9796789-t926t\" (UID: \"7ab32efc-7cc5-4e36-9c1c-05efb19914e2\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-t926t"
Mar 20 08:35:08.269987 master-0 kubenswrapper[4041]: I0320 08:35:08.269972 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f202273a-b111-46ce-b404-7e481d2c7ff9-images\") pod \"cluster-baremetal-operator-6f69995874-b25f2\" (UID: \"f202273a-b111-46ce-b404-7e481d2c7ff9\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-b25f2"
Mar 20 08:35:08.270069 master-0 kubenswrapper[4041]: I0320 08:35:08.270055 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b097596e-79e1-44d1-be8a-96340042a041-host-slash\") pod \"iptables-alerter-9xlf2\" (UID: \"b097596e-79e1-44d1-be8a-96340042a041\") " pod="openshift-network-operator/iptables-alerter-9xlf2"
Mar 20 08:35:08.270166 master-0 kubenswrapper[4041]: I0320 08:35:08.270150 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnk9k\" (UniqueName: \"kubernetes.io/projected/8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072-kube-api-access-hnk9k\") pod \"authentication-operator-5885bfd7f4-tdpfq\" (UID: \"8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-tdpfq"
Mar 20 08:35:08.270279 master-0 kubenswrapper[4041]: I0320 08:35:08.270245 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ef691ec-d1f0-4c97-97e4-4aa7a6c0a86c-serving-cert\") pod \"openshift-apiserver-operator-d65958b8-ntdqc\" (UID: \"2ef691ec-d1f0-4c97-97e4-4aa7a6c0a86c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-ntdqc"
Mar 20 08:35:08.270380 master-0 kubenswrapper[4041]: I0320 08:35:08.270362 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fec3170d-3f3e-42f5-b20a-da53721c0dac-etcd-client\") pod \"etcd-operator-8544cbcf9c-7x9vq\" (UID: \"fec3170d-3f3e-42f5-b20a-da53721c0dac\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-7x9vq"
Mar 20 08:35:08.270485 master-0 kubenswrapper[4041]: I0320 08:35:08.269607 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09a5682c-4f13-4b8c-8179-3e6dfa8f98db-config\") pod \"service-ca-operator-b865698dc-zxwrd\" (UID: \"09a5682c-4f13-4b8c-8179-3e6dfa8f98db\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-zxwrd"
Mar 20 08:35:08.270546 master-0 kubenswrapper[4041]: I0320 08:35:08.270461 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5kbh\" (UniqueName: \"kubernetes.io/projected/e9425526-9f51-4302-a19d-a8107f56c582-kube-api-access-z5kbh\") pod \"cluster-olm-operator-67dcd4998-gwtrl\" (UID: \"e9425526-9f51-4302-a19d-a8107f56c582\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-gwtrl"
Mar 20 08:35:08.270644 master-0 kubenswrapper[4041]: I0320 08:35:08.270626 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71ca96e8-5108-455c-bb3c-17977d38e912-serving-cert\") pod \"kube-controller-manager-operator-ff989d6cc-wbfrm\" (UID: \"71ca96e8-5108-455c-bb3c-17977d38e912\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-wbfrm"
Mar 20 08:35:08.270738 master-0 kubenswrapper[4041]: I0320 08:35:08.270701 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/5707066a-bd66-41bc-8cea-cff1630ab5ee-telemetry-config\") pod \"cluster-monitoring-operator-58845fbb57-6vgt6\" (UID: \"5707066a-bd66-41bc-8cea-cff1630ab5ee\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-6vgt6"
Mar 20 08:35:08.270824 master-0 kubenswrapper[4041]: I0320 08:35:08.270807 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56bt6\" (UniqueName: \"kubernetes.io/projected/f202273a-b111-46ce-b404-7e481d2c7ff9-kube-api-access-56bt6\") pod \"cluster-baremetal-operator-6f69995874-b25f2\" (UID: \"f202273a-b111-46ce-b404-7e481d2c7ff9\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-b25f2"
Mar 20 08:35:08.270981 master-0 kubenswrapper[4041]: I0320 08:35:08.270964 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dkgv\" (UniqueName: \"kubernetes.io/projected/5707066a-bd66-41bc-8cea-cff1630ab5ee-kube-api-access-2dkgv\") pod \"cluster-monitoring-operator-58845fbb57-6vgt6\" (UID: \"5707066a-bd66-41bc-8cea-cff1630ab5ee\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-6vgt6"
Mar 20 08:35:08.271174 master-0 kubenswrapper[4041]: I0320 08:35:08.271156 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3065e4b4-4493-41ce-b9d2-89315475f74f-serving-cert\") pod \"openshift-config-operator-95bf4f4d-25jrp\" (UID: \"3065e4b4-4493-41ce-b9d2-89315475f74f\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-25jrp"
Mar 20 08:35:08.271378 master-0 kubenswrapper[4041]: I0320 08:35:08.271360 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpr8b\" (UniqueName: \"kubernetes.io/projected/3065e4b4-4493-41ce-b9d2-89315475f74f-kube-api-access-wpr8b\") pod \"openshift-config-operator-95bf4f4d-25jrp\" (UID: \"3065e4b4-4493-41ce-b9d2-89315475f74f\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-25jrp"
Mar 20 08:35:08.271482 master-0 kubenswrapper[4041]: I0320 08:35:08.271465 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5v7l\" (UniqueName: \"kubernetes.io/projected/20ff930f-ec0d-40ed-a879-1546691f685d-kube-api-access-d5v7l\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-kxxrs\" (UID: \"20ff930f-ec0d-40ed-a879-1546691f685d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-kxxrs"
Mar 20 08:35:08.271570 master-0 kubenswrapper[4041]: I0320 08:35:08.271555 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072-config\") pod \"authentication-operator-5885bfd7f4-tdpfq\" (UID: \"8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-tdpfq"
Mar 20 08:35:08.271654 master-0 kubenswrapper[4041]: I0320 08:35:08.271637 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/65157a9b-3df7-4cc1-a85a-a5dfa59921ad-kube-api-access\") pod \"openshift-kube-scheduler-operator-dddff6458-vmwqt\" (UID: \"65157a9b-3df7-4cc1-a85a-a5dfa59921ad\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-vmwqt"
Mar 20 08:35:08.271896 master-0 kubenswrapper[4041]: I0320 08:35:08.271879 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsj2w\" (UniqueName: \"kubernetes.io/projected/ff2dfe9d-2834-43cb-b093-0831b2b87131-kube-api-access-zsj2w\") pod \"dns-operator-9c5679d8f-xfns6\" (UID: \"ff2dfe9d-2834-43cb-b093-0831b2b87131\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-xfns6"
Mar 20 08:35:08.272012 master-0 kubenswrapper[4041]: I0320 08:35:08.271975 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qqcw\" (UniqueName: \"kubernetes.io/projected/22f85e98-eb36-46b2-ab5d-7c21e060cba5-kube-api-access-8qqcw\") pod \"ingress-operator-66b84d69b-dknxr\" (UID: \"22f85e98-eb36-46b2-ab5d-7c21e060cba5\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-dknxr"
Mar 20 08:35:08.272745 master-0 kubenswrapper[4041]: I0320 08:35:08.272699 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072-serving-cert\") pod \"authentication-operator-5885bfd7f4-tdpfq\" (UID: \"8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-tdpfq"
Mar 20 08:35:08.272874 master-0 kubenswrapper[4041]: I0320 08:35:08.272858 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xv94\" (UniqueName: \"kubernetes.io/projected/09a5682c-4f13-4b8c-8179-3e6dfa8f98db-kube-api-access-8xv94\") pod \"service-ca-operator-b865698dc-zxwrd\" (UID: \"09a5682c-4f13-4b8c-8179-3e6dfa8f98db\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-zxwrd"
Mar 20 08:35:08.272966 master-0 kubenswrapper[4041]: I0320 08:35:08.272952 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9ce482dc-d0ac-40bc-9058-a1cfdc81575e-srv-cert\") pod \"catalog-operator-68f85b4d6c-hdw98\" (UID: \"9ce482dc-d0ac-40bc-9058-a1cfdc81575e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hdw98"
Mar 20 08:35:08.273055 master-0 kubenswrapper[4041]: I0320 08:35:08.273041 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55l9j\" (UniqueName: \"kubernetes.io/projected/7ab32efc-7cc5-4e36-9c1c-05efb19914e2-kube-api-access-55l9j\") pod \"olm-operator-5c9796789-t926t\" (UID: \"7ab32efc-7cc5-4e36-9c1c-05efb19914e2\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-t926t"
Mar 20 08:35:08.273139 master-0 kubenswrapper[4041]: I0320 08:35:08.273125 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7gdm\" (UniqueName: \"kubernetes.io/projected/23003a2f-2053-47cc-8133-23eb886d4da0-kube-api-access-q7gdm\") pod \"marketplace-operator-89ccd998f-j84r8\" (UID: \"23003a2f-2053-47cc-8133-23eb886d4da0\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-j84r8"
Mar 20 08:35:08.273232 master-0 kubenswrapper[4041]: I0320 08:35:08.273215 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqmzh\" (UniqueName: \"kubernetes.io/projected/fec3170d-3f3e-42f5-b20a-da53721c0dac-kube-api-access-tqmzh\") pod \"etcd-operator-8544cbcf9c-7x9vq\" (UID: \"fec3170d-3f3e-42f5-b20a-da53721c0dac\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-7x9vq"
Mar 20 08:35:08.273343 master-0 kubenswrapper[4041]: I0320 08:35:08.273327 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20ff930f-ec0d-40ed-a879-1546691f685d-config\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-kxxrs\" (UID: \"20ff930f-ec0d-40ed-a879-1546691f685d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-kxxrs"
Mar 20 08:35:08.273429 master-0 kubenswrapper[4041]: I0320 08:35:08.273415 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/57189f7c-5987-457d-a299-0a6b9bcb3e24-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-cg8qr\" (UID: \"57189f7c-5987-457d-a299-0a6b9bcb3e24\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-cg8qr"
Mar 20 08:35:08.273498 master-0 kubenswrapper[4041]: I0320 08:35:08.273485 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx99f\" (UniqueName: \"kubernetes.io/projected/b097596e-79e1-44d1-be8a-96340042a041-kube-api-access-dx99f\") pod \"iptables-alerter-9xlf2\" (UID: \"b097596e-79e1-44d1-be8a-96340042a041\") " pod="openshift-network-operator/iptables-alerter-9xlf2"
Mar 20 08:35:08.273573 master-0 kubenswrapper[4041]: I0320 08:35:08.273556 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/22f85e98-eb36-46b2-ab5d-7c21e060cba5-trusted-ca\") pod \"ingress-operator-66b84d69b-dknxr\" (UID: \"22f85e98-eb36-46b2-ab5d-7c21e060cba5\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-dknxr"
Mar 20 08:35:08.273654 master-0 kubenswrapper[4041]: I0320 08:35:08.273642 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9j527\" (UniqueName: \"kubernetes.io/projected/9ce482dc-d0ac-40bc-9058-a1cfdc81575e-kube-api-access-9j527\") pod \"catalog-operator-68f85b4d6c-hdw98\" (UID: \"9ce482dc-d0ac-40bc-9058-a1cfdc81575e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hdw98"
Mar 20 08:35:08.273724 master-0 kubenswrapper[4041]: I0320 08:35:08.273711 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71ca96e8-5108-455c-bb3c-17977d38e912-kube-api-access\") pod \"kube-controller-manager-operator-ff989d6cc-wbfrm\" (UID: \"71ca96e8-5108-455c-bb3c-17977d38e912\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-wbfrm"
Mar 20 08:35:08.273795 master-0 kubenswrapper[4041]: I0320 08:35:08.273784 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/22f85e98-eb36-46b2-ab5d-7c21e060cba5-bound-sa-token\") pod \"ingress-operator-66b84d69b-dknxr\" (UID: \"22f85e98-eb36-46b2-ab5d-7c21e060cba5\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-dknxr"
Mar 20 08:35:08.273859 master-0 kubenswrapper[4041]: I0320 08:35:08.273849 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f202273a-b111-46ce-b404-7e481d2c7ff9-cert\") pod \"cluster-baremetal-operator-6f69995874-b25f2\" (UID: \"f202273a-b111-46ce-b404-7e481d2c7ff9\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-b25f2"
Mar 20 08:35:08.273925 master-0 kubenswrapper[4041]: I0320 08:35:08.273915 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fec3170d-3f3e-42f5-b20a-da53721c0dac-config\") pod \"etcd-operator-8544cbcf9c-7x9vq\" (UID: \"fec3170d-3f3e-42f5-b20a-da53721c0dac\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-7x9vq"
Mar 20 08:35:08.274006 master-0 kubenswrapper[4041]: I0320 08:35:08.273993 4041 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2faf85a2-29bb-4275-a12b-0ef1663a4f0d-config\") pod \"kube-apiserver-operator-8b68b9d9b-xwkzx\" (UID: \"2faf85a2-29bb-4275-a12b-0ef1663a4f0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-xwkzx"
Mar 20 08:35:08.274877 master-0 kubenswrapper[4041]: I0320 08:35:08.274846 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/22f85e98-eb36-46b2-ab5d-7c21e060cba5-trusted-ca\") pod \"ingress-operator-66b84d69b-dknxr\" (UID: \"22f85e98-eb36-46b2-ab5d-7c21e060cba5\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-dknxr"
Mar 20 08:35:08.288299 master-0 kubenswrapper[4041]: I0320 08:35:08.283705 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09a5682c-4f13-4b8c-8179-3e6dfa8f98db-serving-cert\") pod \"service-ca-operator-b865698dc-zxwrd\" (UID: \"09a5682c-4f13-4b8c-8179-3e6dfa8f98db\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-zxwrd"
Mar 20 08:35:08.294062 master-0 kubenswrapper[4041]: I0320 08:35:08.294009 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dkgv\" (UniqueName: \"kubernetes.io/projected/5707066a-bd66-41bc-8cea-cff1630ab5ee-kube-api-access-2dkgv\") pod \"cluster-monitoring-operator-58845fbb57-6vgt6\" (UID: \"5707066a-bd66-41bc-8cea-cff1630ab5ee\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-6vgt6"
Mar 20 08:35:08.297064 master-0 kubenswrapper[4041]: I0320 08:35:08.295485 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/22f85e98-eb36-46b2-ab5d-7c21e060cba5-bound-sa-token\") pod \"ingress-operator-66b84d69b-dknxr\" (UID: \"22f85e98-eb36-46b2-ab5d-7c21e060cba5\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-dknxr"
Mar 20 08:35:08.297064 master-0 kubenswrapper[4041]: I0320 08:35:08.296110 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xv94\" (UniqueName: \"kubernetes.io/projected/09a5682c-4f13-4b8c-8179-3e6dfa8f98db-kube-api-access-8xv94\") pod \"service-ca-operator-b865698dc-zxwrd\" (UID: \"09a5682c-4f13-4b8c-8179-3e6dfa8f98db\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-zxwrd"
Mar 20 08:35:08.298552 master-0 kubenswrapper[4041]: I0320 08:35:08.298464 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qqcw\" (UniqueName: \"kubernetes.io/projected/22f85e98-eb36-46b2-ab5d-7c21e060cba5-kube-api-access-8qqcw\") pod \"ingress-operator-66b84d69b-dknxr\" (UID: \"22f85e98-eb36-46b2-ab5d-7c21e060cba5\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-dknxr"
Mar 20 08:35:08.374753 master-0 kubenswrapper[4041]: I0320 08:35:08.374703 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072-trusted-ca-bundle\") pod \"authentication-operator-5885bfd7f4-tdpfq\" (UID: \"8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-tdpfq"
Mar 20 08:35:08.374753 master-0 kubenswrapper[4041]: I0320 08:35:08.374751 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2faf85a2-29bb-4275-a12b-0ef1663a4f0d-kube-api-access\") pod \"kube-apiserver-operator-8b68b9d9b-xwkzx\" (UID: \"2faf85a2-29bb-4275-a12b-0ef1663a4f0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-xwkzx"
Mar 20 08:35:08.375006 master-0 kubenswrapper[4041]: I0320 08:35:08.374961 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/b097596e-79e1-44d1-be8a-96340042a041-iptables-alerter-script\") pod \"iptables-alerter-9xlf2\" (UID: \"b097596e-79e1-44d1-be8a-96340042a041\") " pod="openshift-network-operator/iptables-alerter-9xlf2"
Mar 20 08:35:08.375075 master-0 kubenswrapper[4041]: I0320 08:35:08.375046 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ef691ec-d1f0-4c97-97e4-4aa7a6c0a86c-config\") pod \"openshift-apiserver-operator-d65958b8-ntdqc\" (UID: \"2ef691ec-d1f0-4c97-97e4-4aa7a6c0a86c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-ntdqc"
Mar 20 08:35:08.375335 master-0 kubenswrapper[4041]: I0320 08:35:08.375300 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgl8m\" (UniqueName: \"kubernetes.io/projected/2ef691ec-d1f0-4c97-97e4-4aa7a6c0a86c-kube-api-access-rgl8m\") pod \"openshift-apiserver-operator-d65958b8-ntdqc\" (UID: \"2ef691ec-d1f0-4c97-97e4-4aa7a6c0a86c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-ntdqc"
Mar 20 08:35:08.375378 master-0 kubenswrapper[4041]: I0320 08:35:08.375365 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f202273a-b111-46ce-b404-7e481d2c7ff9-config\") pod \"cluster-baremetal-operator-6f69995874-b25f2\" (UID: \"f202273a-b111-46ce-b404-7e481d2c7ff9\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-b25f2"
Mar 20 08:35:08.375424 master-0 kubenswrapper[4041]: I0320 08:35:08.375407 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v86j8\" (UniqueName: \"kubernetes.io/projected/6d26f719-43b9-4c1c-9a54-ff800177db68-kube-api-access-v86j8\") pod \"cluster-node-tuning-operator-598fbc5f8f-zxgdk\" (UID: \"6d26f719-43b9-4c1c-9a54-ff800177db68\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zxgdk"
Mar 20 08:35:08.375465 master-0 kubenswrapper[4041]: I0320 08:35:08.375449 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/57189f7c-5987-457d-a299-0a6b9bcb3e24-bound-sa-token\") pod \"cluster-image-registry-operator-5549dc66cb-cg8qr\" (UID: \"57189f7c-5987-457d-a299-0a6b9bcb3e24\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-cg8qr"
Mar 20 08:35:08.375493 master-0 kubenswrapper[4041]: I0320 08:35:08.375475 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcgqr\" (UniqueName: \"kubernetes.io/projected/acbaba45-12d9-40b9-818c-4b091d7929b1-kube-api-access-kcgqr\") pod \"csi-snapshot-controller-operator-5f5d689c6b-b5lg6\" (UID: \"acbaba45-12d9-40b9-818c-4b091d7929b1\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-b5lg6"
Mar 20 08:35:08.375522 master-0 kubenswrapper[4041]: I0320 08:35:08.375492 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6d26f719-43b9-4c1c-9a54-ff800177db68-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-zxgdk\" (UID: \"6d26f719-43b9-4c1c-9a54-ff800177db68\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zxgdk"
Mar 20 08:35:08.375522 master-0 kubenswrapper[4041]: I0320 08:35:08.375510 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/0e79950f-50a5-46ec-b836-7a35dcce2851-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-cgc9q\" (UID: \"0e79950f-50a5-46ec-b836-7a35dcce2851\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-cgc9q"
Mar 20 08:35:08.375575 master-0 kubenswrapper[4041]: I0320 08:35:08.375530 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5r8zt\" (UniqueName:
\"kubernetes.io/projected/57189f7c-5987-457d-a299-0a6b9bcb3e24-kube-api-access-5r8zt\") pod \"cluster-image-registry-operator-5549dc66cb-cg8qr\" (UID: \"57189f7c-5987-457d-a299-0a6b9bcb3e24\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-cg8qr" Mar 20 08:35:08.375575 master-0 kubenswrapper[4041]: I0320 08:35:08.375550 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65157a9b-3df7-4cc1-a85a-a5dfa59921ad-config\") pod \"openshift-kube-scheduler-operator-dddff6458-vmwqt\" (UID: \"65157a9b-3df7-4cc1-a85a-a5dfa59921ad\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-vmwqt" Mar 20 08:35:08.375575 master-0 kubenswrapper[4041]: I0320 08:35:08.375567 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7ab32efc-7cc5-4e36-9c1c-05efb19914e2-srv-cert\") pod \"olm-operator-5c9796789-t926t\" (UID: \"7ab32efc-7cc5-4e36-9c1c-05efb19914e2\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-t926t" Mar 20 08:35:08.375653 master-0 kubenswrapper[4041]: I0320 08:35:08.375582 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f202273a-b111-46ce-b404-7e481d2c7ff9-images\") pod \"cluster-baremetal-operator-6f69995874-b25f2\" (UID: \"f202273a-b111-46ce-b404-7e481d2c7ff9\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-b25f2" Mar 20 08:35:08.375653 master-0 kubenswrapper[4041]: I0320 08:35:08.375599 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b097596e-79e1-44d1-be8a-96340042a041-host-slash\") pod \"iptables-alerter-9xlf2\" (UID: \"b097596e-79e1-44d1-be8a-96340042a041\") " pod="openshift-network-operator/iptables-alerter-9xlf2" Mar 20 08:35:08.375653 master-0 
kubenswrapper[4041]: I0320 08:35:08.375618 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnk9k\" (UniqueName: \"kubernetes.io/projected/8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072-kube-api-access-hnk9k\") pod \"authentication-operator-5885bfd7f4-tdpfq\" (UID: \"8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-tdpfq" Mar 20 08:35:08.375653 master-0 kubenswrapper[4041]: I0320 08:35:08.375637 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ef691ec-d1f0-4c97-97e4-4aa7a6c0a86c-serving-cert\") pod \"openshift-apiserver-operator-d65958b8-ntdqc\" (UID: \"2ef691ec-d1f0-4c97-97e4-4aa7a6c0a86c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-ntdqc" Mar 20 08:35:08.375753 master-0 kubenswrapper[4041]: E0320 08:35:08.375731 4041 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 20 08:35:08.375806 master-0 kubenswrapper[4041]: E0320 08:35:08.375787 4041 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d26f719-43b9-4c1c-9a54-ff800177db68-apiservice-cert podName:6d26f719-43b9-4c1c-9a54-ff800177db68 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:08.875771108 +0000 UTC m=+116.126116613 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/6d26f719-43b9-4c1c-9a54-ff800177db68-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-zxgdk" (UID: "6d26f719-43b9-4c1c-9a54-ff800177db68") : secret "performance-addon-operator-webhook-cert" not found Mar 20 08:35:08.375844 master-0 kubenswrapper[4041]: E0320 08:35:08.375809 4041 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 20 08:35:08.375870 master-0 kubenswrapper[4041]: E0320 08:35:08.375848 4041 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7ab32efc-7cc5-4e36-9c1c-05efb19914e2-srv-cert podName:7ab32efc-7cc5-4e36-9c1c-05efb19914e2 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:08.875837359 +0000 UTC m=+116.126182864 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/7ab32efc-7cc5-4e36-9c1c-05efb19914e2-srv-cert") pod "olm-operator-5c9796789-t926t" (UID: "7ab32efc-7cc5-4e36-9c1c-05efb19914e2") : secret "olm-operator-serving-cert" not found Mar 20 08:35:08.375901 master-0 kubenswrapper[4041]: I0320 08:35:08.375877 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ef691ec-d1f0-4c97-97e4-4aa7a6c0a86c-config\") pod \"openshift-apiserver-operator-d65958b8-ntdqc\" (UID: \"2ef691ec-d1f0-4c97-97e4-4aa7a6c0a86c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-ntdqc" Mar 20 08:35:08.375927 master-0 kubenswrapper[4041]: I0320 08:35:08.375895 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/b097596e-79e1-44d1-be8a-96340042a041-iptables-alerter-script\") pod \"iptables-alerter-9xlf2\" (UID: \"b097596e-79e1-44d1-be8a-96340042a041\") " 
pod="openshift-network-operator/iptables-alerter-9xlf2" Mar 20 08:35:08.376084 master-0 kubenswrapper[4041]: I0320 08:35:08.375951 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072-trusted-ca-bundle\") pod \"authentication-operator-5885bfd7f4-tdpfq\" (UID: \"8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-tdpfq" Mar 20 08:35:08.376084 master-0 kubenswrapper[4041]: I0320 08:35:08.375737 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b097596e-79e1-44d1-be8a-96340042a041-host-slash\") pod \"iptables-alerter-9xlf2\" (UID: \"b097596e-79e1-44d1-be8a-96340042a041\") " pod="openshift-network-operator/iptables-alerter-9xlf2" Mar 20 08:35:08.376146 master-0 kubenswrapper[4041]: I0320 08:35:08.376001 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fec3170d-3f3e-42f5-b20a-da53721c0dac-etcd-client\") pod \"etcd-operator-8544cbcf9c-7x9vq\" (UID: \"fec3170d-3f3e-42f5-b20a-da53721c0dac\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-7x9vq" Mar 20 08:35:08.376173 master-0 kubenswrapper[4041]: I0320 08:35:08.376143 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5kbh\" (UniqueName: \"kubernetes.io/projected/e9425526-9f51-4302-a19d-a8107f56c582-kube-api-access-z5kbh\") pod \"cluster-olm-operator-67dcd4998-gwtrl\" (UID: \"e9425526-9f51-4302-a19d-a8107f56c582\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-gwtrl" Mar 20 08:35:08.376173 master-0 kubenswrapper[4041]: I0320 08:35:08.376170 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/71ca96e8-5108-455c-bb3c-17977d38e912-serving-cert\") pod \"kube-controller-manager-operator-ff989d6cc-wbfrm\" (UID: \"71ca96e8-5108-455c-bb3c-17977d38e912\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-wbfrm" Mar 20 08:35:08.376229 master-0 kubenswrapper[4041]: I0320 08:35:08.376199 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56bt6\" (UniqueName: \"kubernetes.io/projected/f202273a-b111-46ce-b404-7e481d2c7ff9-kube-api-access-56bt6\") pod \"cluster-baremetal-operator-6f69995874-b25f2\" (UID: \"f202273a-b111-46ce-b404-7e481d2c7ff9\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-b25f2" Mar 20 08:35:08.376276 master-0 kubenswrapper[4041]: I0320 08:35:08.376233 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3065e4b4-4493-41ce-b9d2-89315475f74f-serving-cert\") pod \"openshift-config-operator-95bf4f4d-25jrp\" (UID: \"3065e4b4-4493-41ce-b9d2-89315475f74f\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-25jrp" Mar 20 08:35:08.376465 master-0 kubenswrapper[4041]: I0320 08:35:08.376445 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f202273a-b111-46ce-b404-7e481d2c7ff9-images\") pod \"cluster-baremetal-operator-6f69995874-b25f2\" (UID: \"f202273a-b111-46ce-b404-7e481d2c7ff9\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-b25f2" Mar 20 08:35:08.376510 master-0 kubenswrapper[4041]: I0320 08:35:08.376446 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f202273a-b111-46ce-b404-7e481d2c7ff9-config\") pod \"cluster-baremetal-operator-6f69995874-b25f2\" (UID: \"f202273a-b111-46ce-b404-7e481d2c7ff9\") " 
pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-b25f2" Mar 20 08:35:08.376680 master-0 kubenswrapper[4041]: I0320 08:35:08.376657 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65157a9b-3df7-4cc1-a85a-a5dfa59921ad-config\") pod \"openshift-kube-scheduler-operator-dddff6458-vmwqt\" (UID: \"65157a9b-3df7-4cc1-a85a-a5dfa59921ad\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-vmwqt" Mar 20 08:35:08.376726 master-0 kubenswrapper[4041]: E0320 08:35:08.376659 4041 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 20 08:35:08.376760 master-0 kubenswrapper[4041]: E0320 08:35:08.376725 4041 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0e79950f-50a5-46ec-b836-7a35dcce2851-package-server-manager-serving-cert podName:0e79950f-50a5-46ec-b836-7a35dcce2851 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:08.876711333 +0000 UTC m=+116.127056968 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/0e79950f-50a5-46ec-b836-7a35dcce2851-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-cgc9q" (UID: "0e79950f-50a5-46ec-b836-7a35dcce2851") : secret "package-server-manager-serving-cert" not found Mar 20 08:35:08.376845 master-0 kubenswrapper[4041]: I0320 08:35:08.376256 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpr8b\" (UniqueName: \"kubernetes.io/projected/3065e4b4-4493-41ce-b9d2-89315475f74f-kube-api-access-wpr8b\") pod \"openshift-config-operator-95bf4f4d-25jrp\" (UID: \"3065e4b4-4493-41ce-b9d2-89315475f74f\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-25jrp" Mar 20 08:35:08.376927 master-0 kubenswrapper[4041]: I0320 08:35:08.376904 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5v7l\" (UniqueName: \"kubernetes.io/projected/20ff930f-ec0d-40ed-a879-1546691f685d-kube-api-access-d5v7l\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-kxxrs\" (UID: \"20ff930f-ec0d-40ed-a879-1546691f685d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-kxxrs" Mar 20 08:35:08.376971 master-0 kubenswrapper[4041]: I0320 08:35:08.376941 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072-config\") pod \"authentication-operator-5885bfd7f4-tdpfq\" (UID: \"8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-tdpfq" Mar 20 08:35:08.377006 master-0 kubenswrapper[4041]: I0320 08:35:08.376972 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/65157a9b-3df7-4cc1-a85a-a5dfa59921ad-kube-api-access\") pod \"openshift-kube-scheduler-operator-dddff6458-vmwqt\" (UID: \"65157a9b-3df7-4cc1-a85a-a5dfa59921ad\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-vmwqt" Mar 20 08:35:08.377006 master-0 kubenswrapper[4041]: I0320 08:35:08.376998 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsj2w\" (UniqueName: \"kubernetes.io/projected/ff2dfe9d-2834-43cb-b093-0831b2b87131-kube-api-access-zsj2w\") pod \"dns-operator-9c5679d8f-xfns6\" (UID: \"ff2dfe9d-2834-43cb-b093-0831b2b87131\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-xfns6" Mar 20 08:35:08.377080 master-0 kubenswrapper[4041]: I0320 08:35:08.377026 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072-serving-cert\") pod \"authentication-operator-5885bfd7f4-tdpfq\" (UID: \"8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-tdpfq" Mar 20 08:35:08.377080 master-0 kubenswrapper[4041]: I0320 08:35:08.377066 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9ce482dc-d0ac-40bc-9058-a1cfdc81575e-srv-cert\") pod \"catalog-operator-68f85b4d6c-hdw98\" (UID: \"9ce482dc-d0ac-40bc-9058-a1cfdc81575e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hdw98" Mar 20 08:35:08.377137 master-0 kubenswrapper[4041]: I0320 08:35:08.377092 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55l9j\" (UniqueName: \"kubernetes.io/projected/7ab32efc-7cc5-4e36-9c1c-05efb19914e2-kube-api-access-55l9j\") pod \"olm-operator-5c9796789-t926t\" (UID: \"7ab32efc-7cc5-4e36-9c1c-05efb19914e2\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-t926t" Mar 20 08:35:08.377137 master-0 kubenswrapper[4041]: I0320 08:35:08.377119 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7gdm\" (UniqueName: \"kubernetes.io/projected/23003a2f-2053-47cc-8133-23eb886d4da0-kube-api-access-q7gdm\") pod \"marketplace-operator-89ccd998f-j84r8\" (UID: \"23003a2f-2053-47cc-8133-23eb886d4da0\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-j84r8" Mar 20 08:35:08.377186 master-0 kubenswrapper[4041]: I0320 08:35:08.377146 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqmzh\" (UniqueName: \"kubernetes.io/projected/fec3170d-3f3e-42f5-b20a-da53721c0dac-kube-api-access-tqmzh\") pod \"etcd-operator-8544cbcf9c-7x9vq\" (UID: \"fec3170d-3f3e-42f5-b20a-da53721c0dac\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-7x9vq" Mar 20 08:35:08.377186 master-0 kubenswrapper[4041]: I0320 08:35:08.377178 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20ff930f-ec0d-40ed-a879-1546691f685d-config\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-kxxrs\" (UID: \"20ff930f-ec0d-40ed-a879-1546691f685d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-kxxrs" Mar 20 08:35:08.377239 master-0 kubenswrapper[4041]: I0320 08:35:08.377205 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/57189f7c-5987-457d-a299-0a6b9bcb3e24-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-cg8qr\" (UID: \"57189f7c-5987-457d-a299-0a6b9bcb3e24\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-cg8qr" Mar 20 08:35:08.377239 master-0 kubenswrapper[4041]: I0320 08:35:08.377230 4041 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dx99f\" (UniqueName: \"kubernetes.io/projected/b097596e-79e1-44d1-be8a-96340042a041-kube-api-access-dx99f\") pod \"iptables-alerter-9xlf2\" (UID: \"b097596e-79e1-44d1-be8a-96340042a041\") " pod="openshift-network-operator/iptables-alerter-9xlf2" Mar 20 08:35:08.377382 master-0 kubenswrapper[4041]: I0320 08:35:08.377327 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9j527\" (UniqueName: \"kubernetes.io/projected/9ce482dc-d0ac-40bc-9058-a1cfdc81575e-kube-api-access-9j527\") pod \"catalog-operator-68f85b4d6c-hdw98\" (UID: \"9ce482dc-d0ac-40bc-9058-a1cfdc81575e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hdw98" Mar 20 08:35:08.377450 master-0 kubenswrapper[4041]: E0320 08:35:08.377429 4041 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 20 08:35:08.377542 master-0 kubenswrapper[4041]: I0320 08:35:08.377524 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072-config\") pod \"authentication-operator-5885bfd7f4-tdpfq\" (UID: \"8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-tdpfq" Mar 20 08:35:08.377757 master-0 kubenswrapper[4041]: E0320 08:35:08.377740 4041 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 20 08:35:08.377790 master-0 kubenswrapper[4041]: E0320 08:35:08.377775 4041 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/57189f7c-5987-457d-a299-0a6b9bcb3e24-image-registry-operator-tls podName:57189f7c-5987-457d-a299-0a6b9bcb3e24 nodeName:}" failed. 
No retries permitted until 2026-03-20 08:35:08.8777667 +0000 UTC m=+116.128112195 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/57189f7c-5987-457d-a299-0a6b9bcb3e24-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-cg8qr" (UID: "57189f7c-5987-457d-a299-0a6b9bcb3e24") : secret "image-registry-operator-tls" not found Mar 20 08:35:08.378019 master-0 kubenswrapper[4041]: E0320 08:35:08.377985 4041 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ce482dc-d0ac-40bc-9058-a1cfdc81575e-srv-cert podName:9ce482dc-d0ac-40bc-9058-a1cfdc81575e nodeName:}" failed. No retries permitted until 2026-03-20 08:35:08.877964035 +0000 UTC m=+116.128309640 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/9ce482dc-d0ac-40bc-9058-a1cfdc81575e-srv-cert") pod "catalog-operator-68f85b4d6c-hdw98" (UID: "9ce482dc-d0ac-40bc-9058-a1cfdc81575e") : secret "catalog-operator-serving-cert" not found Mar 20 08:35:08.378065 master-0 kubenswrapper[4041]: I0320 08:35:08.378042 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71ca96e8-5108-455c-bb3c-17977d38e912-kube-api-access\") pod \"kube-controller-manager-operator-ff989d6cc-wbfrm\" (UID: \"71ca96e8-5108-455c-bb3c-17977d38e912\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-wbfrm" Mar 20 08:35:08.378093 master-0 kubenswrapper[4041]: I0320 08:35:08.378080 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f202273a-b111-46ce-b404-7e481d2c7ff9-cert\") pod \"cluster-baremetal-operator-6f69995874-b25f2\" (UID: \"f202273a-b111-46ce-b404-7e481d2c7ff9\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-b25f2" Mar 20 
08:35:08.378134 master-0 kubenswrapper[4041]: I0320 08:35:08.378108 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fec3170d-3f3e-42f5-b20a-da53721c0dac-config\") pod \"etcd-operator-8544cbcf9c-7x9vq\" (UID: \"fec3170d-3f3e-42f5-b20a-da53721c0dac\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-7x9vq" Mar 20 08:35:08.378332 master-0 kubenswrapper[4041]: I0320 08:35:08.378306 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2faf85a2-29bb-4275-a12b-0ef1663a4f0d-config\") pod \"kube-apiserver-operator-8b68b9d9b-xwkzx\" (UID: \"2faf85a2-29bb-4275-a12b-0ef1663a4f0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-xwkzx" Mar 20 08:35:08.378377 master-0 kubenswrapper[4041]: I0320 08:35:08.378351 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/e9425526-9f51-4302-a19d-a8107f56c582-operand-assets\") pod \"cluster-olm-operator-67dcd4998-gwtrl\" (UID: \"e9425526-9f51-4302-a19d-a8107f56c582\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-gwtrl" Mar 20 08:35:08.378377 master-0 kubenswrapper[4041]: I0320 08:35:08.378371 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ff2dfe9d-2834-43cb-b093-0831b2b87131-metrics-tls\") pod \"dns-operator-9c5679d8f-xfns6\" (UID: \"ff2dfe9d-2834-43cb-b093-0831b2b87131\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-xfns6" Mar 20 08:35:08.378436 master-0 kubenswrapper[4041]: I0320 08:35:08.378399 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzprw\" (UniqueName: \"kubernetes.io/projected/61ab4d32-c732-4be5-aa85-a2e1dd21cb60-kube-api-access-lzprw\") pod 
\"openshift-controller-manager-operator-8c94f4649-w8c24\" (UID: \"61ab4d32-c732-4be5-aa85-a2e1dd21cb60\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-w8c24" Mar 20 08:35:08.378844 master-0 kubenswrapper[4041]: I0320 08:35:08.378712 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/f202273a-b111-46ce-b404-7e481d2c7ff9-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-b25f2\" (UID: \"f202273a-b111-46ce-b404-7e481d2c7ff9\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-b25f2" Mar 20 08:35:08.378844 master-0 kubenswrapper[4041]: I0320 08:35:08.378757 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/6d26f719-43b9-4c1c-9a54-ff800177db68-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-zxgdk\" (UID: \"6d26f719-43b9-4c1c-9a54-ff800177db68\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zxgdk" Mar 20 08:35:08.378844 master-0 kubenswrapper[4041]: I0320 08:35:08.378778 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71ca96e8-5108-455c-bb3c-17977d38e912-config\") pod \"kube-controller-manager-operator-ff989d6cc-wbfrm\" (UID: \"71ca96e8-5108-455c-bb3c-17977d38e912\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-wbfrm" Mar 20 08:35:08.378844 master-0 kubenswrapper[4041]: I0320 08:35:08.378795 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fec3170d-3f3e-42f5-b20a-da53721c0dac-serving-cert\") pod \"etcd-operator-8544cbcf9c-7x9vq\" (UID: \"fec3170d-3f3e-42f5-b20a-da53721c0dac\") " 
pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-7x9vq" Mar 20 08:35:08.378844 master-0 kubenswrapper[4041]: I0320 08:35:08.378813 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdsv9\" (UniqueName: \"kubernetes.io/projected/0e79950f-50a5-46ec-b836-7a35dcce2851-kube-api-access-rdsv9\") pod \"package-server-manager-7b95f86987-cgc9q\" (UID: \"0e79950f-50a5-46ec-b836-7a35dcce2851\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-cgc9q" Mar 20 08:35:08.378844 master-0 kubenswrapper[4041]: I0320 08:35:08.378836 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/e9425526-9f51-4302-a19d-a8107f56c582-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-67dcd4998-gwtrl\" (UID: \"e9425526-9f51-4302-a19d-a8107f56c582\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-gwtrl" Mar 20 08:35:08.379034 master-0 kubenswrapper[4041]: I0320 08:35:08.378857 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/57189f7c-5987-457d-a299-0a6b9bcb3e24-trusted-ca\") pod \"cluster-image-registry-operator-5549dc66cb-cg8qr\" (UID: \"57189f7c-5987-457d-a299-0a6b9bcb3e24\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-cg8qr" Mar 20 08:35:08.379034 master-0 kubenswrapper[4041]: I0320 08:35:08.378875 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/fec3170d-3f3e-42f5-b20a-da53721c0dac-etcd-service-ca\") pod \"etcd-operator-8544cbcf9c-7x9vq\" (UID: \"fec3170d-3f3e-42f5-b20a-da53721c0dac\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-7x9vq" Mar 20 08:35:08.379034 master-0 kubenswrapper[4041]: I0320 08:35:08.378893 4041 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/23003a2f-2053-47cc-8133-23eb886d4da0-marketplace-trusted-ca\") pod \"marketplace-operator-89ccd998f-j84r8\" (UID: \"23003a2f-2053-47cc-8133-23eb886d4da0\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-j84r8" Mar 20 08:35:08.379034 master-0 kubenswrapper[4041]: I0320 08:35:08.378987 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20ff930f-ec0d-40ed-a879-1546691f685d-config\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-kxxrs\" (UID: \"20ff930f-ec0d-40ed-a879-1546691f685d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-kxxrs" Mar 20 08:35:08.379152 master-0 kubenswrapper[4041]: E0320 08:35:08.379085 4041 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found Mar 20 08:35:08.379182 master-0 kubenswrapper[4041]: I0320 08:35:08.379157 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fec3170d-3f3e-42f5-b20a-da53721c0dac-config\") pod \"etcd-operator-8544cbcf9c-7x9vq\" (UID: \"fec3170d-3f3e-42f5-b20a-da53721c0dac\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-7x9vq" Mar 20 08:35:08.379182 master-0 kubenswrapper[4041]: E0320 08:35:08.379172 4041 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f202273a-b111-46ce-b404-7e481d2c7ff9-cluster-baremetal-operator-tls podName:f202273a-b111-46ce-b404-7e481d2c7ff9 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:08.879150785 +0000 UTC m=+116.129496290 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/f202273a-b111-46ce-b404-7e481d2c7ff9-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-6f69995874-b25f2" (UID: "f202273a-b111-46ce-b404-7e481d2c7ff9") : secret "cluster-baremetal-operator-tls" not found Mar 20 08:35:08.379305 master-0 kubenswrapper[4041]: I0320 08:35:08.379280 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/23003a2f-2053-47cc-8133-23eb886d4da0-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-j84r8\" (UID: \"23003a2f-2053-47cc-8133-23eb886d4da0\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-j84r8" Mar 20 08:35:08.379351 master-0 kubenswrapper[4041]: I0320 08:35:08.379325 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20ff930f-ec0d-40ed-a879-1546691f685d-serving-cert\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-kxxrs\" (UID: \"20ff930f-ec0d-40ed-a879-1546691f685d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-kxxrs" Mar 20 08:35:08.379351 master-0 kubenswrapper[4041]: E0320 08:35:08.379337 4041 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 20 08:35:08.379412 master-0 kubenswrapper[4041]: I0320 08:35:08.379353 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072-service-ca-bundle\") pod \"authentication-operator-5885bfd7f4-tdpfq\" (UID: \"8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-tdpfq" Mar 20 08:35:08.379412 master-0 kubenswrapper[4041]: E0320 
08:35:08.379378 4041 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23003a2f-2053-47cc-8133-23eb886d4da0-marketplace-operator-metrics podName:23003a2f-2053-47cc-8133-23eb886d4da0 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:08.879364071 +0000 UTC m=+116.129709576 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/23003a2f-2053-47cc-8133-23eb886d4da0-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-j84r8" (UID: "23003a2f-2053-47cc-8133-23eb886d4da0") : secret "marketplace-operator-metrics" not found Mar 20 08:35:08.379412 master-0 kubenswrapper[4041]: I0320 08:35:08.379398 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6d26f719-43b9-4c1c-9a54-ff800177db68-trusted-ca\") pod \"cluster-node-tuning-operator-598fbc5f8f-zxgdk\" (UID: \"6d26f719-43b9-4c1c-9a54-ff800177db68\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zxgdk" Mar 20 08:35:08.379497 master-0 kubenswrapper[4041]: I0320 08:35:08.379426 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61ab4d32-c732-4be5-aa85-a2e1dd21cb60-config\") pod \"openshift-controller-manager-operator-8c94f4649-w8c24\" (UID: \"61ab4d32-c732-4be5-aa85-a2e1dd21cb60\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-w8c24" Mar 20 08:35:08.379497 master-0 kubenswrapper[4041]: I0320 08:35:08.379452 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/74bebf0b-6727-4959-8239-a9389e630524-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-vhrdf\" (UID: \"74bebf0b-6727-4959-8239-a9389e630524\") " 
pod="openshift-multus/multus-admission-controller-5dbbb8b86f-vhrdf" Mar 20 08:35:08.379497 master-0 kubenswrapper[4041]: I0320 08:35:08.379474 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f92mb\" (UniqueName: \"kubernetes.io/projected/74bebf0b-6727-4959-8239-a9389e630524-kube-api-access-f92mb\") pod \"multus-admission-controller-5dbbb8b86f-vhrdf\" (UID: \"74bebf0b-6727-4959-8239-a9389e630524\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-vhrdf" Mar 20 08:35:08.379497 master-0 kubenswrapper[4041]: I0320 08:35:08.379492 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61ab4d32-c732-4be5-aa85-a2e1dd21cb60-serving-cert\") pod \"openshift-controller-manager-operator-8c94f4649-w8c24\" (UID: \"61ab4d32-c732-4be5-aa85-a2e1dd21cb60\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-w8c24" Mar 20 08:35:08.379672 master-0 kubenswrapper[4041]: I0320 08:35:08.379510 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/fec3170d-3f3e-42f5-b20a-da53721c0dac-etcd-ca\") pod \"etcd-operator-8544cbcf9c-7x9vq\" (UID: \"fec3170d-3f3e-42f5-b20a-da53721c0dac\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-7x9vq" Mar 20 08:35:08.379672 master-0 kubenswrapper[4041]: I0320 08:35:08.379564 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65157a9b-3df7-4cc1-a85a-a5dfa59921ad-serving-cert\") pod \"openshift-kube-scheduler-operator-dddff6458-vmwqt\" (UID: \"65157a9b-3df7-4cc1-a85a-a5dfa59921ad\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-vmwqt" Mar 20 08:35:08.379672 master-0 kubenswrapper[4041]: I0320 08:35:08.379582 4041 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/3065e4b4-4493-41ce-b9d2-89315475f74f-available-featuregates\") pod \"openshift-config-operator-95bf4f4d-25jrp\" (UID: \"3065e4b4-4493-41ce-b9d2-89315475f74f\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-25jrp" Mar 20 08:35:08.379672 master-0 kubenswrapper[4041]: I0320 08:35:08.379640 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2faf85a2-29bb-4275-a12b-0ef1663a4f0d-serving-cert\") pod \"kube-apiserver-operator-8b68b9d9b-xwkzx\" (UID: \"2faf85a2-29bb-4275-a12b-0ef1663a4f0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-xwkzx" Mar 20 08:35:08.379672 master-0 kubenswrapper[4041]: I0320 08:35:08.379659 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/fec3170d-3f3e-42f5-b20a-da53721c0dac-etcd-service-ca\") pod \"etcd-operator-8544cbcf9c-7x9vq\" (UID: \"fec3170d-3f3e-42f5-b20a-da53721c0dac\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-7x9vq" Mar 20 08:35:08.379914 master-0 kubenswrapper[4041]: I0320 08:35:08.379893 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/e9425526-9f51-4302-a19d-a8107f56c582-operand-assets\") pod \"cluster-olm-operator-67dcd4998-gwtrl\" (UID: \"e9425526-9f51-4302-a19d-a8107f56c582\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-gwtrl" Mar 20 08:35:08.380236 master-0 kubenswrapper[4041]: I0320 08:35:08.380198 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072-service-ca-bundle\") pod \"authentication-operator-5885bfd7f4-tdpfq\" (UID: \"8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072\") " 
pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-tdpfq" Mar 20 08:35:08.380581 master-0 kubenswrapper[4041]: I0320 08:35:08.380558 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072-serving-cert\") pod \"authentication-operator-5885bfd7f4-tdpfq\" (UID: \"8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-tdpfq" Mar 20 08:35:08.380676 master-0 kubenswrapper[4041]: I0320 08:35:08.380646 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2faf85a2-29bb-4275-a12b-0ef1663a4f0d-config\") pod \"kube-apiserver-operator-8b68b9d9b-xwkzx\" (UID: \"2faf85a2-29bb-4275-a12b-0ef1663a4f0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-xwkzx" Mar 20 08:35:08.381113 master-0 kubenswrapper[4041]: I0320 08:35:08.381059 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6d26f719-43b9-4c1c-9a54-ff800177db68-trusted-ca\") pod \"cluster-node-tuning-operator-598fbc5f8f-zxgdk\" (UID: \"6d26f719-43b9-4c1c-9a54-ff800177db68\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zxgdk" Mar 20 08:35:08.381391 master-0 kubenswrapper[4041]: E0320 08:35:08.381361 4041 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 20 08:35:08.381459 master-0 kubenswrapper[4041]: E0320 08:35:08.381422 4041 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff2dfe9d-2834-43cb-b093-0831b2b87131-metrics-tls podName:ff2dfe9d-2834-43cb-b093-0831b2b87131 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:08.881403724 +0000 UTC m=+116.131749309 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/ff2dfe9d-2834-43cb-b093-0831b2b87131-metrics-tls") pod "dns-operator-9c5679d8f-xfns6" (UID: "ff2dfe9d-2834-43cb-b093-0831b2b87131") : secret "metrics-tls" not found Mar 20 08:35:08.381513 master-0 kubenswrapper[4041]: E0320 08:35:08.381468 4041 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 20 08:35:08.381513 master-0 kubenswrapper[4041]: E0320 08:35:08.381497 4041 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d26f719-43b9-4c1c-9a54-ff800177db68-node-tuning-operator-tls podName:6d26f719-43b9-4c1c-9a54-ff800177db68 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:08.881488436 +0000 UTC m=+116.131834031 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/6d26f719-43b9-4c1c-9a54-ff800177db68-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-zxgdk" (UID: "6d26f719-43b9-4c1c-9a54-ff800177db68") : secret "node-tuning-operator-tls" not found Mar 20 08:35:08.381601 master-0 kubenswrapper[4041]: I0320 08:35:08.381537 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/3065e4b4-4493-41ce-b9d2-89315475f74f-available-featuregates\") pod \"openshift-config-operator-95bf4f4d-25jrp\" (UID: \"3065e4b4-4493-41ce-b9d2-89315475f74f\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-25jrp" Mar 20 08:35:08.381843 master-0 kubenswrapper[4041]: I0320 08:35:08.381818 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3065e4b4-4493-41ce-b9d2-89315475f74f-serving-cert\") pod \"openshift-config-operator-95bf4f4d-25jrp\" (UID: \"3065e4b4-4493-41ce-b9d2-89315475f74f\") " 
pod="openshift-config-operator/openshift-config-operator-95bf4f4d-25jrp" Mar 20 08:35:08.381912 master-0 kubenswrapper[4041]: E0320 08:35:08.381888 4041 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 20 08:35:08.381949 master-0 kubenswrapper[4041]: E0320 08:35:08.381920 4041 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74bebf0b-6727-4959-8239-a9389e630524-webhook-certs podName:74bebf0b-6727-4959-8239-a9389e630524 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:08.881909986 +0000 UTC m=+116.132255491 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/74bebf0b-6727-4959-8239-a9389e630524-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-vhrdf" (UID: "74bebf0b-6727-4959-8239-a9389e630524") : secret "multus-admission-controller-secret" not found Mar 20 08:35:08.382221 master-0 kubenswrapper[4041]: I0320 08:35:08.382192 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71ca96e8-5108-455c-bb3c-17977d38e912-config\") pod \"kube-controller-manager-operator-ff989d6cc-wbfrm\" (UID: \"71ca96e8-5108-455c-bb3c-17977d38e912\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-wbfrm" Mar 20 08:35:08.382302 master-0 kubenswrapper[4041]: E0320 08:35:08.382282 4041 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found Mar 20 08:35:08.382339 master-0 kubenswrapper[4041]: E0320 08:35:08.382320 4041 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f202273a-b111-46ce-b404-7e481d2c7ff9-cert podName:f202273a-b111-46ce-b404-7e481d2c7ff9 nodeName:}" failed. 
No retries permitted until 2026-03-20 08:35:08.882310068 +0000 UTC m=+116.132655663 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f202273a-b111-46ce-b404-7e481d2c7ff9-cert") pod "cluster-baremetal-operator-6f69995874-b25f2" (UID: "f202273a-b111-46ce-b404-7e481d2c7ff9") : secret "cluster-baremetal-webhook-server-cert" not found Mar 20 08:35:08.382505 master-0 kubenswrapper[4041]: I0320 08:35:08.382483 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61ab4d32-c732-4be5-aa85-a2e1dd21cb60-config\") pod \"openshift-controller-manager-operator-8c94f4649-w8c24\" (UID: \"61ab4d32-c732-4be5-aa85-a2e1dd21cb60\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-w8c24" Mar 20 08:35:08.382737 master-0 kubenswrapper[4041]: I0320 08:35:08.382709 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/fec3170d-3f3e-42f5-b20a-da53721c0dac-etcd-ca\") pod \"etcd-operator-8544cbcf9c-7x9vq\" (UID: \"fec3170d-3f3e-42f5-b20a-da53721c0dac\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-7x9vq" Mar 20 08:35:08.382919 master-0 kubenswrapper[4041]: I0320 08:35:08.382883 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ef691ec-d1f0-4c97-97e4-4aa7a6c0a86c-serving-cert\") pod \"openshift-apiserver-operator-d65958b8-ntdqc\" (UID: \"2ef691ec-d1f0-4c97-97e4-4aa7a6c0a86c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-ntdqc" Mar 20 08:35:08.383560 master-0 kubenswrapper[4041]: I0320 08:35:08.383533 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/57189f7c-5987-457d-a299-0a6b9bcb3e24-trusted-ca\") pod \"cluster-image-registry-operator-5549dc66cb-cg8qr\" (UID: 
\"57189f7c-5987-457d-a299-0a6b9bcb3e24\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-cg8qr" Mar 20 08:35:08.383720 master-0 kubenswrapper[4041]: I0320 08:35:08.383700 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65157a9b-3df7-4cc1-a85a-a5dfa59921ad-serving-cert\") pod \"openshift-kube-scheduler-operator-dddff6458-vmwqt\" (UID: \"65157a9b-3df7-4cc1-a85a-a5dfa59921ad\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-vmwqt" Mar 20 08:35:08.386799 master-0 kubenswrapper[4041]: I0320 08:35:08.386767 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/23003a2f-2053-47cc-8133-23eb886d4da0-marketplace-trusted-ca\") pod \"marketplace-operator-89ccd998f-j84r8\" (UID: \"23003a2f-2053-47cc-8133-23eb886d4da0\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-j84r8" Mar 20 08:35:08.387138 master-0 kubenswrapper[4041]: I0320 08:35:08.387115 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20ff930f-ec0d-40ed-a879-1546691f685d-serving-cert\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-kxxrs\" (UID: \"20ff930f-ec0d-40ed-a879-1546691f685d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-kxxrs" Mar 20 08:35:08.387138 master-0 kubenswrapper[4041]: I0320 08:35:08.387130 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fec3170d-3f3e-42f5-b20a-da53721c0dac-serving-cert\") pod \"etcd-operator-8544cbcf9c-7x9vq\" (UID: \"fec3170d-3f3e-42f5-b20a-da53721c0dac\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-7x9vq" Mar 20 08:35:08.387207 master-0 kubenswrapper[4041]: I0320 08:35:08.387133 4041 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61ab4d32-c732-4be5-aa85-a2e1dd21cb60-serving-cert\") pod \"openshift-controller-manager-operator-8c94f4649-w8c24\" (UID: \"61ab4d32-c732-4be5-aa85-a2e1dd21cb60\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-w8c24" Mar 20 08:35:08.387207 master-0 kubenswrapper[4041]: I0320 08:35:08.387133 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71ca96e8-5108-455c-bb3c-17977d38e912-serving-cert\") pod \"kube-controller-manager-operator-ff989d6cc-wbfrm\" (UID: \"71ca96e8-5108-455c-bb3c-17977d38e912\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-wbfrm" Mar 20 08:35:08.387207 master-0 kubenswrapper[4041]: I0320 08:35:08.387172 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2faf85a2-29bb-4275-a12b-0ef1663a4f0d-serving-cert\") pod \"kube-apiserver-operator-8b68b9d9b-xwkzx\" (UID: \"2faf85a2-29bb-4275-a12b-0ef1663a4f0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-xwkzx" Mar 20 08:35:08.387354 master-0 kubenswrapper[4041]: I0320 08:35:08.387251 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/e9425526-9f51-4302-a19d-a8107f56c582-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-67dcd4998-gwtrl\" (UID: \"e9425526-9f51-4302-a19d-a8107f56c582\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-gwtrl" Mar 20 08:35:08.387507 master-0 kubenswrapper[4041]: I0320 08:35:08.387482 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fec3170d-3f3e-42f5-b20a-da53721c0dac-etcd-client\") pod 
\"etcd-operator-8544cbcf9c-7x9vq\" (UID: \"fec3170d-3f3e-42f5-b20a-da53721c0dac\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-7x9vq" Mar 20 08:35:08.392149 master-0 kubenswrapper[4041]: I0320 08:35:08.392113 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2faf85a2-29bb-4275-a12b-0ef1663a4f0d-kube-api-access\") pod \"kube-apiserver-operator-8b68b9d9b-xwkzx\" (UID: \"2faf85a2-29bb-4275-a12b-0ef1663a4f0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-xwkzx" Mar 20 08:35:08.392667 master-0 kubenswrapper[4041]: I0320 08:35:08.392627 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgl8m\" (UniqueName: \"kubernetes.io/projected/2ef691ec-d1f0-4c97-97e4-4aa7a6c0a86c-kube-api-access-rgl8m\") pod \"openshift-apiserver-operator-d65958b8-ntdqc\" (UID: \"2ef691ec-d1f0-4c97-97e4-4aa7a6c0a86c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-ntdqc" Mar 20 08:35:08.435818 master-0 kubenswrapper[4041]: I0320 08:35:08.435742 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5r8zt\" (UniqueName: \"kubernetes.io/projected/57189f7c-5987-457d-a299-0a6b9bcb3e24-kube-api-access-5r8zt\") pod \"cluster-image-registry-operator-5549dc66cb-cg8qr\" (UID: \"57189f7c-5987-457d-a299-0a6b9bcb3e24\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-cg8qr" Mar 20 08:35:08.446956 master-0 kubenswrapper[4041]: I0320 08:35:08.446911 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-xwkzx" Mar 20 08:35:08.497101 master-0 kubenswrapper[4041]: I0320 08:35:08.496994 4041 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-b865698dc-zxwrd" Mar 20 08:35:08.535455 master-0 kubenswrapper[4041]: I0320 08:35:08.534712 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nfrth" Mar 20 08:35:08.576440 master-0 kubenswrapper[4041]: I0320 08:35:08.576012 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-ntdqc" Mar 20 08:35:08.598700 master-0 kubenswrapper[4041]: I0320 08:35:08.598648 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpr8b\" (UniqueName: \"kubernetes.io/projected/3065e4b4-4493-41ce-b9d2-89315475f74f-kube-api-access-wpr8b\") pod \"openshift-config-operator-95bf4f4d-25jrp\" (UID: \"3065e4b4-4493-41ce-b9d2-89315475f74f\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-25jrp" Mar 20 08:35:08.604876 master-0 kubenswrapper[4041]: I0320 08:35:08.604813 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcgqr\" (UniqueName: \"kubernetes.io/projected/acbaba45-12d9-40b9-818c-4b091d7929b1-kube-api-access-kcgqr\") pod \"csi-snapshot-controller-operator-5f5d689c6b-b5lg6\" (UID: \"acbaba45-12d9-40b9-818c-4b091d7929b1\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-b5lg6" Mar 20 08:35:08.605007 master-0 kubenswrapper[4041]: I0320 08:35:08.604887 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnk9k\" (UniqueName: \"kubernetes.io/projected/8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072-kube-api-access-hnk9k\") pod \"authentication-operator-5885bfd7f4-tdpfq\" (UID: \"8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-tdpfq" Mar 20 08:35:08.605007 master-0 kubenswrapper[4041]: I0320 08:35:08.604995 4041 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v86j8\" (UniqueName: \"kubernetes.io/projected/6d26f719-43b9-4c1c-9a54-ff800177db68-kube-api-access-v86j8\") pod \"cluster-node-tuning-operator-598fbc5f8f-zxgdk\" (UID: \"6d26f719-43b9-4c1c-9a54-ff800177db68\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zxgdk" Mar 20 08:35:08.606995 master-0 kubenswrapper[4041]: I0320 08:35:08.606948 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/65157a9b-3df7-4cc1-a85a-a5dfa59921ad-kube-api-access\") pod \"openshift-kube-scheduler-operator-dddff6458-vmwqt\" (UID: \"65157a9b-3df7-4cc1-a85a-a5dfa59921ad\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-vmwqt" Mar 20 08:35:08.608967 master-0 kubenswrapper[4041]: I0320 08:35:08.608905 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56bt6\" (UniqueName: \"kubernetes.io/projected/f202273a-b111-46ce-b404-7e481d2c7ff9-kube-api-access-56bt6\") pod \"cluster-baremetal-operator-6f69995874-b25f2\" (UID: \"f202273a-b111-46ce-b404-7e481d2c7ff9\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-b25f2" Mar 20 08:35:08.611906 master-0 kubenswrapper[4041]: I0320 08:35:08.611852 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/57189f7c-5987-457d-a299-0a6b9bcb3e24-bound-sa-token\") pod \"cluster-image-registry-operator-5549dc66cb-cg8qr\" (UID: \"57189f7c-5987-457d-a299-0a6b9bcb3e24\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-cg8qr" Mar 20 08:35:08.621931 master-0 kubenswrapper[4041]: I0320 08:35:08.621878 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5kbh\" (UniqueName: 
\"kubernetes.io/projected/e9425526-9f51-4302-a19d-a8107f56c582-kube-api-access-z5kbh\") pod \"cluster-olm-operator-67dcd4998-gwtrl\" (UID: \"e9425526-9f51-4302-a19d-a8107f56c582\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-gwtrl" Mar 20 08:35:08.624334 master-0 kubenswrapper[4041]: I0320 08:35:08.624240 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsj2w\" (UniqueName: \"kubernetes.io/projected/ff2dfe9d-2834-43cb-b093-0831b2b87131-kube-api-access-zsj2w\") pod \"dns-operator-9c5679d8f-xfns6\" (UID: \"ff2dfe9d-2834-43cb-b093-0831b2b87131\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-xfns6" Mar 20 08:35:08.643639 master-0 kubenswrapper[4041]: I0320 08:35:08.642924 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-gwtrl" Mar 20 08:35:08.645447 master-0 kubenswrapper[4041]: I0320 08:35:08.645228 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dx99f\" (UniqueName: \"kubernetes.io/projected/b097596e-79e1-44d1-be8a-96340042a041-kube-api-access-dx99f\") pod \"iptables-alerter-9xlf2\" (UID: \"b097596e-79e1-44d1-be8a-96340042a041\") " pod="openshift-network-operator/iptables-alerter-9xlf2" Mar 20 08:35:08.662933 master-0 kubenswrapper[4041]: I0320 08:35:08.662860 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9j527\" (UniqueName: \"kubernetes.io/projected/9ce482dc-d0ac-40bc-9058-a1cfdc81575e-kube-api-access-9j527\") pod \"catalog-operator-68f85b4d6c-hdw98\" (UID: \"9ce482dc-d0ac-40bc-9058-a1cfdc81575e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hdw98" Mar 20 08:35:08.668352 master-0 kubenswrapper[4041]: I0320 08:35:08.666258 4041 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-tdpfq" Mar 20 08:35:08.675445 master-0 kubenswrapper[4041]: I0320 08:35:08.675392 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqmzh\" (UniqueName: \"kubernetes.io/projected/fec3170d-3f3e-42f5-b20a-da53721c0dac-kube-api-access-tqmzh\") pod \"etcd-operator-8544cbcf9c-7x9vq\" (UID: \"fec3170d-3f3e-42f5-b20a-da53721c0dac\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-7x9vq" Mar 20 08:35:08.695338 master-0 kubenswrapper[4041]: I0320 08:35:08.693933 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5v7l\" (UniqueName: \"kubernetes.io/projected/20ff930f-ec0d-40ed-a879-1546691f685d-kube-api-access-d5v7l\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-kxxrs\" (UID: \"20ff930f-ec0d-40ed-a879-1546691f685d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-kxxrs" Mar 20 08:35:08.713763 master-0 kubenswrapper[4041]: I0320 08:35:08.711945 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55l9j\" (UniqueName: \"kubernetes.io/projected/7ab32efc-7cc5-4e36-9c1c-05efb19914e2-kube-api-access-55l9j\") pod \"olm-operator-5c9796789-t926t\" (UID: \"7ab32efc-7cc5-4e36-9c1c-05efb19914e2\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-t926t" Mar 20 08:35:08.720521 master-0 kubenswrapper[4041]: I0320 08:35:08.717547 4041 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-xwkzx"] Mar 20 08:35:08.720521 master-0 kubenswrapper[4041]: I0320 08:35:08.719306 4041 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-b865698dc-zxwrd"] Mar 20 08:35:08.739291 master-0 kubenswrapper[4041]: I0320 08:35:08.732572 4041 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-q7gdm\" (UniqueName: \"kubernetes.io/projected/23003a2f-2053-47cc-8133-23eb886d4da0-kube-api-access-q7gdm\") pod \"marketplace-operator-89ccd998f-j84r8\" (UID: \"23003a2f-2053-47cc-8133-23eb886d4da0\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-j84r8"
Mar 20 08:35:08.739291 master-0 kubenswrapper[4041]: I0320 08:35:08.732735 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-vmwqt"
Mar 20 08:35:08.745615 master-0 kubenswrapper[4041]: W0320 08:35:08.742541 4041 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod09a5682c_4f13_4b8c_8179_3e6dfa8f98db.slice/crio-7551d0384a0ca5d55a0e01a66e0811b519b2e2c926c179ce2206a11d57d556c3 WatchSource:0}: Error finding container 7551d0384a0ca5d55a0e01a66e0811b519b2e2c926c179ce2206a11d57d556c3: Status 404 returned error can't find the container with id 7551d0384a0ca5d55a0e01a66e0811b519b2e2c926c179ce2206a11d57d556c3
Mar 20 08:35:08.753569 master-0 kubenswrapper[4041]: W0320 08:35:08.751604 4041 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2faf85a2_29bb_4275_a12b_0ef1663a4f0d.slice/crio-b3076d6176cd94c8a21c722732d97de0437f9e83160ea4c57d3d59e61e4a74e3 WatchSource:0}: Error finding container b3076d6176cd94c8a21c722732d97de0437f9e83160ea4c57d3d59e61e4a74e3: Status 404 returned error can't find the container with id b3076d6176cd94c8a21c722732d97de0437f9e83160ea4c57d3d59e61e4a74e3
Mar 20 08:35:08.753569 master-0 kubenswrapper[4041]: I0320 08:35:08.752445 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-kxxrs"
Mar 20 08:35:08.755904 master-0 kubenswrapper[4041]: I0320 08:35:08.755435 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdsv9\" (UniqueName: \"kubernetes.io/projected/0e79950f-50a5-46ec-b836-7a35dcce2851-kube-api-access-rdsv9\") pod \"package-server-manager-7b95f86987-cgc9q\" (UID: \"0e79950f-50a5-46ec-b836-7a35dcce2851\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-cgc9q"
Mar 20 08:35:08.767246 master-0 kubenswrapper[4041]: I0320 08:35:08.767207 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-9xlf2"
Mar 20 08:35:08.784058 master-0 kubenswrapper[4041]: I0320 08:35:08.784017 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzprw\" (UniqueName: \"kubernetes.io/projected/61ab4d32-c732-4be5-aa85-a2e1dd21cb60-kube-api-access-lzprw\") pod \"openshift-controller-manager-operator-8c94f4649-w8c24\" (UID: \"61ab4d32-c732-4be5-aa85-a2e1dd21cb60\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-w8c24"
Mar 20 08:35:08.787734 master-0 kubenswrapper[4041]: I0320 08:35:08.787712 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/22f85e98-eb36-46b2-ab5d-7c21e060cba5-metrics-tls\") pod \"ingress-operator-66b84d69b-dknxr\" (UID: \"22f85e98-eb36-46b2-ab5d-7c21e060cba5\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-dknxr"
Mar 20 08:35:08.788001 master-0 kubenswrapper[4041]: I0320 08:35:08.787872 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/5707066a-bd66-41bc-8cea-cff1630ab5ee-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-6vgt6\" (UID: \"5707066a-bd66-41bc-8cea-cff1630ab5ee\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-6vgt6"
Mar 20 08:35:08.788001 master-0 kubenswrapper[4041]: E0320 08:35:08.787880 4041 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Mar 20 08:35:08.788001 master-0 kubenswrapper[4041]: E0320 08:35:08.787965 4041 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/22f85e98-eb36-46b2-ab5d-7c21e060cba5-metrics-tls podName:22f85e98-eb36-46b2-ab5d-7c21e060cba5 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:09.787942336 +0000 UTC m=+117.038287961 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/22f85e98-eb36-46b2-ab5d-7c21e060cba5-metrics-tls") pod "ingress-operator-66b84d69b-dknxr" (UID: "22f85e98-eb36-46b2-ab5d-7c21e060cba5") : secret "metrics-tls" not found
Mar 20 08:35:08.788123 master-0 kubenswrapper[4041]: E0320 08:35:08.788045 4041 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Mar 20 08:35:08.788123 master-0 kubenswrapper[4041]: E0320 08:35:08.788113 4041 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5707066a-bd66-41bc-8cea-cff1630ab5ee-cluster-monitoring-operator-tls podName:5707066a-bd66-41bc-8cea-cff1630ab5ee nodeName:}" failed. No retries permitted until 2026-03-20 08:35:09.7880942 +0000 UTC m=+117.038439705 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/5707066a-bd66-41bc-8cea-cff1630ab5ee-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-6vgt6" (UID: "5707066a-bd66-41bc-8cea-cff1630ab5ee") : secret "cluster-monitoring-operator-tls" not found
Mar 20 08:35:08.788199 master-0 kubenswrapper[4041]: W0320 08:35:08.788164 4041 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb097596e_79e1_44d1_be8a_96340042a041.slice/crio-7c5fe5a51a0646232d6aeb7457e06eaa7bb1c6097a67919150bb37fc9d450327 WatchSource:0}: Error finding container 7c5fe5a51a0646232d6aeb7457e06eaa7bb1c6097a67919150bb37fc9d450327: Status 404 returned error can't find the container with id 7c5fe5a51a0646232d6aeb7457e06eaa7bb1c6097a67919150bb37fc9d450327
Mar 20 08:35:08.803079 master-0 kubenswrapper[4041]: I0320 08:35:08.803038 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71ca96e8-5108-455c-bb3c-17977d38e912-kube-api-access\") pod \"kube-controller-manager-operator-ff989d6cc-wbfrm\" (UID: \"71ca96e8-5108-455c-bb3c-17977d38e912\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-wbfrm"
Mar 20 08:35:08.816908 master-0 kubenswrapper[4041]: I0320 08:35:08.814395 4041 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f92mb\" (UniqueName: \"kubernetes.io/projected/74bebf0b-6727-4959-8239-a9389e630524-kube-api-access-f92mb\") pod \"multus-admission-controller-5dbbb8b86f-vhrdf\" (UID: \"74bebf0b-6727-4959-8239-a9389e630524\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-vhrdf"
Mar 20 08:35:08.823569 master-0 kubenswrapper[4041]: I0320 08:35:08.823530 4041 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-ntdqc"]
Mar 20 08:35:08.830818 master-0 kubenswrapper[4041]: W0320 08:35:08.830720 4041 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ef691ec_d1f0_4c97_97e4_4aa7a6c0a86c.slice/crio-1522904bcce5d0ac5aef96a7d518d8795f633c0cc736ad3114aa64de5474f52b WatchSource:0}: Error finding container 1522904bcce5d0ac5aef96a7d518d8795f633c0cc736ad3114aa64de5474f52b: Status 404 returned error can't find the container with id 1522904bcce5d0ac5aef96a7d518d8795f633c0cc736ad3114aa64de5474f52b
Mar 20 08:35:08.837044 master-0 kubenswrapper[4041]: I0320 08:35:08.837007 4041 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Mar 20 08:35:08.850923 master-0 kubenswrapper[4041]: I0320 08:35:08.850866 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-25jrp"
Mar 20 08:35:08.863541 master-0 kubenswrapper[4041]: I0320 08:35:08.863494 4041 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-gwtrl"]
Mar 20 08:35:08.879322 master-0 kubenswrapper[4041]: I0320 08:35:08.877463 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-b5lg6"
Mar 20 08:35:08.886559 master-0 kubenswrapper[4041]: I0320 08:35:08.885089 4041 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-5885bfd7f4-tdpfq"]
Mar 20 08:35:08.889971 master-0 kubenswrapper[4041]: I0320 08:35:08.888379 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6d26f719-43b9-4c1c-9a54-ff800177db68-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-zxgdk\" (UID: \"6d26f719-43b9-4c1c-9a54-ff800177db68\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zxgdk"
Mar 20 08:35:08.889971 master-0 kubenswrapper[4041]: I0320 08:35:08.888413 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/0e79950f-50a5-46ec-b836-7a35dcce2851-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-cgc9q\" (UID: \"0e79950f-50a5-46ec-b836-7a35dcce2851\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-cgc9q"
Mar 20 08:35:08.889971 master-0 kubenswrapper[4041]: I0320 08:35:08.888436 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7ab32efc-7cc5-4e36-9c1c-05efb19914e2-srv-cert\") pod \"olm-operator-5c9796789-t926t\" (UID: \"7ab32efc-7cc5-4e36-9c1c-05efb19914e2\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-t926t"
Mar 20 08:35:08.889971 master-0 kubenswrapper[4041]: E0320 08:35:08.888526 4041 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Mar 20 08:35:08.889971 master-0 kubenswrapper[4041]: E0320 08:35:08.888535 4041 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Mar 20 08:35:08.889971 master-0 kubenswrapper[4041]: E0320 08:35:08.888568 4041 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0e79950f-50a5-46ec-b836-7a35dcce2851-package-server-manager-serving-cert podName:0e79950f-50a5-46ec-b836-7a35dcce2851 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:09.888555501 +0000 UTC m=+117.138901006 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/0e79950f-50a5-46ec-b836-7a35dcce2851-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-cgc9q" (UID: "0e79950f-50a5-46ec-b836-7a35dcce2851") : secret "package-server-manager-serving-cert" not found
Mar 20 08:35:08.889971 master-0 kubenswrapper[4041]: E0320 08:35:08.888580 4041 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d26f719-43b9-4c1c-9a54-ff800177db68-apiservice-cert podName:6d26f719-43b9-4c1c-9a54-ff800177db68 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:09.888575561 +0000 UTC m=+117.138921066 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/6d26f719-43b9-4c1c-9a54-ff800177db68-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-zxgdk" (UID: "6d26f719-43b9-4c1c-9a54-ff800177db68") : secret "performance-addon-operator-webhook-cert" not found
Mar 20 08:35:08.889971 master-0 kubenswrapper[4041]: I0320 08:35:08.888598 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9ce482dc-d0ac-40bc-9058-a1cfdc81575e-srv-cert\") pod \"catalog-operator-68f85b4d6c-hdw98\" (UID: \"9ce482dc-d0ac-40bc-9058-a1cfdc81575e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hdw98"
Mar 20 08:35:08.889971 master-0 kubenswrapper[4041]: E0320 08:35:08.888604 4041 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Mar 20 08:35:08.889971 master-0 kubenswrapper[4041]: I0320 08:35:08.888619 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/57189f7c-5987-457d-a299-0a6b9bcb3e24-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-cg8qr\" (UID: \"57189f7c-5987-457d-a299-0a6b9bcb3e24\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-cg8qr"
Mar 20 08:35:08.889971 master-0 kubenswrapper[4041]: E0320 08:35:08.888634 4041 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7ab32efc-7cc5-4e36-9c1c-05efb19914e2-srv-cert podName:7ab32efc-7cc5-4e36-9c1c-05efb19914e2 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:09.888621372 +0000 UTC m=+117.138966877 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/7ab32efc-7cc5-4e36-9c1c-05efb19914e2-srv-cert") pod "olm-operator-5c9796789-t926t" (UID: "7ab32efc-7cc5-4e36-9c1c-05efb19914e2") : secret "olm-operator-serving-cert" not found
Mar 20 08:35:08.889971 master-0 kubenswrapper[4041]: I0320 08:35:08.888654 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f202273a-b111-46ce-b404-7e481d2c7ff9-cert\") pod \"cluster-baremetal-operator-6f69995874-b25f2\" (UID: \"f202273a-b111-46ce-b404-7e481d2c7ff9\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-b25f2"
Mar 20 08:35:08.889971 master-0 kubenswrapper[4041]: E0320 08:35:08.888662 4041 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Mar 20 08:35:08.889971 master-0 kubenswrapper[4041]: E0320 08:35:08.888680 4041 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/57189f7c-5987-457d-a299-0a6b9bcb3e24-image-registry-operator-tls podName:57189f7c-5987-457d-a299-0a6b9bcb3e24 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:09.888675014 +0000 UTC m=+117.139020519 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/57189f7c-5987-457d-a299-0a6b9bcb3e24-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-cg8qr" (UID: "57189f7c-5987-457d-a299-0a6b9bcb3e24") : secret "image-registry-operator-tls" not found
Mar 20 08:35:08.889971 master-0 kubenswrapper[4041]: E0320 08:35:08.888793 4041 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found
Mar 20 08:35:08.895086 master-0 kubenswrapper[4041]: E0320 08:35:08.888872 4041 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ce482dc-d0ac-40bc-9058-a1cfdc81575e-srv-cert podName:9ce482dc-d0ac-40bc-9058-a1cfdc81575e nodeName:}" failed. No retries permitted until 2026-03-20 08:35:09.888849568 +0000 UTC m=+117.139195123 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/9ce482dc-d0ac-40bc-9058-a1cfdc81575e-srv-cert") pod "catalog-operator-68f85b4d6c-hdw98" (UID: "9ce482dc-d0ac-40bc-9058-a1cfdc81575e") : secret "catalog-operator-serving-cert" not found
Mar 20 08:35:08.895086 master-0 kubenswrapper[4041]: I0320 08:35:08.888915 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ff2dfe9d-2834-43cb-b093-0831b2b87131-metrics-tls\") pod \"dns-operator-9c5679d8f-xfns6\" (UID: \"ff2dfe9d-2834-43cb-b093-0831b2b87131\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-xfns6"
Mar 20 08:35:08.895086 master-0 kubenswrapper[4041]: E0320 08:35:08.888985 4041 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found
Mar 20 08:35:08.895086 master-0 kubenswrapper[4041]: E0320 08:35:08.889028 4041 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f202273a-b111-46ce-b404-7e481d2c7ff9-cert podName:f202273a-b111-46ce-b404-7e481d2c7ff9 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:09.889009722 +0000 UTC m=+117.139355287 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f202273a-b111-46ce-b404-7e481d2c7ff9-cert") pod "cluster-baremetal-operator-6f69995874-b25f2" (UID: "f202273a-b111-46ce-b404-7e481d2c7ff9") : secret "cluster-baremetal-webhook-server-cert" not found
Mar 20 08:35:08.895086 master-0 kubenswrapper[4041]: I0320 08:35:08.889051 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/f202273a-b111-46ce-b404-7e481d2c7ff9-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-b25f2\" (UID: \"f202273a-b111-46ce-b404-7e481d2c7ff9\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-b25f2"
Mar 20 08:35:08.895086 master-0 kubenswrapper[4041]: E0320 08:35:08.889058 4041 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Mar 20 08:35:08.895086 master-0 kubenswrapper[4041]: E0320 08:35:08.889097 4041 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff2dfe9d-2834-43cb-b093-0831b2b87131-metrics-tls podName:ff2dfe9d-2834-43cb-b093-0831b2b87131 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:09.889089154 +0000 UTC m=+117.139434779 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/ff2dfe9d-2834-43cb-b093-0831b2b87131-metrics-tls") pod "dns-operator-9c5679d8f-xfns6" (UID: "ff2dfe9d-2834-43cb-b093-0831b2b87131") : secret "metrics-tls" not found
Mar 20 08:35:08.895086 master-0 kubenswrapper[4041]: E0320 08:35:08.889112 4041 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found
Mar 20 08:35:08.895086 master-0 kubenswrapper[4041]: E0320 08:35:08.889151 4041 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f202273a-b111-46ce-b404-7e481d2c7ff9-cluster-baremetal-operator-tls podName:f202273a-b111-46ce-b404-7e481d2c7ff9 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:09.889140656 +0000 UTC m=+117.139486231 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/f202273a-b111-46ce-b404-7e481d2c7ff9-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-6f69995874-b25f2" (UID: "f202273a-b111-46ce-b404-7e481d2c7ff9") : secret "cluster-baremetal-operator-tls" not found
Mar 20 08:35:08.895086 master-0 kubenswrapper[4041]: I0320 08:35:08.889172 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/6d26f719-43b9-4c1c-9a54-ff800177db68-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-zxgdk\" (UID: \"6d26f719-43b9-4c1c-9a54-ff800177db68\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zxgdk"
Mar 20 08:35:08.895086 master-0 kubenswrapper[4041]: I0320 08:35:08.889209 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/23003a2f-2053-47cc-8133-23eb886d4da0-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-j84r8\" (UID: \"23003a2f-2053-47cc-8133-23eb886d4da0\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-j84r8"
Mar 20 08:35:08.895086 master-0 kubenswrapper[4041]: I0320 08:35:08.889242 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/74bebf0b-6727-4959-8239-a9389e630524-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-vhrdf\" (UID: \"74bebf0b-6727-4959-8239-a9389e630524\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-vhrdf"
Mar 20 08:35:08.895086 master-0 kubenswrapper[4041]: E0320 08:35:08.889311 4041 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Mar 20 08:35:08.895086 master-0 kubenswrapper[4041]: E0320 08:35:08.889365 4041 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d26f719-43b9-4c1c-9a54-ff800177db68-node-tuning-operator-tls podName:6d26f719-43b9-4c1c-9a54-ff800177db68 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:09.889355262 +0000 UTC m=+117.139700817 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/6d26f719-43b9-4c1c-9a54-ff800177db68-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-zxgdk" (UID: "6d26f719-43b9-4c1c-9a54-ff800177db68") : secret "node-tuning-operator-tls" not found
Mar 20 08:35:08.895086 master-0 kubenswrapper[4041]: E0320 08:35:08.889411 4041 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Mar 20 08:35:08.895086 master-0 kubenswrapper[4041]: E0320 08:35:08.889416 4041 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Mar 20 08:35:08.898865 master-0 kubenswrapper[4041]: E0320 08:35:08.889436 4041 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74bebf0b-6727-4959-8239-a9389e630524-webhook-certs podName:74bebf0b-6727-4959-8239-a9389e630524 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:09.889429714 +0000 UTC m=+117.139775219 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/74bebf0b-6727-4959-8239-a9389e630524-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-vhrdf" (UID: "74bebf0b-6727-4959-8239-a9389e630524") : secret "multus-admission-controller-secret" not found
Mar 20 08:35:08.898865 master-0 kubenswrapper[4041]: E0320 08:35:08.889451 4041 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23003a2f-2053-47cc-8133-23eb886d4da0-marketplace-operator-metrics podName:23003a2f-2053-47cc-8133-23eb886d4da0 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:09.889442824 +0000 UTC m=+117.139788329 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/23003a2f-2053-47cc-8133-23eb886d4da0-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-j84r8" (UID: "23003a2f-2053-47cc-8133-23eb886d4da0") : secret "marketplace-operator-metrics" not found
Mar 20 08:35:08.951337 master-0 kubenswrapper[4041]: I0320 08:35:08.949282 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-7x9vq"
Mar 20 08:35:08.953125 master-0 kubenswrapper[4041]: I0320 08:35:08.953074 4041 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-vmwqt"]
Mar 20 08:35:08.972359 master-0 kubenswrapper[4041]: W0320 08:35:08.971629 4041 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod65157a9b_3df7_4cc1_a85a_a5dfa59921ad.slice/crio-c29f56d4ea9bf3bce066e5fba5216f6d81c3f45eb82e43475a2e438e6dc2d99e WatchSource:0}: Error finding container c29f56d4ea9bf3bce066e5fba5216f6d81c3f45eb82e43475a2e438e6dc2d99e: Status 404 returned error can't find the container with id c29f56d4ea9bf3bce066e5fba5216f6d81c3f45eb82e43475a2e438e6dc2d99e
Mar 20 08:35:08.987100 master-0 kubenswrapper[4041]: I0320 08:35:08.987063 4041 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-kxxrs"]
Mar 20 08:35:08.998514 master-0 kubenswrapper[4041]: W0320 08:35:08.994692 4041 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20ff930f_ec0d_40ed_a879_1546691f685d.slice/crio-c1a1f09a0076728a7605f14aa2f5e1e4e67f07959fea6d30401da7eae836cc1d WatchSource:0}: Error finding container c1a1f09a0076728a7605f14aa2f5e1e4e67f07959fea6d30401da7eae836cc1d: Status 404 returned error can't find the container with id c1a1f09a0076728a7605f14aa2f5e1e4e67f07959fea6d30401da7eae836cc1d
Mar 20 08:35:09.028510 master-0 kubenswrapper[4041]: I0320 08:35:09.026478 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-w8c24"
Mar 20 08:35:09.041188 master-0 kubenswrapper[4041]: I0320 08:35:09.041130 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-wbfrm"
Mar 20 08:35:09.070586 master-0 kubenswrapper[4041]: I0320 08:35:09.069832 4041 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-95bf4f4d-25jrp"]
Mar 20 08:35:09.073509 master-0 kubenswrapper[4041]: W0320 08:35:09.073476 4041 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3065e4b4_4493_41ce_b9d2_89315475f74f.slice/crio-f13b0447f1cf8ebd279a6530a199c8c8c26e292eacc831f21854583254577b3a WatchSource:0}: Error finding container f13b0447f1cf8ebd279a6530a199c8c8c26e292eacc831f21854583254577b3a: Status 404 returned error can't find the container with id f13b0447f1cf8ebd279a6530a199c8c8c26e292eacc831f21854583254577b3a
Mar 20 08:35:09.093645 master-0 kubenswrapper[4041]: I0320 08:35:09.093612 4041 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-b5lg6"]
Mar 20 08:35:09.160910 master-0 kubenswrapper[4041]: I0320 08:35:09.160870 4041 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-8544cbcf9c-7x9vq"]
Mar 20 08:35:09.176872 master-0 kubenswrapper[4041]: W0320 08:35:09.176180 4041 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfec3170d_3f3e_42f5_b20a_da53721c0dac.slice/crio-9540823dea8e0108833218a65d98423f8d996d846bbeaa47cddd4e7ba48fd916 WatchSource:0}: Error finding container 9540823dea8e0108833218a65d98423f8d996d846bbeaa47cddd4e7ba48fd916: Status 404 returned error can't find the container with id 9540823dea8e0108833218a65d98423f8d996d846bbeaa47cddd4e7ba48fd916
Mar 20 08:35:09.218299 master-0 kubenswrapper[4041]: I0320 08:35:09.218240 4041 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-wbfrm"]
Mar 20 08:35:09.234493 master-0 kubenswrapper[4041]: I0320 08:35:09.234452 4041 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-w8c24"]
Mar 20 08:35:09.240289 master-0 kubenswrapper[4041]: W0320 08:35:09.240241 4041 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod61ab4d32_c732_4be5_aa85_a2e1dd21cb60.slice/crio-23b5f0e312ee437adb179ea398b2301b1690487e9d814d24ef554192ded477e8 WatchSource:0}: Error finding container 23b5f0e312ee437adb179ea398b2301b1690487e9d814d24ef554192ded477e8: Status 404 returned error can't find the container with id 23b5f0e312ee437adb179ea398b2301b1690487e9d814d24ef554192ded477e8
Mar 20 08:35:09.248325 master-0 kubenswrapper[4041]: I0320 08:35:09.248288 4041 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-tdpfq" event={"ID":"8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072","Type":"ContainerStarted","Data":"0118b40880c157c21da0a1b6b65535a3a28545387d34a792b84d5f5f7d802bb1"}
Mar 20 08:35:09.249526 master-0 kubenswrapper[4041]: I0320 08:35:09.249475 4041 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-gwtrl" event={"ID":"e9425526-9f51-4302-a19d-a8107f56c582","Type":"ContainerStarted","Data":"36fd86042cdc5d322b686c2b108009cab15460fe5a8fde9f08be705f3ff47a25"}
Mar 20 08:35:09.250743 master-0 kubenswrapper[4041]: I0320 08:35:09.250639 4041 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-xwkzx" event={"ID":"2faf85a2-29bb-4275-a12b-0ef1663a4f0d","Type":"ContainerStarted","Data":"ae08cd7d4b99291a81168cf2f99395c5e971d107dc0502f7bea648e012bdeade"}
Mar 20 08:35:09.250743 master-0 kubenswrapper[4041]: I0320 08:35:09.250688 4041 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-xwkzx" event={"ID":"2faf85a2-29bb-4275-a12b-0ef1663a4f0d","Type":"ContainerStarted","Data":"b3076d6176cd94c8a21c722732d97de0437f9e83160ea4c57d3d59e61e4a74e3"}
Mar 20 08:35:09.251470 master-0 kubenswrapper[4041]: I0320 08:35:09.251448 4041 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-w8c24" event={"ID":"61ab4d32-c732-4be5-aa85-a2e1dd21cb60","Type":"ContainerStarted","Data":"23b5f0e312ee437adb179ea398b2301b1690487e9d814d24ef554192ded477e8"}
Mar 20 08:35:09.252440 master-0 kubenswrapper[4041]: I0320 08:35:09.252419 4041 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-ntdqc" event={"ID":"2ef691ec-d1f0-4c97-97e4-4aa7a6c0a86c","Type":"ContainerStarted","Data":"1522904bcce5d0ac5aef96a7d518d8795f633c0cc736ad3114aa64de5474f52b"}
Mar 20 08:35:09.253365 master-0 kubenswrapper[4041]: I0320 08:35:09.253326 4041 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-25jrp" event={"ID":"3065e4b4-4493-41ce-b9d2-89315475f74f","Type":"ContainerStarted","Data":"f13b0447f1cf8ebd279a6530a199c8c8c26e292eacc831f21854583254577b3a"}
Mar 20 08:35:09.254248 master-0 kubenswrapper[4041]: I0320 08:35:09.254216 4041 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-9xlf2" event={"ID":"b097596e-79e1-44d1-be8a-96340042a041","Type":"ContainerStarted","Data":"7c5fe5a51a0646232d6aeb7457e06eaa7bb1c6097a67919150bb37fc9d450327"}
Mar 20 08:35:09.255435 master-0 kubenswrapper[4041]: I0320 08:35:09.255402 4041 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-wbfrm" event={"ID":"71ca96e8-5108-455c-bb3c-17977d38e912","Type":"ContainerStarted","Data":"4767ac5e1fdc3320e004401bc470473fa3834d94268bcd37051a5ed0f54f6980"}
Mar 20 08:35:09.256378 master-0 kubenswrapper[4041]: I0320 08:35:09.256355 4041 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-vmwqt" event={"ID":"65157a9b-3df7-4cc1-a85a-a5dfa59921ad","Type":"ContainerStarted","Data":"c29f56d4ea9bf3bce066e5fba5216f6d81c3f45eb82e43475a2e438e6dc2d99e"}
Mar 20 08:35:09.257430 master-0 kubenswrapper[4041]: I0320 08:35:09.257398 4041 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-kxxrs" event={"ID":"20ff930f-ec0d-40ed-a879-1546691f685d","Type":"ContainerStarted","Data":"c1a1f09a0076728a7605f14aa2f5e1e4e67f07959fea6d30401da7eae836cc1d"}
Mar 20 08:35:09.258416 master-0 kubenswrapper[4041]: I0320 08:35:09.258389 4041 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-7x9vq" event={"ID":"fec3170d-3f3e-42f5-b20a-da53721c0dac","Type":"ContainerStarted","Data":"9540823dea8e0108833218a65d98423f8d996d846bbeaa47cddd4e7ba48fd916"}
Mar 20 08:35:09.260342 master-0 kubenswrapper[4041]: I0320 08:35:09.259428 4041 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-b5lg6" event={"ID":"acbaba45-12d9-40b9-818c-4b091d7929b1","Type":"ContainerStarted","Data":"a5a71eafba7fd094c1b9785d7c1fd9e98b46812d646ac6843a8a763f472e8750"}
Mar 20 08:35:09.260342 master-0 kubenswrapper[4041]: I0320 08:35:09.260312 4041 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-b865698dc-zxwrd" event={"ID":"09a5682c-4f13-4b8c-8179-3e6dfa8f98db","Type":"ContainerStarted","Data":"7551d0384a0ca5d55a0e01a66e0811b519b2e2c926c179ce2206a11d57d556c3"}
Mar 20 08:35:09.336511 master-0 kubenswrapper[4041]: I0320 08:35:09.336455 4041 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-xwkzx" podStartSLOduration=80.336441324 podStartE2EDuration="1m20.336441324s" podCreationTimestamp="2026-03-20 08:33:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:35:09.336001843 +0000 UTC m=+116.586347368" watchObservedRunningTime="2026-03-20 08:35:09.336441324 +0000 UTC m=+116.586786829"
Mar 20 08:35:09.534787 master-0 kubenswrapper[4041]: I0320 08:35:09.534636 4041 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-j9jjm"
Mar 20 08:35:09.536928 master-0 kubenswrapper[4041]: I0320 08:35:09.536901 4041 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Mar 20 08:35:09.537725 master-0 kubenswrapper[4041]: I0320 08:35:09.537065 4041 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Mar 20 08:35:09.798906 master-0 kubenswrapper[4041]: I0320 08:35:09.798796 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/22f85e98-eb36-46b2-ab5d-7c21e060cba5-metrics-tls\") pod \"ingress-operator-66b84d69b-dknxr\" (UID: \"22f85e98-eb36-46b2-ab5d-7c21e060cba5\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-dknxr"
Mar 20 08:35:09.798906 master-0 kubenswrapper[4041]: I0320 08:35:09.798855 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/5707066a-bd66-41bc-8cea-cff1630ab5ee-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-6vgt6\" (UID: \"5707066a-bd66-41bc-8cea-cff1630ab5ee\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-6vgt6"
Mar 20 08:35:09.799190 master-0 kubenswrapper[4041]: E0320 08:35:09.799008 4041 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Mar 20 08:35:09.799190 master-0 kubenswrapper[4041]: E0320 08:35:09.799084 4041 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/22f85e98-eb36-46b2-ab5d-7c21e060cba5-metrics-tls podName:22f85e98-eb36-46b2-ab5d-7c21e060cba5 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:11.799066008 +0000 UTC m=+119.049411513 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/22f85e98-eb36-46b2-ab5d-7c21e060cba5-metrics-tls") pod "ingress-operator-66b84d69b-dknxr" (UID: "22f85e98-eb36-46b2-ab5d-7c21e060cba5") : secret "metrics-tls" not found
Mar 20 08:35:09.799336 master-0 kubenswrapper[4041]: E0320 08:35:09.799317 4041 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Mar 20 08:35:09.799383 master-0 kubenswrapper[4041]: E0320 08:35:09.799372 4041 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5707066a-bd66-41bc-8cea-cff1630ab5ee-cluster-monitoring-operator-tls podName:5707066a-bd66-41bc-8cea-cff1630ab5ee nodeName:}" failed. No retries permitted until 2026-03-20 08:35:11.799358166 +0000 UTC m=+119.049703671 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/5707066a-bd66-41bc-8cea-cff1630ab5ee-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-6vgt6" (UID: "5707066a-bd66-41bc-8cea-cff1630ab5ee") : secret "cluster-monitoring-operator-tls" not found
Mar 20 08:35:09.900845 master-0 kubenswrapper[4041]: I0320 08:35:09.900769 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ff2dfe9d-2834-43cb-b093-0831b2b87131-metrics-tls\") pod \"dns-operator-9c5679d8f-xfns6\" (UID: \"ff2dfe9d-2834-43cb-b093-0831b2b87131\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-xfns6"
Mar 20 08:35:09.900845 master-0 kubenswrapper[4041]: I0320 08:35:09.900815 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/f202273a-b111-46ce-b404-7e481d2c7ff9-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-b25f2\" (UID: \"f202273a-b111-46ce-b404-7e481d2c7ff9\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-b25f2"
Mar 20 08:35:09.900845 master-0 kubenswrapper[4041]: I0320 08:35:09.900834 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/6d26f719-43b9-4c1c-9a54-ff800177db68-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-zxgdk\" (UID: \"6d26f719-43b9-4c1c-9a54-ff800177db68\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zxgdk"
Mar 20 08:35:09.901317 master-0 kubenswrapper[4041]: E0320 08:35:09.900993 4041 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found
Mar 20 08:35:09.901317 master-0 kubenswrapper[4041]: I0320 08:35:09.901049 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/23003a2f-2053-47cc-8133-23eb886d4da0-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-j84r8\" (UID: \"23003a2f-2053-47cc-8133-23eb886d4da0\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-j84r8"
Mar 20 08:35:09.901317 master-0 kubenswrapper[4041]: E0320 08:35:09.901074 4041 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f202273a-b111-46ce-b404-7e481d2c7ff9-cluster-baremetal-operator-tls podName:f202273a-b111-46ce-b404-7e481d2c7ff9 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:11.901052309 +0000 UTC m=+119.151397804 (durationBeforeRetry 2s).
Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/f202273a-b111-46ce-b404-7e481d2c7ff9-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-6f69995874-b25f2" (UID: "f202273a-b111-46ce-b404-7e481d2c7ff9") : secret "cluster-baremetal-operator-tls" not found Mar 20 08:35:09.901317 master-0 kubenswrapper[4041]: E0320 08:35:09.901226 4041 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 20 08:35:09.901317 master-0 kubenswrapper[4041]: E0320 08:35:09.901324 4041 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d26f719-43b9-4c1c-9a54-ff800177db68-node-tuning-operator-tls podName:6d26f719-43b9-4c1c-9a54-ff800177db68 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:11.901299975 +0000 UTC m=+119.151645480 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/6d26f719-43b9-4c1c-9a54-ff800177db68-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-zxgdk" (UID: "6d26f719-43b9-4c1c-9a54-ff800177db68") : secret "node-tuning-operator-tls" not found Mar 20 08:35:09.901529 master-0 kubenswrapper[4041]: E0320 08:35:09.901357 4041 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 20 08:35:09.901529 master-0 kubenswrapper[4041]: E0320 08:35:09.901411 4041 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff2dfe9d-2834-43cb-b093-0831b2b87131-metrics-tls podName:ff2dfe9d-2834-43cb-b093-0831b2b87131 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:11.901394067 +0000 UTC m=+119.151739572 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/ff2dfe9d-2834-43cb-b093-0831b2b87131-metrics-tls") pod "dns-operator-9c5679d8f-xfns6" (UID: "ff2dfe9d-2834-43cb-b093-0831b2b87131") : secret "metrics-tls" not found Mar 20 08:35:09.901529 master-0 kubenswrapper[4041]: I0320 08:35:09.901443 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/74bebf0b-6727-4959-8239-a9389e630524-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-vhrdf\" (UID: \"74bebf0b-6727-4959-8239-a9389e630524\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-vhrdf" Mar 20 08:35:09.901529 master-0 kubenswrapper[4041]: E0320 08:35:09.901485 4041 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 20 08:35:09.901529 master-0 kubenswrapper[4041]: E0320 08:35:09.901508 4041 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23003a2f-2053-47cc-8133-23eb886d4da0-marketplace-operator-metrics podName:23003a2f-2053-47cc-8133-23eb886d4da0 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:11.90150131 +0000 UTC m=+119.151846815 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/23003a2f-2053-47cc-8133-23eb886d4da0-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-j84r8" (UID: "23003a2f-2053-47cc-8133-23eb886d4da0") : secret "marketplace-operator-metrics" not found Mar 20 08:35:09.901668 master-0 kubenswrapper[4041]: E0320 08:35:09.901563 4041 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 20 08:35:09.901668 master-0 kubenswrapper[4041]: E0320 08:35:09.901615 4041 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74bebf0b-6727-4959-8239-a9389e630524-webhook-certs podName:74bebf0b-6727-4959-8239-a9389e630524 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:11.901599723 +0000 UTC m=+119.151945228 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/74bebf0b-6727-4959-8239-a9389e630524-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-vhrdf" (UID: "74bebf0b-6727-4959-8239-a9389e630524") : secret "multus-admission-controller-secret" not found Mar 20 08:35:09.901668 master-0 kubenswrapper[4041]: I0320 08:35:09.901659 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6d26f719-43b9-4c1c-9a54-ff800177db68-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-zxgdk\" (UID: \"6d26f719-43b9-4c1c-9a54-ff800177db68\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zxgdk" Mar 20 08:35:09.901783 master-0 kubenswrapper[4041]: I0320 08:35:09.901681 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/0e79950f-50a5-46ec-b836-7a35dcce2851-package-server-manager-serving-cert\") pod 
\"package-server-manager-7b95f86987-cgc9q\" (UID: \"0e79950f-50a5-46ec-b836-7a35dcce2851\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-cgc9q" Mar 20 08:35:09.901783 master-0 kubenswrapper[4041]: E0320 08:35:09.901733 4041 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 20 08:35:09.901783 master-0 kubenswrapper[4041]: E0320 08:35:09.901753 4041 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0e79950f-50a5-46ec-b836-7a35dcce2851-package-server-manager-serving-cert podName:0e79950f-50a5-46ec-b836-7a35dcce2851 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:11.901747026 +0000 UTC m=+119.152092611 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/0e79950f-50a5-46ec-b836-7a35dcce2851-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-cgc9q" (UID: "0e79950f-50a5-46ec-b836-7a35dcce2851") : secret "package-server-manager-serving-cert" not found Mar 20 08:35:09.901870 master-0 kubenswrapper[4041]: E0320 08:35:09.901804 4041 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 20 08:35:09.901870 master-0 kubenswrapper[4041]: E0320 08:35:09.901846 4041 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d26f719-43b9-4c1c-9a54-ff800177db68-apiservice-cert podName:6d26f719-43b9-4c1c-9a54-ff800177db68 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:11.901835179 +0000 UTC m=+119.152180734 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/6d26f719-43b9-4c1c-9a54-ff800177db68-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-zxgdk" (UID: "6d26f719-43b9-4c1c-9a54-ff800177db68") : secret "performance-addon-operator-webhook-cert" not found Mar 20 08:35:09.901930 master-0 kubenswrapper[4041]: I0320 08:35:09.901881 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7ab32efc-7cc5-4e36-9c1c-05efb19914e2-srv-cert\") pod \"olm-operator-5c9796789-t926t\" (UID: \"7ab32efc-7cc5-4e36-9c1c-05efb19914e2\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-t926t" Mar 20 08:35:09.902132 master-0 kubenswrapper[4041]: I0320 08:35:09.901982 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9ce482dc-d0ac-40bc-9058-a1cfdc81575e-srv-cert\") pod \"catalog-operator-68f85b4d6c-hdw98\" (UID: \"9ce482dc-d0ac-40bc-9058-a1cfdc81575e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hdw98" Mar 20 08:35:09.902132 master-0 kubenswrapper[4041]: E0320 08:35:09.901993 4041 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 20 08:35:09.902132 master-0 kubenswrapper[4041]: I0320 08:35:09.902029 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/57189f7c-5987-457d-a299-0a6b9bcb3e24-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-cg8qr\" (UID: \"57189f7c-5987-457d-a299-0a6b9bcb3e24\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-cg8qr" Mar 20 08:35:09.902132 master-0 kubenswrapper[4041]: E0320 08:35:09.902036 4041 secret.go:189] Couldn't get secret 
openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 20 08:35:09.902132 master-0 kubenswrapper[4041]: E0320 08:35:09.902053 4041 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7ab32efc-7cc5-4e36-9c1c-05efb19914e2-srv-cert podName:7ab32efc-7cc5-4e36-9c1c-05efb19914e2 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:11.902044154 +0000 UTC m=+119.152389659 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/7ab32efc-7cc5-4e36-9c1c-05efb19914e2-srv-cert") pod "olm-operator-5c9796789-t926t" (UID: "7ab32efc-7cc5-4e36-9c1c-05efb19914e2") : secret "olm-operator-serving-cert" not found Mar 20 08:35:09.902132 master-0 kubenswrapper[4041]: I0320 08:35:09.902103 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f202273a-b111-46ce-b404-7e481d2c7ff9-cert\") pod \"cluster-baremetal-operator-6f69995874-b25f2\" (UID: \"f202273a-b111-46ce-b404-7e481d2c7ff9\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-b25f2" Mar 20 08:35:09.902132 master-0 kubenswrapper[4041]: E0320 08:35:09.902121 4041 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 20 08:35:09.902360 master-0 kubenswrapper[4041]: E0320 08:35:09.902165 4041 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ce482dc-d0ac-40bc-9058-a1cfdc81575e-srv-cert podName:9ce482dc-d0ac-40bc-9058-a1cfdc81575e nodeName:}" failed. No retries permitted until 2026-03-20 08:35:11.902153607 +0000 UTC m=+119.152499122 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/9ce482dc-d0ac-40bc-9058-a1cfdc81575e-srv-cert") pod "catalog-operator-68f85b4d6c-hdw98" (UID: "9ce482dc-d0ac-40bc-9058-a1cfdc81575e") : secret "catalog-operator-serving-cert" not found Mar 20 08:35:09.902360 master-0 kubenswrapper[4041]: E0320 08:35:09.902173 4041 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found Mar 20 08:35:09.902360 master-0 kubenswrapper[4041]: E0320 08:35:09.902199 4041 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/57189f7c-5987-457d-a299-0a6b9bcb3e24-image-registry-operator-tls podName:57189f7c-5987-457d-a299-0a6b9bcb3e24 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:11.902190688 +0000 UTC m=+119.152536193 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/57189f7c-5987-457d-a299-0a6b9bcb3e24-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-cg8qr" (UID: "57189f7c-5987-457d-a299-0a6b9bcb3e24") : secret "image-registry-operator-tls" not found Mar 20 08:35:09.902360 master-0 kubenswrapper[4041]: E0320 08:35:09.902213 4041 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f202273a-b111-46ce-b404-7e481d2c7ff9-cert podName:f202273a-b111-46ce-b404-7e481d2c7ff9 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:11.902207548 +0000 UTC m=+119.152553053 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f202273a-b111-46ce-b404-7e481d2c7ff9-cert") pod "cluster-baremetal-operator-6f69995874-b25f2" (UID: "f202273a-b111-46ce-b404-7e481d2c7ff9") : secret "cluster-baremetal-webhook-server-cert" not found Mar 20 08:35:11.821909 master-0 kubenswrapper[4041]: I0320 08:35:11.821842 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/22f85e98-eb36-46b2-ab5d-7c21e060cba5-metrics-tls\") pod \"ingress-operator-66b84d69b-dknxr\" (UID: \"22f85e98-eb36-46b2-ab5d-7c21e060cba5\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-dknxr" Mar 20 08:35:11.822710 master-0 kubenswrapper[4041]: E0320 08:35:11.822030 4041 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 20 08:35:11.822710 master-0 kubenswrapper[4041]: I0320 08:35:11.822053 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/5707066a-bd66-41bc-8cea-cff1630ab5ee-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-6vgt6\" (UID: \"5707066a-bd66-41bc-8cea-cff1630ab5ee\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-6vgt6" Mar 20 08:35:11.822710 master-0 kubenswrapper[4041]: E0320 08:35:11.822112 4041 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/22f85e98-eb36-46b2-ab5d-7c21e060cba5-metrics-tls podName:22f85e98-eb36-46b2-ab5d-7c21e060cba5 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:15.822091032 +0000 UTC m=+123.072436617 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/22f85e98-eb36-46b2-ab5d-7c21e060cba5-metrics-tls") pod "ingress-operator-66b84d69b-dknxr" (UID: "22f85e98-eb36-46b2-ab5d-7c21e060cba5") : secret "metrics-tls" not found Mar 20 08:35:11.822710 master-0 kubenswrapper[4041]: E0320 08:35:11.822231 4041 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 20 08:35:11.822710 master-0 kubenswrapper[4041]: E0320 08:35:11.822311 4041 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5707066a-bd66-41bc-8cea-cff1630ab5ee-cluster-monitoring-operator-tls podName:5707066a-bd66-41bc-8cea-cff1630ab5ee nodeName:}" failed. No retries permitted until 2026-03-20 08:35:15.822291097 +0000 UTC m=+123.072636682 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/5707066a-bd66-41bc-8cea-cff1630ab5ee-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-6vgt6" (UID: "5707066a-bd66-41bc-8cea-cff1630ab5ee") : secret "cluster-monitoring-operator-tls" not found Mar 20 08:35:11.923042 master-0 kubenswrapper[4041]: I0320 08:35:11.922984 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9ce482dc-d0ac-40bc-9058-a1cfdc81575e-srv-cert\") pod \"catalog-operator-68f85b4d6c-hdw98\" (UID: \"9ce482dc-d0ac-40bc-9058-a1cfdc81575e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hdw98" Mar 20 08:35:11.923224 master-0 kubenswrapper[4041]: I0320 08:35:11.923090 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/57189f7c-5987-457d-a299-0a6b9bcb3e24-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-cg8qr\" (UID: 
\"57189f7c-5987-457d-a299-0a6b9bcb3e24\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-cg8qr" Mar 20 08:35:11.923224 master-0 kubenswrapper[4041]: E0320 08:35:11.923138 4041 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 20 08:35:11.923300 master-0 kubenswrapper[4041]: E0320 08:35:11.923236 4041 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ce482dc-d0ac-40bc-9058-a1cfdc81575e-srv-cert podName:9ce482dc-d0ac-40bc-9058-a1cfdc81575e nodeName:}" failed. No retries permitted until 2026-03-20 08:35:15.923213341 +0000 UTC m=+123.173558886 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/9ce482dc-d0ac-40bc-9058-a1cfdc81575e-srv-cert") pod "catalog-operator-68f85b4d6c-hdw98" (UID: "9ce482dc-d0ac-40bc-9058-a1cfdc81575e") : secret "catalog-operator-serving-cert" not found Mar 20 08:35:11.923336 master-0 kubenswrapper[4041]: E0320 08:35:11.923284 4041 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 20 08:35:11.923336 master-0 kubenswrapper[4041]: I0320 08:35:11.923319 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f202273a-b111-46ce-b404-7e481d2c7ff9-cert\") pod \"cluster-baremetal-operator-6f69995874-b25f2\" (UID: \"f202273a-b111-46ce-b404-7e481d2c7ff9\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-b25f2" Mar 20 08:35:11.923405 master-0 kubenswrapper[4041]: I0320 08:35:11.923351 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ff2dfe9d-2834-43cb-b093-0831b2b87131-metrics-tls\") pod \"dns-operator-9c5679d8f-xfns6\" (UID: \"ff2dfe9d-2834-43cb-b093-0831b2b87131\") " 
pod="openshift-dns-operator/dns-operator-9c5679d8f-xfns6" Mar 20 08:35:11.923405 master-0 kubenswrapper[4041]: E0320 08:35:11.923366 4041 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/57189f7c-5987-457d-a299-0a6b9bcb3e24-image-registry-operator-tls podName:57189f7c-5987-457d-a299-0a6b9bcb3e24 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:15.923347604 +0000 UTC m=+123.173693109 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/57189f7c-5987-457d-a299-0a6b9bcb3e24-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-cg8qr" (UID: "57189f7c-5987-457d-a299-0a6b9bcb3e24") : secret "image-registry-operator-tls" not found Mar 20 08:35:11.923405 master-0 kubenswrapper[4041]: E0320 08:35:11.923396 4041 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 20 08:35:11.923405 master-0 kubenswrapper[4041]: I0320 08:35:11.923396 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/6d26f719-43b9-4c1c-9a54-ff800177db68-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-zxgdk\" (UID: \"6d26f719-43b9-4c1c-9a54-ff800177db68\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zxgdk" Mar 20 08:35:11.923505 master-0 kubenswrapper[4041]: E0320 08:35:11.923425 4041 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff2dfe9d-2834-43cb-b093-0831b2b87131-metrics-tls podName:ff2dfe9d-2834-43cb-b093-0831b2b87131 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:15.923416236 +0000 UTC m=+123.173761741 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/ff2dfe9d-2834-43cb-b093-0831b2b87131-metrics-tls") pod "dns-operator-9c5679d8f-xfns6" (UID: "ff2dfe9d-2834-43cb-b093-0831b2b87131") : secret "metrics-tls" not found Mar 20 08:35:11.923505 master-0 kubenswrapper[4041]: I0320 08:35:11.923441 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/f202273a-b111-46ce-b404-7e481d2c7ff9-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-b25f2\" (UID: \"f202273a-b111-46ce-b404-7e481d2c7ff9\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-b25f2" Mar 20 08:35:11.923505 master-0 kubenswrapper[4041]: E0320 08:35:11.923451 4041 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 20 08:35:11.923505 master-0 kubenswrapper[4041]: I0320 08:35:11.923464 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/23003a2f-2053-47cc-8133-23eb886d4da0-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-j84r8\" (UID: \"23003a2f-2053-47cc-8133-23eb886d4da0\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-j84r8" Mar 20 08:35:11.923505 master-0 kubenswrapper[4041]: E0320 08:35:11.923474 4041 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d26f719-43b9-4c1c-9a54-ff800177db68-node-tuning-operator-tls podName:6d26f719-43b9-4c1c-9a54-ff800177db68 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:15.923467657 +0000 UTC m=+123.173813272 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/6d26f719-43b9-4c1c-9a54-ff800177db68-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-zxgdk" (UID: "6d26f719-43b9-4c1c-9a54-ff800177db68") : secret "node-tuning-operator-tls" not found Mar 20 08:35:11.923505 master-0 kubenswrapper[4041]: I0320 08:35:11.923486 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/74bebf0b-6727-4959-8239-a9389e630524-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-vhrdf\" (UID: \"74bebf0b-6727-4959-8239-a9389e630524\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-vhrdf" Mar 20 08:35:11.923678 master-0 kubenswrapper[4041]: E0320 08:35:11.923517 4041 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found Mar 20 08:35:11.923678 master-0 kubenswrapper[4041]: I0320 08:35:11.923529 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6d26f719-43b9-4c1c-9a54-ff800177db68-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-zxgdk\" (UID: \"6d26f719-43b9-4c1c-9a54-ff800177db68\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zxgdk" Mar 20 08:35:11.923678 master-0 kubenswrapper[4041]: E0320 08:35:11.923537 4041 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f202273a-b111-46ce-b404-7e481d2c7ff9-cluster-baremetal-operator-tls podName:f202273a-b111-46ce-b404-7e481d2c7ff9 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:15.923531559 +0000 UTC m=+123.173877064 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/f202273a-b111-46ce-b404-7e481d2c7ff9-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-6f69995874-b25f2" (UID: "f202273a-b111-46ce-b404-7e481d2c7ff9") : secret "cluster-baremetal-operator-tls" not found Mar 20 08:35:11.923678 master-0 kubenswrapper[4041]: I0320 08:35:11.923548 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/0e79950f-50a5-46ec-b836-7a35dcce2851-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-cgc9q\" (UID: \"0e79950f-50a5-46ec-b836-7a35dcce2851\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-cgc9q" Mar 20 08:35:11.923678 master-0 kubenswrapper[4041]: I0320 08:35:11.923567 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7ab32efc-7cc5-4e36-9c1c-05efb19914e2-srv-cert\") pod \"olm-operator-5c9796789-t926t\" (UID: \"7ab32efc-7cc5-4e36-9c1c-05efb19914e2\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-t926t" Mar 20 08:35:11.923678 master-0 kubenswrapper[4041]: E0320 08:35:11.923570 4041 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found Mar 20 08:35:11.923678 master-0 kubenswrapper[4041]: E0320 08:35:11.923591 4041 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f202273a-b111-46ce-b404-7e481d2c7ff9-cert podName:f202273a-b111-46ce-b404-7e481d2c7ff9 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:15.92358466 +0000 UTC m=+123.173930165 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f202273a-b111-46ce-b404-7e481d2c7ff9-cert") pod "cluster-baremetal-operator-6f69995874-b25f2" (UID: "f202273a-b111-46ce-b404-7e481d2c7ff9") : secret "cluster-baremetal-webhook-server-cert" not found
Mar 20 08:35:11.923678 master-0 kubenswrapper[4041]: E0320 08:35:11.923638 4041 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Mar 20 08:35:11.923678 master-0 kubenswrapper[4041]: E0320 08:35:11.923641 4041 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Mar 20 08:35:11.923678 master-0 kubenswrapper[4041]: E0320 08:35:11.923660 4041 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0e79950f-50a5-46ec-b836-7a35dcce2851-package-server-manager-serving-cert podName:0e79950f-50a5-46ec-b836-7a35dcce2851 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:15.923653912 +0000 UTC m=+123.173999417 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/0e79950f-50a5-46ec-b836-7a35dcce2851-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-cgc9q" (UID: "0e79950f-50a5-46ec-b836-7a35dcce2851") : secret "package-server-manager-serving-cert" not found
Mar 20 08:35:11.923678 master-0 kubenswrapper[4041]: E0320 08:35:11.923673 4041 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23003a2f-2053-47cc-8133-23eb886d4da0-marketplace-operator-metrics podName:23003a2f-2053-47cc-8133-23eb886d4da0 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:15.923666602 +0000 UTC m=+123.174012107 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/23003a2f-2053-47cc-8133-23eb886d4da0-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-j84r8" (UID: "23003a2f-2053-47cc-8133-23eb886d4da0") : secret "marketplace-operator-metrics" not found
Mar 20 08:35:11.923678 master-0 kubenswrapper[4041]: E0320 08:35:11.923696 4041 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Mar 20 08:35:11.924118 master-0 kubenswrapper[4041]: E0320 08:35:11.923714 4041 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7ab32efc-7cc5-4e36-9c1c-05efb19914e2-srv-cert podName:7ab32efc-7cc5-4e36-9c1c-05efb19914e2 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:15.923708973 +0000 UTC m=+123.174054478 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/7ab32efc-7cc5-4e36-9c1c-05efb19914e2-srv-cert") pod "olm-operator-5c9796789-t926t" (UID: "7ab32efc-7cc5-4e36-9c1c-05efb19914e2") : secret "olm-operator-serving-cert" not found
Mar 20 08:35:11.924118 master-0 kubenswrapper[4041]: E0320 08:35:11.923873 4041 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Mar 20 08:35:11.924118 master-0 kubenswrapper[4041]: E0320 08:35:11.923877 4041 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Mar 20 08:35:11.924118 master-0 kubenswrapper[4041]: E0320 08:35:11.923901 4041 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74bebf0b-6727-4959-8239-a9389e630524-webhook-certs podName:74bebf0b-6727-4959-8239-a9389e630524 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:15.923894388 +0000 UTC m=+123.174239893 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/74bebf0b-6727-4959-8239-a9389e630524-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-vhrdf" (UID: "74bebf0b-6727-4959-8239-a9389e630524") : secret "multus-admission-controller-secret" not found
Mar 20 08:35:11.924118 master-0 kubenswrapper[4041]: E0320 08:35:11.923932 4041 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d26f719-43b9-4c1c-9a54-ff800177db68-apiservice-cert podName:6d26f719-43b9-4c1c-9a54-ff800177db68 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:15.923914338 +0000 UTC m=+123.174259883 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/6d26f719-43b9-4c1c-9a54-ff800177db68-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-zxgdk" (UID: "6d26f719-43b9-4c1c-9a54-ff800177db68") : secret "performance-addon-operator-webhook-cert" not found
Mar 20 08:35:15.865211 master-0 kubenswrapper[4041]: I0320 08:35:15.864778 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/22f85e98-eb36-46b2-ab5d-7c21e060cba5-metrics-tls\") pod \"ingress-operator-66b84d69b-dknxr\" (UID: \"22f85e98-eb36-46b2-ab5d-7c21e060cba5\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-dknxr"
Mar 20 08:35:15.865211 master-0 kubenswrapper[4041]: I0320 08:35:15.865228 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/5707066a-bd66-41bc-8cea-cff1630ab5ee-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-6vgt6\" (UID: \"5707066a-bd66-41bc-8cea-cff1630ab5ee\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-6vgt6"
Mar 20 08:35:15.866649 master-0 kubenswrapper[4041]: E0320 08:35:15.865066 4041 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Mar 20 08:35:15.866649 master-0 kubenswrapper[4041]: E0320 08:35:15.865418 4041 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/22f85e98-eb36-46b2-ab5d-7c21e060cba5-metrics-tls podName:22f85e98-eb36-46b2-ab5d-7c21e060cba5 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:23.86539071 +0000 UTC m=+131.115736255 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/22f85e98-eb36-46b2-ab5d-7c21e060cba5-metrics-tls") pod "ingress-operator-66b84d69b-dknxr" (UID: "22f85e98-eb36-46b2-ab5d-7c21e060cba5") : secret "metrics-tls" not found
Mar 20 08:35:15.866649 master-0 kubenswrapper[4041]: E0320 08:35:15.865448 4041 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Mar 20 08:35:15.866649 master-0 kubenswrapper[4041]: E0320 08:35:15.865486 4041 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5707066a-bd66-41bc-8cea-cff1630ab5ee-cluster-monitoring-operator-tls podName:5707066a-bd66-41bc-8cea-cff1630ab5ee nodeName:}" failed. No retries permitted until 2026-03-20 08:35:23.865474762 +0000 UTC m=+131.115820287 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/5707066a-bd66-41bc-8cea-cff1630ab5ee-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-6vgt6" (UID: "5707066a-bd66-41bc-8cea-cff1630ab5ee") : secret "cluster-monitoring-operator-tls" not found
Mar 20 08:35:15.966054 master-0 kubenswrapper[4041]: I0320 08:35:15.965952 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/23003a2f-2053-47cc-8133-23eb886d4da0-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-j84r8\" (UID: \"23003a2f-2053-47cc-8133-23eb886d4da0\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-j84r8"
Mar 20 08:35:15.966054 master-0 kubenswrapper[4041]: I0320 08:35:15.966040 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/74bebf0b-6727-4959-8239-a9389e630524-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-vhrdf\" (UID: \"74bebf0b-6727-4959-8239-a9389e630524\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-vhrdf"
Mar 20 08:35:15.966540 master-0 kubenswrapper[4041]: I0320 08:35:15.966144 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6d26f719-43b9-4c1c-9a54-ff800177db68-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-zxgdk\" (UID: \"6d26f719-43b9-4c1c-9a54-ff800177db68\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zxgdk"
Mar 20 08:35:15.966540 master-0 kubenswrapper[4041]: E0320 08:35:15.966177 4041 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Mar 20 08:35:15.966540 master-0 kubenswrapper[4041]: E0320 08:35:15.966256 4041 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23003a2f-2053-47cc-8133-23eb886d4da0-marketplace-operator-metrics podName:23003a2f-2053-47cc-8133-23eb886d4da0 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:23.96623361 +0000 UTC m=+131.216579125 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/23003a2f-2053-47cc-8133-23eb886d4da0-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-j84r8" (UID: "23003a2f-2053-47cc-8133-23eb886d4da0") : secret "marketplace-operator-metrics" not found
Mar 20 08:35:15.966540 master-0 kubenswrapper[4041]: E0320 08:35:15.966257 4041 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Mar 20 08:35:15.966540 master-0 kubenswrapper[4041]: I0320 08:35:15.966180 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/0e79950f-50a5-46ec-b836-7a35dcce2851-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-cgc9q\" (UID: \"0e79950f-50a5-46ec-b836-7a35dcce2851\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-cgc9q"
Mar 20 08:35:15.966540 master-0 kubenswrapper[4041]: E0320 08:35:15.966336 4041 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0e79950f-50a5-46ec-b836-7a35dcce2851-package-server-manager-serving-cert podName:0e79950f-50a5-46ec-b836-7a35dcce2851 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:23.966320892 +0000 UTC m=+131.216666417 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/0e79950f-50a5-46ec-b836-7a35dcce2851-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-cgc9q" (UID: "0e79950f-50a5-46ec-b836-7a35dcce2851") : secret "package-server-manager-serving-cert" not found
Mar 20 08:35:15.966540 master-0 kubenswrapper[4041]: E0320 08:35:15.966475 4041 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Mar 20 08:35:15.966540 master-0 kubenswrapper[4041]: I0320 08:35:15.966491 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7ab32efc-7cc5-4e36-9c1c-05efb19914e2-srv-cert\") pod \"olm-operator-5c9796789-t926t\" (UID: \"7ab32efc-7cc5-4e36-9c1c-05efb19914e2\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-t926t"
Mar 20 08:35:15.967303 master-0 kubenswrapper[4041]: E0320 08:35:15.966583 4041 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d26f719-43b9-4c1c-9a54-ff800177db68-apiservice-cert podName:6d26f719-43b9-4c1c-9a54-ff800177db68 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:23.966552638 +0000 UTC m=+131.216898213 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/6d26f719-43b9-4c1c-9a54-ff800177db68-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-zxgdk" (UID: "6d26f719-43b9-4c1c-9a54-ff800177db68") : secret "performance-addon-operator-webhook-cert" not found
Mar 20 08:35:15.967303 master-0 kubenswrapper[4041]: E0320 08:35:15.966638 4041 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Mar 20 08:35:15.967303 master-0 kubenswrapper[4041]: E0320 08:35:15.966704 4041 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7ab32efc-7cc5-4e36-9c1c-05efb19914e2-srv-cert podName:7ab32efc-7cc5-4e36-9c1c-05efb19914e2 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:23.966680542 +0000 UTC m=+131.217026087 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/7ab32efc-7cc5-4e36-9c1c-05efb19914e2-srv-cert") pod "olm-operator-5c9796789-t926t" (UID: "7ab32efc-7cc5-4e36-9c1c-05efb19914e2") : secret "olm-operator-serving-cert" not found
Mar 20 08:35:15.967303 master-0 kubenswrapper[4041]: E0320 08:35:15.966738 4041 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Mar 20 08:35:15.967303 master-0 kubenswrapper[4041]: E0320 08:35:15.966830 4041 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found
Mar 20 08:35:15.967303 master-0 kubenswrapper[4041]: I0320 08:35:15.966746 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9ce482dc-d0ac-40bc-9058-a1cfdc81575e-srv-cert\") pod \"catalog-operator-68f85b4d6c-hdw98\" (UID: \"9ce482dc-d0ac-40bc-9058-a1cfdc81575e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hdw98"
Mar 20 08:35:15.967303 master-0 kubenswrapper[4041]: E0320 08:35:15.966886 4041 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74bebf0b-6727-4959-8239-a9389e630524-webhook-certs podName:74bebf0b-6727-4959-8239-a9389e630524 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:23.966844536 +0000 UTC m=+131.217190081 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/74bebf0b-6727-4959-8239-a9389e630524-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-vhrdf" (UID: "74bebf0b-6727-4959-8239-a9389e630524") : secret "multus-admission-controller-secret" not found
Mar 20 08:35:15.967303 master-0 kubenswrapper[4041]: I0320 08:35:15.966957 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/57189f7c-5987-457d-a299-0a6b9bcb3e24-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-cg8qr\" (UID: \"57189f7c-5987-457d-a299-0a6b9bcb3e24\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-cg8qr"
Mar 20 08:35:15.967303 master-0 kubenswrapper[4041]: E0320 08:35:15.966985 4041 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ce482dc-d0ac-40bc-9058-a1cfdc81575e-srv-cert podName:9ce482dc-d0ac-40bc-9058-a1cfdc81575e nodeName:}" failed. No retries permitted until 2026-03-20 08:35:23.966963509 +0000 UTC m=+131.217309064 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/9ce482dc-d0ac-40bc-9058-a1cfdc81575e-srv-cert") pod "catalog-operator-68f85b4d6c-hdw98" (UID: "9ce482dc-d0ac-40bc-9058-a1cfdc81575e") : secret "catalog-operator-serving-cert" not found
Mar 20 08:35:15.967303 master-0 kubenswrapper[4041]: I0320 08:35:15.967029 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f202273a-b111-46ce-b404-7e481d2c7ff9-cert\") pod \"cluster-baremetal-operator-6f69995874-b25f2\" (UID: \"f202273a-b111-46ce-b404-7e481d2c7ff9\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-b25f2"
Mar 20 08:35:15.967303 master-0 kubenswrapper[4041]: E0320 08:35:15.967057 4041 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Mar 20 08:35:15.967303 master-0 kubenswrapper[4041]: I0320 08:35:15.967090 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ff2dfe9d-2834-43cb-b093-0831b2b87131-metrics-tls\") pod \"dns-operator-9c5679d8f-xfns6\" (UID: \"ff2dfe9d-2834-43cb-b093-0831b2b87131\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-xfns6"
Mar 20 08:35:15.967303 master-0 kubenswrapper[4041]: E0320 08:35:15.967099 4041 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/57189f7c-5987-457d-a299-0a6b9bcb3e24-image-registry-operator-tls podName:57189f7c-5987-457d-a299-0a6b9bcb3e24 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:23.967084912 +0000 UTC m=+131.217430447 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/57189f7c-5987-457d-a299-0a6b9bcb3e24-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-cg8qr" (UID: "57189f7c-5987-457d-a299-0a6b9bcb3e24") : secret "image-registry-operator-tls" not found
Mar 20 08:35:15.967303 master-0 kubenswrapper[4041]: E0320 08:35:15.967201 4041 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Mar 20 08:35:15.967303 master-0 kubenswrapper[4041]: E0320 08:35:15.967231 4041 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found
Mar 20 08:35:15.968503 master-0 kubenswrapper[4041]: E0320 08:35:15.967251 4041 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff2dfe9d-2834-43cb-b093-0831b2b87131-metrics-tls podName:ff2dfe9d-2834-43cb-b093-0831b2b87131 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:23.967232216 +0000 UTC m=+131.217577771 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/ff2dfe9d-2834-43cb-b093-0831b2b87131-metrics-tls") pod "dns-operator-9c5679d8f-xfns6" (UID: "ff2dfe9d-2834-43cb-b093-0831b2b87131") : secret "metrics-tls" not found
Mar 20 08:35:15.968503 master-0 kubenswrapper[4041]: E0320 08:35:15.967375 4041 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f202273a-b111-46ce-b404-7e481d2c7ff9-cert podName:f202273a-b111-46ce-b404-7e481d2c7ff9 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:23.967349869 +0000 UTC m=+131.217695424 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f202273a-b111-46ce-b404-7e481d2c7ff9-cert") pod "cluster-baremetal-operator-6f69995874-b25f2" (UID: "f202273a-b111-46ce-b404-7e481d2c7ff9") : secret "cluster-baremetal-webhook-server-cert" not found
Mar 20 08:35:15.968503 master-0 kubenswrapper[4041]: I0320 08:35:15.967451 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/f202273a-b111-46ce-b404-7e481d2c7ff9-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-b25f2\" (UID: \"f202273a-b111-46ce-b404-7e481d2c7ff9\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-b25f2"
Mar 20 08:35:15.968503 master-0 kubenswrapper[4041]: I0320 08:35:15.967530 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/6d26f719-43b9-4c1c-9a54-ff800177db68-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-zxgdk\" (UID: \"6d26f719-43b9-4c1c-9a54-ff800177db68\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zxgdk"
Mar 20 08:35:15.968503 master-0 kubenswrapper[4041]: E0320 08:35:15.967563 4041 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found
Mar 20 08:35:15.968503 master-0 kubenswrapper[4041]: E0320 08:35:15.967631 4041 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f202273a-b111-46ce-b404-7e481d2c7ff9-cluster-baremetal-operator-tls podName:f202273a-b111-46ce-b404-7e481d2c7ff9 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:23.967617545 +0000 UTC m=+131.217963070 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/f202273a-b111-46ce-b404-7e481d2c7ff9-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-6f69995874-b25f2" (UID: "f202273a-b111-46ce-b404-7e481d2c7ff9") : secret "cluster-baremetal-operator-tls" not found
Mar 20 08:35:15.968503 master-0 kubenswrapper[4041]: E0320 08:35:15.967729 4041 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Mar 20 08:35:15.968503 master-0 kubenswrapper[4041]: E0320 08:35:15.967815 4041 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d26f719-43b9-4c1c-9a54-ff800177db68-node-tuning-operator-tls podName:6d26f719-43b9-4c1c-9a54-ff800177db68 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:23.96779379 +0000 UTC m=+131.218139335 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/6d26f719-43b9-4c1c-9a54-ff800177db68-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-zxgdk" (UID: "6d26f719-43b9-4c1c-9a54-ff800177db68") : secret "node-tuning-operator-tls" not found
Mar 20 08:35:16.473191 master-0 kubenswrapper[4041]: I0320 08:35:16.473077 4041 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/00350ac7-b40a-4459-b94c-a37d7b613645-metrics-certs\") pod \"network-metrics-daemon-nfrth\" (UID: \"00350ac7-b40a-4459-b94c-a37d7b613645\") " pod="openshift-multus/network-metrics-daemon-nfrth"
Mar 20 08:35:16.475817 master-0 kubenswrapper[4041]: I0320 08:35:16.475774 4041 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Mar 20 08:35:16.484093 master-0 kubenswrapper[4041]: E0320 08:35:16.484041 4041 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found
Mar 20 08:35:16.484234 master-0 kubenswrapper[4041]: E0320 08:35:16.484136 4041 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/00350ac7-b40a-4459-b94c-a37d7b613645-metrics-certs podName:00350ac7-b40a-4459-b94c-a37d7b613645 nodeName:}" failed. No retries permitted until 2026-03-20 08:36:20.484108255 +0000 UTC m=+187.734453780 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/00350ac7-b40a-4459-b94c-a37d7b613645-metrics-certs") pod "network-metrics-daemon-nfrth" (UID: "00350ac7-b40a-4459-b94c-a37d7b613645") : secret "metrics-daemon-secret" not found
Mar 20 08:35:18.888485 master-0 systemd[1]: Stopping Kubernetes Kubelet...
Mar 20 08:35:18.918107 master-0 systemd[1]: kubelet.service: Deactivated successfully.
Mar 20 08:35:18.918600 master-0 systemd[1]: Stopped Kubernetes Kubelet.
Mar 20 08:35:18.923413 master-0 systemd[1]: kubelet.service: Consumed 9.888s CPU time.
Mar 20 08:35:18.944378 master-0 systemd[1]: Starting Kubernetes Kubelet...
Mar 20 08:35:19.077016 master-0 kubenswrapper[7476]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 20 08:35:19.077016 master-0 kubenswrapper[7476]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Mar 20 08:35:19.077016 master-0 kubenswrapper[7476]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 20 08:35:19.077016 master-0 kubenswrapper[7476]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 20 08:35:19.077016 master-0 kubenswrapper[7476]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 20 08:35:19.077016 master-0 kubenswrapper[7476]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 20 08:35:19.078143 master-0 kubenswrapper[7476]: I0320 08:35:19.077133 7476 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 20 08:35:19.080306 master-0 kubenswrapper[7476]: W0320 08:35:19.080282 7476 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 20 08:35:19.080306 master-0 kubenswrapper[7476]: W0320 08:35:19.080304 7476 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 20 08:35:19.080389 master-0 kubenswrapper[7476]: W0320 08:35:19.080313 7476 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 20 08:35:19.080389 master-0 kubenswrapper[7476]: W0320 08:35:19.080320 7476 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 20 08:35:19.080389 master-0 kubenswrapper[7476]: W0320 08:35:19.080327 7476 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 20 08:35:19.080389 master-0 kubenswrapper[7476]: W0320 08:35:19.080334 7476 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 20 08:35:19.080389 master-0 kubenswrapper[7476]: W0320 08:35:19.080340 7476 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 20 08:35:19.080389 master-0 kubenswrapper[7476]: W0320 08:35:19.080345 7476 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 20 08:35:19.080389 master-0 kubenswrapper[7476]: W0320 08:35:19.080352 7476 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 20 08:35:19.080389 master-0 kubenswrapper[7476]: W0320 08:35:19.080359 7476 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 20 08:35:19.080389 master-0 kubenswrapper[7476]: W0320 08:35:19.080366 7476 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 20 08:35:19.080389 master-0 kubenswrapper[7476]: W0320 08:35:19.080372 7476 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 20 08:35:19.080389 master-0 kubenswrapper[7476]: W0320 08:35:19.080377 7476 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 20 08:35:19.080389 master-0 kubenswrapper[7476]: W0320 08:35:19.080382 7476 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 20 08:35:19.080389 master-0 kubenswrapper[7476]: W0320 08:35:19.080387 7476 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 20 08:35:19.080389 master-0 kubenswrapper[7476]: W0320 08:35:19.080392 7476 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 20 08:35:19.080389 master-0 kubenswrapper[7476]: W0320 08:35:19.080399 7476 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 20 08:35:19.080389 master-0 kubenswrapper[7476]: W0320 08:35:19.080404 7476 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 20 08:35:19.080957 master-0 kubenswrapper[7476]: W0320 08:35:19.080411 7476 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 20 08:35:19.080957 master-0 kubenswrapper[7476]: W0320 08:35:19.080418 7476 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 20 08:35:19.080957 master-0 kubenswrapper[7476]: W0320 08:35:19.080424 7476 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 20 08:35:19.080957 master-0 kubenswrapper[7476]: W0320 08:35:19.080431 7476 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 20 08:35:19.080957 master-0 kubenswrapper[7476]: W0320 08:35:19.080437 7476 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 20 08:35:19.080957 master-0 kubenswrapper[7476]: W0320 08:35:19.080443 7476 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 20 08:35:19.080957 master-0 kubenswrapper[7476]: W0320 08:35:19.080448 7476 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 20 08:35:19.080957 master-0 kubenswrapper[7476]: W0320 08:35:19.080453 7476 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 20 08:35:19.080957 master-0 kubenswrapper[7476]: W0320 08:35:19.080459 7476 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 20 08:35:19.080957 master-0 kubenswrapper[7476]: W0320 08:35:19.080464 7476 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 20 08:35:19.080957 master-0 kubenswrapper[7476]: W0320 08:35:19.080469 7476 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 20 08:35:19.080957 master-0 kubenswrapper[7476]: W0320 08:35:19.080474 7476 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 20 08:35:19.080957 master-0 kubenswrapper[7476]: W0320 08:35:19.080479 7476 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 20 08:35:19.080957 master-0 kubenswrapper[7476]: W0320 08:35:19.080484 7476 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 20 08:35:19.080957 master-0 kubenswrapper[7476]: W0320 08:35:19.080489 7476 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 20 08:35:19.080957 master-0 kubenswrapper[7476]: W0320 08:35:19.080494 7476 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 20 08:35:19.080957 master-0 kubenswrapper[7476]: W0320 08:35:19.080499 7476 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 20 08:35:19.080957 master-0 kubenswrapper[7476]: W0320 08:35:19.080504 7476 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 20 08:35:19.080957 master-0 kubenswrapper[7476]: W0320 08:35:19.080508 7476 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 20 08:35:19.081426 master-0 kubenswrapper[7476]: W0320 08:35:19.080514 7476 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 20 08:35:19.081426 master-0 kubenswrapper[7476]: W0320 08:35:19.080519 7476 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 20 08:35:19.081426 master-0 kubenswrapper[7476]: W0320 08:35:19.080524 7476 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 20 08:35:19.081426 master-0 kubenswrapper[7476]: W0320 08:35:19.080529 7476 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 20 08:35:19.081426 master-0 kubenswrapper[7476]: W0320 08:35:19.080533 7476 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 20 08:35:19.081426 master-0 kubenswrapper[7476]: W0320 08:35:19.080539 7476 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 20 08:35:19.081426 master-0 kubenswrapper[7476]: W0320 08:35:19.080544 7476 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 20 08:35:19.081426 master-0 kubenswrapper[7476]: W0320 08:35:19.080549 7476 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 20 08:35:19.081426 master-0 kubenswrapper[7476]: W0320 08:35:19.080553 7476 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 20 08:35:19.081426 master-0 kubenswrapper[7476]: W0320 08:35:19.080558 7476 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 20 08:35:19.081426 master-0 kubenswrapper[7476]: W0320 08:35:19.080563 7476 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 20 08:35:19.081426 master-0 kubenswrapper[7476]: W0320 08:35:19.080568 7476 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 20 08:35:19.081426 master-0 kubenswrapper[7476]: W0320 08:35:19.080574 7476 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 20 08:35:19.081426 master-0 kubenswrapper[7476]: W0320 08:35:19.080579 7476 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 20 08:35:19.081426 master-0 kubenswrapper[7476]: W0320 08:35:19.080586 7476 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 20 08:35:19.081426 master-0 kubenswrapper[7476]: W0320 08:35:19.080591 7476 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 20 08:35:19.081426 master-0 kubenswrapper[7476]: W0320 08:35:19.080596 7476 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 20 08:35:19.081426 master-0 kubenswrapper[7476]: W0320 08:35:19.080601 7476 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 20 08:35:19.081426 master-0 kubenswrapper[7476]: W0320 08:35:19.080608 7476 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 20 08:35:19.081426 master-0 kubenswrapper[7476]: W0320 08:35:19.080614 7476 feature_gate.go:330] unrecognized feature gate: Example
Mar 20 08:35:19.082074 master-0 kubenswrapper[7476]: W0320 08:35:19.080619 7476 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 20 08:35:19.082074 master-0 kubenswrapper[7476]: W0320 08:35:19.080624 7476 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 20 08:35:19.082074 master-0 kubenswrapper[7476]: W0320 08:35:19.080629 7476 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 20 08:35:19.082074 master-0 kubenswrapper[7476]: W0320 08:35:19.080634 7476 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 20 08:35:19.082074 master-0 kubenswrapper[7476]: W0320 08:35:19.080638 7476 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 20 08:35:19.082074 master-0 kubenswrapper[7476]: W0320 08:35:19.080643 7476 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 20 08:35:19.082074 master-0 kubenswrapper[7476]: W0320 08:35:19.080648 7476 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 20 08:35:19.082074 master-0 kubenswrapper[7476]: W0320 08:35:19.080654 7476 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 20 08:35:19.082074 master-0 kubenswrapper[7476]: W0320 08:35:19.080659 7476 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 20 08:35:19.082074 master-0 kubenswrapper[7476]: W0320 08:35:19.080664 7476 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 20 08:35:19.082074 master-0 kubenswrapper[7476]: W0320 08:35:19.080669 7476 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 20 08:35:19.082074 master-0 kubenswrapper[7476]: W0320 08:35:19.080674 7476 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 20 08:35:19.082074 master-0 kubenswrapper[7476]: W0320 08:35:19.080679 7476 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 20 08:35:19.082074 master-0 kubenswrapper[7476]: W0320 08:35:19.080684 7476 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 20 08:35:19.082074 master-0 kubenswrapper[7476]: W0320 08:35:19.080689 7476 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 20 08:35:19.082074 master-0 kubenswrapper[7476]: I0320 08:35:19.080992 7476 flags.go:64] FLAG: --address="0.0.0.0"
Mar 20 08:35:19.082074 master-0 kubenswrapper[7476]: I0320 08:35:19.081003 7476 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Mar 20 08:35:19.082074 master-0 kubenswrapper[7476]: I0320 08:35:19.081011 7476 flags.go:64] FLAG: --anonymous-auth="true"
Mar 20 08:35:19.082074 master-0 kubenswrapper[7476]: I0320 08:35:19.081018 7476 flags.go:64] FLAG: --application-metrics-count-limit="100"
Mar 20 08:35:19.082074 master-0 kubenswrapper[7476]: I0320 08:35:19.081025 7476 flags.go:64] FLAG: --authentication-token-webhook="false"
Mar 20 08:35:19.082074 master-0 kubenswrapper[7476]: I0320 08:35:19.081032 7476 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Mar 20 08:35:19.082574 master-0 kubenswrapper[7476]: I0320 08:35:19.081040 7476 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Mar 20 08:35:19.082574 master-0 kubenswrapper[7476]: I0320 08:35:19.081047 7476 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Mar 20 08:35:19.082574 master-0 kubenswrapper[7476]: I0320 08:35:19.081053 7476 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Mar 20 08:35:19.082574 master-0 kubenswrapper[7476]: I0320 08:35:19.081064 7476 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Mar 20 08:35:19.082574 master-0 kubenswrapper[7476]: I0320 08:35:19.081071 7476 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Mar 20 08:35:19.082574 master-0 kubenswrapper[7476]: I0320 08:35:19.081077 7476 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Mar 20 08:35:19.082574 master-0 kubenswrapper[7476]: I0320 08:35:19.081083 7476 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Mar 20 08:35:19.082574 master-0 kubenswrapper[7476]: I0320 08:35:19.081089 7476 flags.go:64] FLAG: --cgroup-root=""
Mar 20 08:35:19.082574 master-0 kubenswrapper[7476]: I0320 08:35:19.081094 7476 flags.go:64] FLAG: --cgroups-per-qos="true"
Mar 20 08:35:19.082574 master-0 kubenswrapper[7476]: I0320 08:35:19.081100 7476 flags.go:64] FLAG: --client-ca-file=""
Mar 20 08:35:19.082574 master-0 kubenswrapper[7476]: I0320 08:35:19.081106 7476 flags.go:64] FLAG: --cloud-config=""
Mar 20 08:35:19.082574 master-0 kubenswrapper[7476]: I0320 08:35:19.081111 7476 flags.go:64] FLAG: --cloud-provider=""
Mar 20 08:35:19.082574 master-0 kubenswrapper[7476]: I0320 08:35:19.081117 7476 flags.go:64] FLAG: --cluster-dns="[]"
Mar 20 08:35:19.082574 master-0 kubenswrapper[7476]: I0320 08:35:19.081126 7476 flags.go:64] FLAG: --cluster-domain=""
Mar 20 08:35:19.082574 master-0 kubenswrapper[7476]: I0320 08:35:19.081131 7476 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Mar 20 08:35:19.082574 master-0 kubenswrapper[7476]: I0320 08:35:19.081138 7476 flags.go:64] FLAG: --config-dir=""
Mar 20 08:35:19.082574 master-0 kubenswrapper[7476]: I0320 08:35:19.081144 7476 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Mar 20 08:35:19.082574 master-0 kubenswrapper[7476]: I0320 08:35:19.081150 7476 flags.go:64] FLAG: --container-log-max-files="5"
Mar 20 08:35:19.082574 master-0 kubenswrapper[7476]: I0320 08:35:19.081157 7476 flags.go:64] FLAG: --container-log-max-size="10Mi"
Mar 20 08:35:19.082574 master-0 kubenswrapper[7476]: I0320 08:35:19.081163 7476 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Mar 20 08:35:19.082574 master-0 kubenswrapper[7476]: I0320 08:35:19.081169 7476 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Mar 20 08:35:19.082574 master-0 kubenswrapper[7476]: I0320 08:35:19.081175 7476 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Mar 20 08:35:19.082574 master-0 kubenswrapper[7476]: I0320 08:35:19.081181 7476 flags.go:64] FLAG: --contention-profiling="false"
Mar 20 08:35:19.082574 master-0 kubenswrapper[7476]: I0320 08:35:19.081187 7476 flags.go:64] FLAG: --cpu-cfs-quota="true"
Mar 20 08:35:19.083175 master-0 kubenswrapper[7476]: I0320 08:35:19.081192 7476 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Mar 20 08:35:19.083175 master-0 kubenswrapper[7476]: I0320 08:35:19.081199 7476 flags.go:64] FLAG: --cpu-manager-policy="none"
Mar 20 08:35:19.083175 master-0 kubenswrapper[7476]: I0320 08:35:19.081204 7476 flags.go:64] FLAG: --cpu-manager-policy-options=""
Mar 20 08:35:19.083175 master-0 kubenswrapper[7476]: I0320 08:35:19.081212 7476 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Mar 20 08:35:19.083175 master-0 kubenswrapper[7476]: I0320 08:35:19.081218 7476 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Mar 20 08:35:19.083175 master-0 kubenswrapper[7476]: I0320 08:35:19.081223 7476 flags.go:64] FLAG: --enable-debugging-handlers="true"
Mar 20 08:35:19.083175 master-0 kubenswrapper[7476]: I0320 08:35:19.081229 7476 flags.go:64] FLAG: --enable-load-reader="false"
Mar 20 08:35:19.083175 master-0 kubenswrapper[7476]: I0320 08:35:19.081234 7476 flags.go:64] FLAG: --enable-server="true"
Mar 20 08:35:19.083175 master-0 kubenswrapper[7476]: I0320 08:35:19.081240 7476 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Mar 20 08:35:19.083175 master-0 kubenswrapper[7476]: I0320 08:35:19.081248 7476 flags.go:64] FLAG: --event-burst="100"
Mar 20 08:35:19.083175 master-0 kubenswrapper[7476]: I0320 08:35:19.081254 7476 flags.go:64] FLAG: --event-qps="50"
Mar 20 08:35:19.083175 master-0 kubenswrapper[7476]: I0320 08:35:19.081283 7476 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Mar 20 08:35:19.083175 master-0 kubenswrapper[7476]: I0320 08:35:19.081291 7476 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Mar 20 08:35:19.083175 master-0 kubenswrapper[7476]: I0320 08:35:19.081299 7476 flags.go:64] FLAG: --eviction-hard=""
Mar 20 08:35:19.083175 master-0 kubenswrapper[7476]: I0320 08:35:19.081328 7476 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Mar 20 08:35:19.083175 master-0 kubenswrapper[7476]: I0320 08:35:19.081335 7476 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Mar 20 08:35:19.083175 master-0 kubenswrapper[7476]: I0320 08:35:19.081342 7476 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Mar 20 08:35:19.083175 master-0 kubenswrapper[7476]: I0320 08:35:19.081350 7476 flags.go:64] FLAG: --eviction-soft=""
Mar 20 08:35:19.083175 master-0 kubenswrapper[7476]: I0320 08:35:19.081358 7476 flags.go:64] FLAG: --eviction-soft-grace-period=""
Mar 20 08:35:19.083175 master-0 kubenswrapper[7476]: I0320 08:35:19.081365 7476 flags.go:64] FLAG: --exit-on-lock-contention="false"
Mar 20 08:35:19.083175 master-0 kubenswrapper[7476]: I0320 08:35:19.081373 7476 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Mar 20 08:35:19.083175 master-0 kubenswrapper[7476]: I0320 08:35:19.081380 7476 flags.go:64] FLAG: --experimental-mounter-path=""
Mar 20 08:35:19.083175 master-0 kubenswrapper[7476]: I0320 08:35:19.081387 7476 flags.go:64] FLAG: --fail-cgroupv1="false"
Mar 20 08:35:19.083175 master-0 kubenswrapper[7476]: I0320 08:35:19.081393 7476 flags.go:64] FLAG: --fail-swap-on="true"
Mar 20 08:35:19.083175 master-0 kubenswrapper[7476]: I0320 08:35:19.081400 7476 flags.go:64] FLAG: --feature-gates=""
Mar 20 08:35:19.083882 master-0 kubenswrapper[7476]: I0320 08:35:19.081407 7476 flags.go:64] FLAG: --file-check-frequency="20s"
Mar 20 08:35:19.083882 master-0 kubenswrapper[7476]: I0320 08:35:19.081413 7476 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Mar 20 08:35:19.083882 master-0 kubenswrapper[7476]: I0320 08:35:19.081419 7476 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Mar 20 08:35:19.083882 master-0 kubenswrapper[7476]: I0320 08:35:19.081425 7476 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Mar 20 08:35:19.083882 master-0 kubenswrapper[7476]: I0320 08:35:19.081431 7476 flags.go:64] FLAG: --healthz-port="10248"
Mar 20 08:35:19.083882 master-0 kubenswrapper[7476]: I0320 08:35:19.081437 7476 flags.go:64] FLAG: --help="false"
Mar 20 08:35:19.083882 master-0 kubenswrapper[7476]: I0320 08:35:19.081443 7476 flags.go:64] FLAG: --hostname-override=""
Mar 20 08:35:19.083882 master-0 kubenswrapper[7476]: I0320 08:35:19.081449 7476 flags.go:64] FLAG: --housekeeping-interval="10s"
Mar 20 08:35:19.083882 master-0 kubenswrapper[7476]: I0320 08:35:19.081455 7476 flags.go:64] FLAG: --http-check-frequency="20s"
Mar 20 08:35:19.083882 master-0 kubenswrapper[7476]: I0320 08:35:19.081461 7476 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Mar 20 08:35:19.083882 master-0 kubenswrapper[7476]: I0320 08:35:19.081466 7476 flags.go:64] FLAG: --image-credential-provider-config=""
Mar 20 08:35:19.083882 master-0 kubenswrapper[7476]: I0320 08:35:19.081472 7476 flags.go:64] FLAG: --image-gc-high-threshold="85"
Mar 20 08:35:19.083882 master-0 kubenswrapper[7476]: I0320 08:35:19.081477 7476 flags.go:64] FLAG: --image-gc-low-threshold="80"
Mar 20 08:35:19.083882 master-0 kubenswrapper[7476]: I0320 08:35:19.081483 7476 flags.go:64] FLAG: --image-service-endpoint=""
Mar 20 08:35:19.083882 master-0 kubenswrapper[7476]: I0320 08:35:19.081489 7476 flags.go:64] FLAG: --kernel-memcg-notification="false"
Mar 20 08:35:19.083882 master-0 kubenswrapper[7476]: I0320 08:35:19.081495 7476 flags.go:64] FLAG: --kube-api-burst="100"
Mar 20 08:35:19.083882 master-0 kubenswrapper[7476]: I0320 08:35:19.081500 7476 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Mar 20 08:35:19.083882 master-0 kubenswrapper[7476]: I0320 08:35:19.081506 7476 flags.go:64] FLAG: --kube-api-qps="50"
Mar 20 08:35:19.083882 master-0 kubenswrapper[7476]: I0320 08:35:19.081515 7476 flags.go:64] FLAG: --kube-reserved=""
Mar 20 08:35:19.083882 master-0 kubenswrapper[7476]: I0320 08:35:19.081521 7476 flags.go:64] FLAG: --kube-reserved-cgroup=""
Mar 20 08:35:19.083882 master-0 kubenswrapper[7476]: I0320 08:35:19.081527 7476 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Mar 20 08:35:19.083882 master-0 kubenswrapper[7476]: I0320 08:35:19.081532 7476 flags.go:64] FLAG: --kubelet-cgroups=""
Mar 20 08:35:19.083882 master-0 kubenswrapper[7476]: I0320 08:35:19.081538 7476 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Mar 20 08:35:19.083882 master-0 kubenswrapper[7476]: I0320 08:35:19.081543 7476 flags.go:64] FLAG: --lock-file=""
Mar 20 08:35:19.083882 master-0 kubenswrapper[7476]: I0320 08:35:19.081548 7476 flags.go:64] FLAG: --log-cadvisor-usage="false"
Mar 20 08:35:19.084457 master-0 kubenswrapper[7476]: I0320 08:35:19.081554 7476 flags.go:64] FLAG: --log-flush-frequency="5s"
Mar 20 08:35:19.084457 master-0 kubenswrapper[7476]: I0320 08:35:19.081560 7476 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Mar 20 08:35:19.084457 master-0 kubenswrapper[7476]: I0320 08:35:19.081568 7476 flags.go:64] FLAG: --log-json-split-stream="false"
Mar 20 08:35:19.084457 master-0 kubenswrapper[7476]: I0320 08:35:19.081574 7476 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Mar 20 08:35:19.084457 master-0 kubenswrapper[7476]: I0320 08:35:19.081579 7476 flags.go:64] FLAG: --log-text-split-stream="false"
Mar 20 08:35:19.084457 master-0 kubenswrapper[7476]: I0320 08:35:19.081585 7476 flags.go:64] FLAG: --logging-format="text"
Mar 20 08:35:19.084457 master-0 kubenswrapper[7476]: I0320 08:35:19.081591 7476 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Mar 20 08:35:19.084457 master-0 kubenswrapper[7476]: I0320 08:35:19.081597 7476 flags.go:64] FLAG: --make-iptables-util-chains="true"
Mar 20 08:35:19.084457 master-0 kubenswrapper[7476]: I0320 08:35:19.081602 7476 flags.go:64] FLAG: --manifest-url=""
Mar 20 08:35:19.084457 master-0 kubenswrapper[7476]: I0320 08:35:19.081608 7476 flags.go:64] FLAG: --manifest-url-header=""
Mar 20 08:35:19.084457 master-0 kubenswrapper[7476]: I0320 08:35:19.081616 7476 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Mar 20 08:35:19.084457 master-0 kubenswrapper[7476]: I0320 08:35:19.081622 7476 flags.go:64] FLAG: --max-open-files="1000000"
Mar 20 08:35:19.084457 master-0 kubenswrapper[7476]: I0320 08:35:19.081628 7476 flags.go:64] FLAG: --max-pods="110"
Mar 20 08:35:19.084457 master-0 kubenswrapper[7476]: I0320 08:35:19.081634 7476 flags.go:64] FLAG: --maximum-dead-containers="-1"
Mar 20 08:35:19.084457 master-0 kubenswrapper[7476]: I0320 08:35:19.081640 7476 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Mar 20 08:35:19.084457 master-0 kubenswrapper[7476]: I0320 08:35:19.081646 7476 flags.go:64] FLAG: --memory-manager-policy="None"
Mar 20 08:35:19.084457 master-0 kubenswrapper[7476]: I0320 08:35:19.081652 7476 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Mar 20 08:35:19.084457 master-0 kubenswrapper[7476]: I0320 08:35:19.081658 7476 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Mar 20 08:35:19.084457 master-0 kubenswrapper[7476]: I0320 08:35:19.081664 7476 flags.go:64] FLAG: --node-ip="192.168.32.10"
Mar 20 08:35:19.084457 master-0 kubenswrapper[7476]: I0320 08:35:19.081670 7476 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Mar 20 08:35:19.084457 master-0 kubenswrapper[7476]: I0320 08:35:19.081683 7476 flags.go:64] FLAG: --node-status-max-images="50"
Mar 20 08:35:19.084457 master-0 kubenswrapper[7476]: I0320 08:35:19.081689 7476 flags.go:64] FLAG: --node-status-update-frequency="10s"
Mar 20 08:35:19.084457 master-0 kubenswrapper[7476]: I0320 08:35:19.081696 7476 flags.go:64] FLAG: --oom-score-adj="-999"
Mar 20 08:35:19.084457 master-0 kubenswrapper[7476]: I0320 08:35:19.081702 7476 flags.go:64] FLAG: --pod-cidr=""
Mar 20 08:35:19.085002 master-0 kubenswrapper[7476]: I0320 08:35:19.081707 7476 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:53d66d524ca3e787d8dbe30dbc4d9b8612c9cebd505ccb4375a8441814e85422"
Mar 20 08:35:19.085002 master-0 kubenswrapper[7476]: I0320 08:35:19.081717 7476 flags.go:64] FLAG: --pod-manifest-path=""
Mar 20 08:35:19.085002 master-0 kubenswrapper[7476]: I0320 08:35:19.081725 7476 flags.go:64] FLAG: --pod-max-pids="-1"
Mar 20 08:35:19.085002 master-0 kubenswrapper[7476]: I0320 08:35:19.081731 7476 flags.go:64] FLAG: --pods-per-core="0"
Mar 20 08:35:19.085002 master-0 kubenswrapper[7476]: I0320 08:35:19.081736 7476 flags.go:64] FLAG: --port="10250"
Mar 20 08:35:19.085002 master-0 kubenswrapper[7476]: I0320 08:35:19.081742 7476 flags.go:64] FLAG: --protect-kernel-defaults="false"
Mar 20 08:35:19.085002 master-0 kubenswrapper[7476]: I0320 08:35:19.081748 7476 flags.go:64] FLAG: --provider-id=""
Mar 20 08:35:19.085002 master-0 kubenswrapper[7476]: I0320 08:35:19.081753 7476 flags.go:64] FLAG: --qos-reserved=""
Mar 20 08:35:19.085002 master-0 kubenswrapper[7476]: I0320 08:35:19.081759 7476 flags.go:64] FLAG: --read-only-port="10255"
Mar 20 08:35:19.085002 master-0 kubenswrapper[7476]: I0320 08:35:19.081764 7476 flags.go:64] FLAG: --register-node="true"
Mar 20 08:35:19.085002 master-0 kubenswrapper[7476]: I0320 08:35:19.081770 7476 flags.go:64] FLAG: --register-schedulable="true"
Mar 20 08:35:19.085002 master-0 kubenswrapper[7476]: I0320 08:35:19.081778 7476 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Mar 20 08:35:19.085002 master-0 kubenswrapper[7476]: I0320 08:35:19.081982 7476 flags.go:64] FLAG: --registry-burst="10"
Mar 20 08:35:19.085002 master-0 kubenswrapper[7476]: I0320 08:35:19.081988 7476 flags.go:64] FLAG: --registry-qps="5"
Mar 20 08:35:19.085002 master-0 kubenswrapper[7476]: I0320 08:35:19.081994 7476 flags.go:64] FLAG: --reserved-cpus=""
Mar 20 08:35:19.085002 master-0 kubenswrapper[7476]: I0320 08:35:19.082000 7476 flags.go:64] FLAG: --reserved-memory=""
Mar 20 08:35:19.085002 master-0 kubenswrapper[7476]: I0320 08:35:19.082006 7476 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Mar 20 08:35:19.085002 master-0 kubenswrapper[7476]: I0320 08:35:19.082012 7476 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Mar 20 08:35:19.085002 master-0 kubenswrapper[7476]: I0320 08:35:19.082017 7476 flags.go:64] FLAG: --rotate-certificates="false"
Mar 20 08:35:19.085002 master-0 kubenswrapper[7476]: I0320 08:35:19.082023 7476 flags.go:64] FLAG: --rotate-server-certificates="false"
Mar 20 08:35:19.085002 master-0 kubenswrapper[7476]: I0320 08:35:19.082029 7476 flags.go:64] FLAG: --runonce="false"
Mar 20 08:35:19.085002 master-0 kubenswrapper[7476]: I0320 08:35:19.082034 7476 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Mar 20 08:35:19.085002 master-0 kubenswrapper[7476]: I0320 08:35:19.082041 7476 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Mar 20 08:35:19.085002 master-0 kubenswrapper[7476]: I0320 08:35:19.082047 7476 flags.go:64] FLAG: --seccomp-default="false"
Mar 20 08:35:19.085002 master-0 kubenswrapper[7476]: I0320 08:35:19.082052 7476 flags.go:64] FLAG: --serialize-image-pulls="true"
Mar 20 08:35:19.085612 master-0 kubenswrapper[7476]: I0320 08:35:19.082058 7476 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Mar 20 08:35:19.085612 master-0 kubenswrapper[7476]: I0320 08:35:19.082064 7476 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Mar 20 08:35:19.085612 master-0 kubenswrapper[7476]: I0320 08:35:19.082069 7476 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Mar 20 08:35:19.085612 master-0 kubenswrapper[7476]: I0320 08:35:19.082075 7476 flags.go:64] FLAG: --storage-driver-password="root"
Mar 20 08:35:19.085612 master-0 kubenswrapper[7476]: I0320 08:35:19.082081 7476 flags.go:64] FLAG: --storage-driver-secure="false"
Mar 20 08:35:19.085612 master-0 kubenswrapper[7476]: I0320 08:35:19.082087 7476 flags.go:64] FLAG: --storage-driver-table="stats"
Mar 20 08:35:19.085612 master-0 kubenswrapper[7476]: I0320 08:35:19.082092 7476 flags.go:64] FLAG: --storage-driver-user="root"
Mar 20 08:35:19.085612 master-0 kubenswrapper[7476]: I0320 08:35:19.082098 7476 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Mar 20 08:35:19.085612 master-0 kubenswrapper[7476]: I0320 08:35:19.082105 7476 flags.go:64] FLAG: --sync-frequency="1m0s"
Mar 20 08:35:19.085612 master-0 kubenswrapper[7476]: I0320 08:35:19.082113 7476 flags.go:64] FLAG: --system-cgroups=""
Mar 20 08:35:19.085612 master-0 kubenswrapper[7476]: I0320 08:35:19.082119 7476 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Mar 20 08:35:19.085612 master-0 kubenswrapper[7476]: I0320 08:35:19.082128 7476 flags.go:64] FLAG: --system-reserved-cgroup=""
Mar 20 08:35:19.085612 master-0 kubenswrapper[7476]: I0320 08:35:19.082134 7476 flags.go:64] FLAG: --tls-cert-file=""
Mar 20 08:35:19.085612 master-0 kubenswrapper[7476]: I0320 08:35:19.082140 7476 flags.go:64] FLAG: --tls-cipher-suites="[]"
Mar 20 08:35:19.085612 master-0 kubenswrapper[7476]: I0320 08:35:19.082147 7476 flags.go:64] FLAG: --tls-min-version=""
Mar 20 08:35:19.085612 master-0 kubenswrapper[7476]: I0320 08:35:19.082153 7476 flags.go:64] FLAG: --tls-private-key-file=""
Mar 20 08:35:19.085612 master-0 kubenswrapper[7476]: I0320 08:35:19.082158 7476 flags.go:64] FLAG: --topology-manager-policy="none"
Mar 20 08:35:19.085612 master-0 kubenswrapper[7476]: I0320 08:35:19.082164 7476 flags.go:64] FLAG: --topology-manager-policy-options=""
Mar 20 08:35:19.085612 master-0 kubenswrapper[7476]: I0320 08:35:19.082171 7476 flags.go:64] FLAG: --topology-manager-scope="container"
Mar 20 08:35:19.085612 master-0 kubenswrapper[7476]: I0320 08:35:19.082177 7476 flags.go:64] FLAG: --v="2"
Mar 20 08:35:19.085612 master-0 kubenswrapper[7476]: I0320 08:35:19.082184 7476 flags.go:64] FLAG: --version="false"
Mar 20 08:35:19.085612 master-0 kubenswrapper[7476]: I0320 08:35:19.082191 7476 flags.go:64] FLAG: --vmodule=""
Mar 20 08:35:19.085612 master-0 kubenswrapper[7476]: I0320 08:35:19.082197 7476 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Mar 20 08:35:19.085612 master-0 kubenswrapper[7476]: I0320 08:35:19.082204 7476 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Mar 20 08:35:19.085612 master-0 kubenswrapper[7476]: W0320 08:35:19.082360 7476 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 20 08:35:19.086168 master-0 kubenswrapper[7476]: W0320 08:35:19.082367 7476 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 20 08:35:19.086168 master-0 kubenswrapper[7476]: W0320 08:35:19.082373 7476 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 20 08:35:19.086168 master-0 kubenswrapper[7476]: W0320 08:35:19.082378 7476 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 20 08:35:19.086168 master-0 kubenswrapper[7476]: W0320 08:35:19.082383 7476 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 20 08:35:19.086168 master-0 kubenswrapper[7476]: W0320 08:35:19.082389 7476 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 20 08:35:19.086168 master-0 kubenswrapper[7476]: W0320 08:35:19.082394 7476 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 20 08:35:19.086168 master-0 kubenswrapper[7476]: W0320 08:35:19.082401 7476 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
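The `flags.go:64] FLAG: --name="value"` dump above records the effective command line the kubelet started with, which is handy to diff against the MachineConfig-rendered defaults. A minimal sketch of turning that dump into a dict, assuming the quoted `FLAG:` format shown in this log (the regex and function name are illustrative):

```python
import re

# Matches kubelet startup lines of the form: FLAG: --some-flag="some value"
FLAG_RE = re.compile(r'FLAG: (--[\w-]+)="([^"]*)"')

def parse_flags(log_text: str) -> dict[str, str]:
    """Map each logged kubelet flag to its (string) value."""
    return {name: value for name, value in FLAG_RE.findall(log_text)}

sample = (
    'I0320 08:35:19.081003 7476 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"\n'
    'I0320 08:35:19.081131 7476 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"\n'
)
flags = parse_flags(sample)
print(flags["--config"])  # /etc/kubernetes/kubelet.conf
```

Note that list- and map-valued flags (e.g. `--cluster-dns="[]"`, `--system-reserved="cpu=500m,..."`) stay as raw strings here; a fuller tool would decode them per flag type.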
Mar 20 08:35:19.086168 master-0 kubenswrapper[7476]: W0320 08:35:19.082409 7476 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 20 08:35:19.086168 master-0 kubenswrapper[7476]: W0320 08:35:19.082415 7476 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 20 08:35:19.086168 master-0 kubenswrapper[7476]: W0320 08:35:19.082421 7476 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 20 08:35:19.086168 master-0 kubenswrapper[7476]: W0320 08:35:19.082427 7476 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 20 08:35:19.086168 master-0 kubenswrapper[7476]: W0320 08:35:19.082432 7476 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 20 08:35:19.086168 master-0 kubenswrapper[7476]: W0320 08:35:19.082437 7476 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 20 08:35:19.086168 master-0 kubenswrapper[7476]: W0320 08:35:19.082442 7476 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 20 08:35:19.086168 master-0 kubenswrapper[7476]: W0320 08:35:19.082448 7476 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 20 08:35:19.086168 master-0 kubenswrapper[7476]: W0320 08:35:19.082453 7476 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 20 08:35:19.086168 master-0 kubenswrapper[7476]: W0320 08:35:19.082460 7476 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 20 08:35:19.086168 master-0 kubenswrapper[7476]: W0320 08:35:19.082464 7476 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 20 08:35:19.086168 master-0 kubenswrapper[7476]: W0320 08:35:19.082469 7476 feature_gate.go:330] unrecognized feature gate: Example
Mar 20 08:35:19.086168 master-0 kubenswrapper[7476]: W0320 08:35:19.082472 7476 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 20 08:35:19.086654 master-0 kubenswrapper[7476]: W0320 08:35:19.082476 7476 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 20 08:35:19.086654 master-0 kubenswrapper[7476]: W0320 08:35:19.082479 7476 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 20 08:35:19.086654 master-0 kubenswrapper[7476]: W0320 08:35:19.082483 7476 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 20 08:35:19.086654 master-0 kubenswrapper[7476]: W0320 08:35:19.082487 7476 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 20 08:35:19.086654 master-0 kubenswrapper[7476]: W0320 08:35:19.082490 7476 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 20 08:35:19.086654 master-0 kubenswrapper[7476]: W0320 08:35:19.082496 7476 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 20 08:35:19.086654 master-0 kubenswrapper[7476]: W0320 08:35:19.082500 7476 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 20 08:35:19.086654 master-0 kubenswrapper[7476]: W0320 08:35:19.082503 7476 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 20 08:35:19.086654 master-0 kubenswrapper[7476]: W0320 08:35:19.082507 7476 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 20 08:35:19.086654 master-0 kubenswrapper[7476]: W0320 08:35:19.082511 7476 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 20 08:35:19.086654 master-0 kubenswrapper[7476]: W0320 08:35:19.082516 7476 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 20 08:35:19.086654 master-0 kubenswrapper[7476]: W0320 08:35:19.082521 7476 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 20 08:35:19.086654 master-0 kubenswrapper[7476]: W0320 08:35:19.082525 7476 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 20 08:35:19.086654 master-0 kubenswrapper[7476]: W0320 08:35:19.082529 7476 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 20 08:35:19.086654 master-0 kubenswrapper[7476]: W0320 08:35:19.082533 7476 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 20 08:35:19.086654 master-0 kubenswrapper[7476]: W0320 08:35:19.082538 7476 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 20 08:35:19.086654 master-0 kubenswrapper[7476]: W0320 08:35:19.082542 7476 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 20 08:35:19.086654 master-0 kubenswrapper[7476]: W0320 08:35:19.082546 7476 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 20 08:35:19.086654 master-0 kubenswrapper[7476]: W0320 08:35:19.082550 7476 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 20 08:35:19.087080 master-0 kubenswrapper[7476]: W0320 08:35:19.082553 7476 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 20 08:35:19.087080 master-0 kubenswrapper[7476]: W0320 08:35:19.082557 7476 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 20 08:35:19.087080 master-0 kubenswrapper[7476]: W0320 08:35:19.082561 7476 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 20 08:35:19.087080 master-0 kubenswrapper[7476]: W0320 08:35:19.082564 7476 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 20 08:35:19.087080 master-0 kubenswrapper[7476]: W0320 08:35:19.082568 7476 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 20 08:35:19.087080 master-0 kubenswrapper[7476]: W0320 08:35:19.082572 7476 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 20 08:35:19.087080 master-0 kubenswrapper[7476]: W0320 08:35:19.082576 7476 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 20 08:35:19.087080 master-0 kubenswrapper[7476]: W0320 08:35:19.082580 7476 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 20 08:35:19.087080 master-0 kubenswrapper[7476]: W0320 08:35:19.082584 7476 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 20 08:35:19.087080 master-0 kubenswrapper[7476]: W0320 08:35:19.082592 7476 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 20 08:35:19.087080 master-0 kubenswrapper[7476]: W0320 08:35:19.082596 7476 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 20 08:35:19.087080 master-0 kubenswrapper[7476]: W0320 08:35:19.082601 7476 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 20 08:35:19.087080 master-0 kubenswrapper[7476]: W0320 08:35:19.082606 7476 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 20 08:35:19.087080 master-0 kubenswrapper[7476]: W0320 08:35:19.082610 7476 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 20 08:35:19.087080 master-0 kubenswrapper[7476]: W0320 08:35:19.082614 7476 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 20 08:35:19.087080 master-0 kubenswrapper[7476]: W0320 08:35:19.082618 7476 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 20 08:35:19.087080 master-0 kubenswrapper[7476]: W0320 08:35:19.082621 7476 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 20 08:35:19.087080 master-0 kubenswrapper[7476]: W0320 08:35:19.082626 7476 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 20 08:35:19.087080 master-0 kubenswrapper[7476]: W0320 08:35:19.082631 7476 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 20 08:35:19.087080 master-0 kubenswrapper[7476]: W0320 08:35:19.082635 7476 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 20 08:35:19.087556 master-0 kubenswrapper[7476]: W0320 08:35:19.082638 7476 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 20 08:35:19.087556 master-0 kubenswrapper[7476]: W0320 08:35:19.082642 7476 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 20 08:35:19.087556 master-0 kubenswrapper[7476]: W0320 08:35:19.082646 7476 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 20 08:35:19.087556 master-0 kubenswrapper[7476]: W0320 08:35:19.082649 7476 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 20 08:35:19.087556 master-0 kubenswrapper[7476]: W0320 08:35:19.082653 7476 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 20 08:35:19.087556 master-0 kubenswrapper[7476]: W0320 08:35:19.082656 7476 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 20 08:35:19.087556 master-0 kubenswrapper[7476]: W0320 08:35:19.082660 7476 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 20 08:35:19.087556 master-0 kubenswrapper[7476]: W0320 08:35:19.082664 7476 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 20 08:35:19.087556 master-0 kubenswrapper[7476]: W0320 08:35:19.082667 7476 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 20 08:35:19.087556 master-0 kubenswrapper[7476]: W0320 08:35:19.082671 7476 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 20 08:35:19.087556 master-0 kubenswrapper[7476]: W0320 08:35:19.082674 7476 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 20 08:35:19.087556 master-0 kubenswrapper[7476]: W0320 08:35:19.082678 7476 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 20 08:35:19.087556 master-0 kubenswrapper[7476]: I0320 08:35:19.082690 7476 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 20 08:35:19.091934 master-0 kubenswrapper[7476]: I0320 08:35:19.091881 7476 server.go:491] "Kubelet version" kubeletVersion="v1.31.14"
Mar 20 08:35:19.091934 master-0 kubenswrapper[7476]: I0320 08:35:19.091932 7476 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 20 08:35:19.092076 master-0 kubenswrapper[7476]: W0320 08:35:19.092059 7476 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 20 08:35:19.092112 master-0 kubenswrapper[7476]: W0320 08:35:19.092091 7476 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 20 08:35:19.092112 master-0 kubenswrapper[7476]: W0320 08:35:19.092098 7476 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 20 08:35:19.092112 master-0 kubenswrapper[7476]: W0320 08:35:19.092102 7476 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 20 08:35:19.092112 master-0 kubenswrapper[7476]: W0320 08:35:19.092107 7476 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 20 08:35:19.092112 master-0 kubenswrapper[7476]: W0320 08:35:19.092111 7476 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 20 08:35:19.092112 master-0 kubenswrapper[7476]: W0320 08:35:19.092115 7476 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 20 08:35:19.092255 master-0 kubenswrapper[7476]: W0320 08:35:19.092119 7476 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 20 08:35:19.092255 master-0 kubenswrapper[7476]: W0320 08:35:19.092124 7476 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 20 08:35:19.092255 master-0 kubenswrapper[7476]: W0320 08:35:19.092127 7476 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 20 08:35:19.092255 master-0 kubenswrapper[7476]: W0320 08:35:19.092131 7476 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 20 08:35:19.092255 master-0 kubenswrapper[7476]: W0320 08:35:19.092135 7476 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 20 08:35:19.092255 master-0 kubenswrapper[7476]: W0320 08:35:19.092138 7476 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 20 08:35:19.092255 master-0 kubenswrapper[7476]: W0320 08:35:19.092142 7476 feature_gate.go:330] unrecognized 
feature gate: NodeDisruptionPolicy Mar 20 08:35:19.092255 master-0 kubenswrapper[7476]: W0320 08:35:19.092146 7476 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 20 08:35:19.092255 master-0 kubenswrapper[7476]: W0320 08:35:19.092167 7476 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 20 08:35:19.092255 master-0 kubenswrapper[7476]: W0320 08:35:19.092171 7476 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 20 08:35:19.092255 master-0 kubenswrapper[7476]: W0320 08:35:19.092176 7476 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 20 08:35:19.092255 master-0 kubenswrapper[7476]: W0320 08:35:19.092179 7476 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 20 08:35:19.092255 master-0 kubenswrapper[7476]: W0320 08:35:19.092183 7476 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 20 08:35:19.092255 master-0 kubenswrapper[7476]: W0320 08:35:19.092187 7476 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 20 08:35:19.092255 master-0 kubenswrapper[7476]: W0320 08:35:19.092191 7476 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 20 08:35:19.092255 master-0 kubenswrapper[7476]: W0320 08:35:19.092194 7476 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 20 08:35:19.092255 master-0 kubenswrapper[7476]: W0320 08:35:19.092198 7476 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 20 08:35:19.092255 master-0 kubenswrapper[7476]: W0320 08:35:19.092202 7476 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 20 08:35:19.092255 master-0 kubenswrapper[7476]: W0320 08:35:19.092205 7476 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 20 08:35:19.092255 master-0 kubenswrapper[7476]: W0320 08:35:19.092209 7476 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 20 08:35:19.092718 master-0 
kubenswrapper[7476]: W0320 08:35:19.092213 7476 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Mar 20 08:35:19.092718 master-0 kubenswrapper[7476]: W0320 08:35:19.092217 7476 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 20 08:35:19.092718 master-0 kubenswrapper[7476]: W0320 08:35:19.092220 7476 feature_gate.go:330] unrecognized feature gate: Example Mar 20 08:35:19.092718 master-0 kubenswrapper[7476]: W0320 08:35:19.092225 7476 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Mar 20 08:35:19.092718 master-0 kubenswrapper[7476]: W0320 08:35:19.092246 7476 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 20 08:35:19.092718 master-0 kubenswrapper[7476]: W0320 08:35:19.092252 7476 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 20 08:35:19.092718 master-0 kubenswrapper[7476]: W0320 08:35:19.092256 7476 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 20 08:35:19.092718 master-0 kubenswrapper[7476]: W0320 08:35:19.092281 7476 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 20 08:35:19.092718 master-0 kubenswrapper[7476]: W0320 08:35:19.092286 7476 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 20 08:35:19.092718 master-0 kubenswrapper[7476]: W0320 08:35:19.092291 7476 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 20 08:35:19.092718 master-0 kubenswrapper[7476]: W0320 08:35:19.092295 7476 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 20 08:35:19.092718 master-0 kubenswrapper[7476]: W0320 08:35:19.092299 7476 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 20 08:35:19.092718 master-0 kubenswrapper[7476]: W0320 08:35:19.092303 7476 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 20 08:35:19.092718 master-0 
kubenswrapper[7476]: W0320 08:35:19.092307 7476 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 20 08:35:19.092718 master-0 kubenswrapper[7476]: W0320 08:35:19.092310 7476 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 20 08:35:19.092718 master-0 kubenswrapper[7476]: W0320 08:35:19.092315 7476 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Mar 20 08:35:19.092718 master-0 kubenswrapper[7476]: W0320 08:35:19.092320 7476 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 20 08:35:19.092718 master-0 kubenswrapper[7476]: W0320 08:35:19.092324 7476 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 20 08:35:19.092718 master-0 kubenswrapper[7476]: W0320 08:35:19.092328 7476 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 20 08:35:19.093367 master-0 kubenswrapper[7476]: W0320 08:35:19.092332 7476 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 20 08:35:19.093367 master-0 kubenswrapper[7476]: W0320 08:35:19.092336 7476 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 20 08:35:19.093367 master-0 kubenswrapper[7476]: W0320 08:35:19.092340 7476 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 20 08:35:19.093367 master-0 kubenswrapper[7476]: W0320 08:35:19.092343 7476 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 20 08:35:19.093367 master-0 kubenswrapper[7476]: W0320 08:35:19.092347 7476 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 20 08:35:19.093367 master-0 kubenswrapper[7476]: W0320 08:35:19.092351 7476 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 20 08:35:19.093367 master-0 kubenswrapper[7476]: W0320 08:35:19.092356 7476 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. 
It will be removed in a future release. Mar 20 08:35:19.093367 master-0 kubenswrapper[7476]: W0320 08:35:19.092361 7476 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 20 08:35:19.093367 master-0 kubenswrapper[7476]: W0320 08:35:19.092364 7476 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 20 08:35:19.093367 master-0 kubenswrapper[7476]: W0320 08:35:19.092368 7476 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 20 08:35:19.093367 master-0 kubenswrapper[7476]: W0320 08:35:19.092372 7476 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 20 08:35:19.093367 master-0 kubenswrapper[7476]: W0320 08:35:19.092375 7476 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 20 08:35:19.093367 master-0 kubenswrapper[7476]: W0320 08:35:19.092381 7476 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 20 08:35:19.093367 master-0 kubenswrapper[7476]: W0320 08:35:19.092385 7476 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 20 08:35:19.093367 master-0 kubenswrapper[7476]: W0320 08:35:19.092389 7476 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 20 08:35:19.093367 master-0 kubenswrapper[7476]: W0320 08:35:19.092394 7476 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 20 08:35:19.093367 master-0 kubenswrapper[7476]: W0320 08:35:19.092397 7476 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 20 08:35:19.093367 master-0 kubenswrapper[7476]: W0320 08:35:19.092401 7476 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 20 08:35:19.093367 master-0 kubenswrapper[7476]: W0320 08:35:19.092405 7476 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 20 08:35:19.093367 master-0 kubenswrapper[7476]: W0320 08:35:19.092409 7476 feature_gate.go:330] unrecognized feature gate: 
GCPClusterHostedDNS Mar 20 08:35:19.093881 master-0 kubenswrapper[7476]: W0320 08:35:19.092412 7476 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 20 08:35:19.093881 master-0 kubenswrapper[7476]: W0320 08:35:19.092417 7476 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Mar 20 08:35:19.093881 master-0 kubenswrapper[7476]: W0320 08:35:19.092421 7476 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 20 08:35:19.093881 master-0 kubenswrapper[7476]: W0320 08:35:19.092426 7476 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 20 08:35:19.093881 master-0 kubenswrapper[7476]: W0320 08:35:19.092430 7476 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 20 08:35:19.093881 master-0 kubenswrapper[7476]: W0320 08:35:19.092435 7476 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 20 08:35:19.093881 master-0 kubenswrapper[7476]: I0320 08:35:19.092442 7476 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 20 08:35:19.093881 master-0 kubenswrapper[7476]: W0320 08:35:19.093653 7476 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Mar 20 08:35:19.093881 master-0 kubenswrapper[7476]: W0320 08:35:19.093794 7476 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 20 08:35:19.093881 master-0 kubenswrapper[7476]: W0320 08:35:19.093830 7476 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 20 08:35:19.093881 master-0 kubenswrapper[7476]: W0320 08:35:19.093841 7476 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 20 08:35:19.093881 master-0 kubenswrapper[7476]: W0320 08:35:19.093852 7476 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 20 08:35:19.093881 master-0 kubenswrapper[7476]: W0320 08:35:19.093863 7476 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 20 08:35:19.093881 master-0 kubenswrapper[7476]: W0320 08:35:19.093872 7476 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 20 08:35:19.094208 master-0 kubenswrapper[7476]: W0320 08:35:19.093882 7476 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 20 08:35:19.094208 master-0 kubenswrapper[7476]: W0320 08:35:19.093891 7476 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 20 08:35:19.094208 master-0 kubenswrapper[7476]: W0320 08:35:19.093901 7476 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 20 08:35:19.094208 master-0 kubenswrapper[7476]: W0320 08:35:19.093910 7476 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 20 08:35:19.094208 master-0 kubenswrapper[7476]: W0320 08:35:19.093919 7476 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 20 08:35:19.094208 master-0 kubenswrapper[7476]: W0320 08:35:19.093972 7476 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 20 08:35:19.094208 master-0 kubenswrapper[7476]: W0320 08:35:19.094015 7476 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 20 08:35:19.094208 master-0 kubenswrapper[7476]: W0320 08:35:19.094035 7476 feature_gate.go:330] unrecognized feature gate: Example
Mar 20 08:35:19.094208 master-0 kubenswrapper[7476]: W0320 08:35:19.094045 7476 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 20 08:35:19.094208 master-0 kubenswrapper[7476]: W0320 08:35:19.094055 7476 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 20 08:35:19.094208 master-0 kubenswrapper[7476]: W0320 08:35:19.094064 7476 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 20 08:35:19.094208 master-0 kubenswrapper[7476]: W0320 08:35:19.094074 7476 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 20 08:35:19.094208 master-0 kubenswrapper[7476]: W0320 08:35:19.094083 7476 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 20 08:35:19.094208 master-0 kubenswrapper[7476]: W0320 08:35:19.094091 7476 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 20 08:35:19.094208 master-0 kubenswrapper[7476]: W0320 08:35:19.094101 7476 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 20 08:35:19.094208 master-0 kubenswrapper[7476]: W0320 08:35:19.094110 7476 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 20 08:35:19.094208 master-0 kubenswrapper[7476]: W0320 08:35:19.094119 7476 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 20 08:35:19.094208 master-0 kubenswrapper[7476]: W0320 08:35:19.094128 7476 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 20 08:35:19.094208 master-0 kubenswrapper[7476]: W0320 08:35:19.094137 7476 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 20 08:35:19.094208 master-0 kubenswrapper[7476]: W0320 08:35:19.094154 7476 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 20 08:35:19.094689 master-0 kubenswrapper[7476]: W0320 08:35:19.094162 7476 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 20 08:35:19.094689 master-0 kubenswrapper[7476]: W0320 08:35:19.094171 7476 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 20 08:35:19.094689 master-0 kubenswrapper[7476]: W0320 08:35:19.094180 7476 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 20 08:35:19.094689 master-0 kubenswrapper[7476]: W0320 08:35:19.094189 7476 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 20 08:35:19.094689 master-0 kubenswrapper[7476]: W0320 08:35:19.094198 7476 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 20 08:35:19.094689 master-0 kubenswrapper[7476]: W0320 08:35:19.094208 7476 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 20 08:35:19.094689 master-0 kubenswrapper[7476]: W0320 08:35:19.094217 7476 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 20 08:35:19.094689 master-0 kubenswrapper[7476]: W0320 08:35:19.094226 7476 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 20 08:35:19.094689 master-0 kubenswrapper[7476]: W0320 08:35:19.094235 7476 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 20 08:35:19.094689 master-0 kubenswrapper[7476]: W0320 08:35:19.094245 7476 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 20 08:35:19.094689 master-0 kubenswrapper[7476]: W0320 08:35:19.094311 7476 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 20 08:35:19.094689 master-0 kubenswrapper[7476]: W0320 08:35:19.094328 7476 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 20 08:35:19.094689 master-0 kubenswrapper[7476]: W0320 08:35:19.094337 7476 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 20 08:35:19.094689 master-0 kubenswrapper[7476]: W0320 08:35:19.094345 7476 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 20 08:35:19.094689 master-0 kubenswrapper[7476]: W0320 08:35:19.094358 7476 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 20 08:35:19.094689 master-0 kubenswrapper[7476]: W0320 08:35:19.094370 7476 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 20 08:35:19.094689 master-0 kubenswrapper[7476]: W0320 08:35:19.094380 7476 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 20 08:35:19.094689 master-0 kubenswrapper[7476]: W0320 08:35:19.094390 7476 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 20 08:35:19.094689 master-0 kubenswrapper[7476]: W0320 08:35:19.094402 7476 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 20 08:35:19.095198 master-0 kubenswrapper[7476]: W0320 08:35:19.094413 7476 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 20 08:35:19.095198 master-0 kubenswrapper[7476]: W0320 08:35:19.094422 7476 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 20 08:35:19.095198 master-0 kubenswrapper[7476]: W0320 08:35:19.094432 7476 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 20 08:35:19.095198 master-0 kubenswrapper[7476]: W0320 08:35:19.094442 7476 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 20 08:35:19.095198 master-0 kubenswrapper[7476]: W0320 08:35:19.094456 7476 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 20 08:35:19.095198 master-0 kubenswrapper[7476]: W0320 08:35:19.094465 7476 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 20 08:35:19.095198 master-0 kubenswrapper[7476]: W0320 08:35:19.094475 7476 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 20 08:35:19.095198 master-0 kubenswrapper[7476]: W0320 08:35:19.094484 7476 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 20 08:35:19.095198 master-0 kubenswrapper[7476]: W0320 08:35:19.094493 7476 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 20 08:35:19.095198 master-0 kubenswrapper[7476]: W0320 08:35:19.094502 7476 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 20 08:35:19.095198 master-0 kubenswrapper[7476]: W0320 08:35:19.094511 7476 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 20 08:35:19.095198 master-0 kubenswrapper[7476]: W0320 08:35:19.094520 7476 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 20 08:35:19.095198 master-0 kubenswrapper[7476]: W0320 08:35:19.094529 7476 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 20 08:35:19.095198 master-0 kubenswrapper[7476]: W0320 08:35:19.094538 7476 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 20 08:35:19.095198 master-0 kubenswrapper[7476]: W0320 08:35:19.094546 7476 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 20 08:35:19.095198 master-0 kubenswrapper[7476]: W0320 08:35:19.094555 7476 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 20 08:35:19.095198 master-0 kubenswrapper[7476]: W0320 08:35:19.094564 7476 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 20 08:35:19.095198 master-0 kubenswrapper[7476]: W0320 08:35:19.094579 7476 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 20 08:35:19.095198 master-0 kubenswrapper[7476]: W0320 08:35:19.094588 7476 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 20 08:35:19.095198 master-0 kubenswrapper[7476]: W0320 08:35:19.094597 7476 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 20 08:35:19.095762 master-0 kubenswrapper[7476]: W0320 08:35:19.094606 7476 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 20 08:35:19.095762 master-0 kubenswrapper[7476]: W0320 08:35:19.094615 7476 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 20 08:35:19.095762 master-0 kubenswrapper[7476]: W0320 08:35:19.094624 7476 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 20 08:35:19.095762 master-0 kubenswrapper[7476]: W0320 08:35:19.094632 7476 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 20 08:35:19.095762 master-0 kubenswrapper[7476]: W0320 08:35:19.094641 7476 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 20 08:35:19.095762 master-0 kubenswrapper[7476]: W0320 08:35:19.094650 7476 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 20 08:35:19.095762 master-0 kubenswrapper[7476]: I0320 08:35:19.094663 7476 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 20 08:35:19.095762 master-0 kubenswrapper[7476]: I0320 08:35:19.095397 7476 server.go:940] "Client rotation is on, will bootstrap in background"
Mar 20 08:35:19.100275 master-0 kubenswrapper[7476]: I0320 08:35:19.100234 7476 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Mar 20 08:35:19.100411 master-0 kubenswrapper[7476]: I0320 08:35:19.100386 7476 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Mar 20 08:35:19.100986 master-0 kubenswrapper[7476]: I0320 08:35:19.100961 7476 server.go:997] "Starting client certificate rotation"
Mar 20 08:35:19.101031 master-0 kubenswrapper[7476]: I0320 08:35:19.100989 7476 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Mar 20 08:35:19.101531 master-0 kubenswrapper[7476]: I0320 08:35:19.101437 7476 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-03-21 08:25:28 +0000 UTC, rotation deadline is 2026-03-21 06:00:09.408676652 +0000 UTC
Mar 20 08:35:19.101568 master-0 kubenswrapper[7476]: I0320 08:35:19.101529 7476 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 21h24m50.307150577s for next certificate rotation
Mar 20 08:35:19.104348 master-0 kubenswrapper[7476]: I0320 08:35:19.104329 7476 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 20 08:35:19.106111 master-0 kubenswrapper[7476]: I0320 08:35:19.106093 7476 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 20 08:35:19.110301 master-0 kubenswrapper[7476]: I0320 08:35:19.110282 7476 log.go:25] "Validated CRI v1 runtime API"
Mar 20 08:35:19.113790 master-0 kubenswrapper[7476]: I0320 08:35:19.113757 7476 log.go:25] "Validated CRI v1 image API"
Mar 20 08:35:19.114956 master-0 kubenswrapper[7476]: I0320 08:35:19.114930 7476 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 20 08:35:19.120701 master-0 kubenswrapper[7476]: I0320 08:35:19.120650 7476 fs.go:135] Filesystem UUIDs: map[7B77-95E7:/dev/vda2 8bd1c714-85b3-42d8-843c-32eb4beee773:/dev/vda3 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4]
Mar 20 08:35:19.121199 master-0 kubenswrapper[7476]: I0320 08:35:19.120699 7476 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 
minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0118b40880c157c21da0a1b6b65535a3a28545387d34a792b84d5f5f7d802bb1/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0118b40880c157c21da0a1b6b65535a3a28545387d34a792b84d5f5f7d802bb1/userdata/shm major:0 minor:251 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1522904bcce5d0ac5aef96a7d518d8795f633c0cc736ad3114aa64de5474f52b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1522904bcce5d0ac5aef96a7d518d8795f633c0cc736ad3114aa64de5474f52b/userdata/shm major:0 minor:235 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1ca6b41abdff6af562839f350ede4490e65a1341fc4f1ed50c580d41768ec8c0/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1ca6b41abdff6af562839f350ede4490e65a1341fc4f1ed50c580d41768ec8c0/userdata/shm major:0 minor:58 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1cc2b302ee7f4974624b7ec258eb40f2f3ce6fad71036a03b3d4361e0bca7e50/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1cc2b302ee7f4974624b7ec258eb40f2f3ce6fad71036a03b3d4361e0bca7e50/userdata/shm major:0 minor:54 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/23b5f0e312ee437adb179ea398b2301b1690487e9d814d24ef554192ded477e8/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/23b5f0e312ee437adb179ea398b2301b1690487e9d814d24ef554192ded477e8/userdata/shm major:0 minor:288 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2eb15c3da7104afd61e8e0a9cecb48e57f16366430abff29d1fcba72d53fd3a2/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2eb15c3da7104afd61e8e0a9cecb48e57f16366430abff29d1fcba72d53fd3a2/userdata/shm major:0 minor:130 
fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/314750eb53635940d2e5e7382cfd93fd0e5f6effe69fa93e88c8c6eaa8362332/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/314750eb53635940d2e5e7382cfd93fd0e5f6effe69fa93e88c8c6eaa8362332/userdata/shm major:0 minor:41 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/31a46ba310ff197c87c66f84e5bd99a13a3ff1f8cbacfdf28d2bf427d9553306/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/31a46ba310ff197c87c66f84e5bd99a13a3ff1f8cbacfdf28d2bf427d9553306/userdata/shm major:0 minor:142 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/36fd86042cdc5d322b686c2b108009cab15460fe5a8fde9f08be705f3ff47a25/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/36fd86042cdc5d322b686c2b108009cab15460fe5a8fde9f08be705f3ff47a25/userdata/shm major:0 minor:247 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/45bf1a9ecebdad7d1d939a42ed79f1d565faa93da259016b6c3e11a9010e1c03/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/45bf1a9ecebdad7d1d939a42ed79f1d565faa93da259016b6c3e11a9010e1c03/userdata/shm major:0 minor:128 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/4767ac5e1fdc3320e004401bc470473fa3834d94268bcd37051a5ed0f54f6980/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4767ac5e1fdc3320e004401bc470473fa3834d94268bcd37051a5ed0f54f6980/userdata/shm major:0 minor:296 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7551d0384a0ca5d55a0e01a66e0811b519b2e2c926c179ce2206a11d57d556c3/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7551d0384a0ca5d55a0e01a66e0811b519b2e2c926c179ce2206a11d57d556c3/userdata/shm major:0 minor:233 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/7c5fe5a51a0646232d6aeb7457e06eaa7bb1c6097a67919150bb37fc9d450327/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7c5fe5a51a0646232d6aeb7457e06eaa7bb1c6097a67919150bb37fc9d450327/userdata/shm major:0 minor:266 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/89c74c8aa017803f478ccd8093ddb6ce42a0913682f0794b7a17848c918f0bd0/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/89c74c8aa017803f478ccd8093ddb6ce42a0913682f0794b7a17848c918f0bd0/userdata/shm major:0 minor:50 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/9540823dea8e0108833218a65d98423f8d996d846bbeaa47cddd4e7ba48fd916/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9540823dea8e0108833218a65d98423f8d996d846bbeaa47cddd4e7ba48fd916/userdata/shm major:0 minor:284 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a5a71eafba7fd094c1b9785d7c1fd9e98b46812d646ac6843a8a763f472e8750/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a5a71eafba7fd094c1b9785d7c1fd9e98b46812d646ac6843a8a763f472e8750/userdata/shm major:0 minor:278 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a7302efa9940a01d2a7da47b5029499cf2581b6adb8641d99af963b893a64957/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a7302efa9940a01d2a7da47b5029499cf2581b6adb8641d99af963b893a64957/userdata/shm major:0 minor:42 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b3076d6176cd94c8a21c722732d97de0437f9e83160ea4c57d3d59e61e4a74e3/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b3076d6176cd94c8a21c722732d97de0437f9e83160ea4c57d3d59e61e4a74e3/userdata/shm major:0 minor:231 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/c1a1f09a0076728a7605f14aa2f5e1e4e67f07959fea6d30401da7eae836cc1d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c1a1f09a0076728a7605f14aa2f5e1e4e67f07959fea6d30401da7eae836cc1d/userdata/shm major:0 minor:263 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c29f56d4ea9bf3bce066e5fba5216f6d81c3f45eb82e43475a2e438e6dc2d99e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c29f56d4ea9bf3bce066e5fba5216f6d81c3f45eb82e43475a2e438e6dc2d99e/userdata/shm major:0 minor:260 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c47fa190606cd38023fc533f65cb7825afa7c8fefd6bf8e60afbd6d31f3e48e7/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c47fa190606cd38023fc533f65cb7825afa7c8fefd6bf8e60afbd6d31f3e48e7/userdata/shm major:0 minor:119 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f13b0447f1cf8ebd279a6530a199c8c8c26e292eacc831f21854583254577b3a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f13b0447f1cf8ebd279a6530a199c8c8c26e292eacc831f21854583254577b3a/userdata/shm major:0 minor:276 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f1cd5ceb84540f7c9e7a009d076e0390ec979230bb207211f3a50905c2ec9f83/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f1cd5ceb84540f7c9e7a009d076e0390ec979230bb207211f3a50905c2ec9f83/userdata/shm major:0 minor:105 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f50b5162b61414bd7ea44a7ec549d8b7fce7a639d564096b29f4d95c071c3604/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f50b5162b61414bd7ea44a7ec549d8b7fce7a639d564096b29f4d95c071c3604/userdata/shm major:0 minor:100 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/00350ac7-b40a-4459-b94c-a37d7b613645/volumes/kubernetes.io~projected/kube-api-access-b67hn:{mountpoint:/var/lib/kubelet/pods/00350ac7-b40a-4459-b94c-a37d7b613645/volumes/kubernetes.io~projected/kube-api-access-b67hn major:0 minor:123 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/09a5682c-4f13-4b8c-8179-3e6dfa8f98db/volumes/kubernetes.io~projected/kube-api-access-8xv94:{mountpoint:/var/lib/kubelet/pods/09a5682c-4f13-4b8c-8179-3e6dfa8f98db/volumes/kubernetes.io~projected/kube-api-access-8xv94 major:0 minor:216 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/09a5682c-4f13-4b8c-8179-3e6dfa8f98db/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/09a5682c-4f13-4b8c-8179-3e6dfa8f98db/volumes/kubernetes.io~secret/serving-cert major:0 minor:209 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0e79950f-50a5-46ec-b836-7a35dcce2851/volumes/kubernetes.io~projected/kube-api-access-rdsv9:{mountpoint:/var/lib/kubelet/pods/0e79950f-50a5-46ec-b836-7a35dcce2851/volumes/kubernetes.io~projected/kube-api-access-rdsv9 major:0 minor:261 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/20ff930f-ec0d-40ed-a879-1546691f685d/volumes/kubernetes.io~projected/kube-api-access-d5v7l:{mountpoint:/var/lib/kubelet/pods/20ff930f-ec0d-40ed-a879-1546691f685d/volumes/kubernetes.io~projected/kube-api-access-d5v7l major:0 minor:253 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/20ff930f-ec0d-40ed-a879-1546691f685d/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/20ff930f-ec0d-40ed-a879-1546691f685d/volumes/kubernetes.io~secret/serving-cert major:0 minor:223 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/210dd7f0-d1c0-407a-b89b-f11ef605e5df/volumes/kubernetes.io~projected/kube-api-access-w4sfm:{mountpoint:/var/lib/kubelet/pods/210dd7f0-d1c0-407a-b89b-f11ef605e5df/volumes/kubernetes.io~projected/kube-api-access-w4sfm major:0 minor:125 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/210dd7f0-d1c0-407a-b89b-f11ef605e5df/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert:{mountpoint:/var/lib/kubelet/pods/210dd7f0-d1c0-407a-b89b-f11ef605e5df/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert major:0 minor:124 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/22f85e98-eb36-46b2-ab5d-7c21e060cba5/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/22f85e98-eb36-46b2-ab5d-7c21e060cba5/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:214 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/22f85e98-eb36-46b2-ab5d-7c21e060cba5/volumes/kubernetes.io~projected/kube-api-access-8qqcw:{mountpoint:/var/lib/kubelet/pods/22f85e98-eb36-46b2-ab5d-7c21e060cba5/volumes/kubernetes.io~projected/kube-api-access-8qqcw major:0 minor:215 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/22ff82cf-0d7d-4955-9b7c-97757acbc021/volumes/kubernetes.io~projected/kube-api-access-sglvd:{mountpoint:/var/lib/kubelet/pods/22ff82cf-0d7d-4955-9b7c-97757acbc021/volumes/kubernetes.io~projected/kube-api-access-sglvd major:0 minor:104 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/23003a2f-2053-47cc-8133-23eb886d4da0/volumes/kubernetes.io~projected/kube-api-access-q7gdm:{mountpoint:/var/lib/kubelet/pods/23003a2f-2053-47cc-8133-23eb886d4da0/volumes/kubernetes.io~projected/kube-api-access-q7gdm major:0 minor:257 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2ef691ec-d1f0-4c97-97e4-4aa7a6c0a86c/volumes/kubernetes.io~projected/kube-api-access-rgl8m:{mountpoint:/var/lib/kubelet/pods/2ef691ec-d1f0-4c97-97e4-4aa7a6c0a86c/volumes/kubernetes.io~projected/kube-api-access-rgl8m major:0 minor:229 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2ef691ec-d1f0-4c97-97e4-4aa7a6c0a86c/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/2ef691ec-d1f0-4c97-97e4-4aa7a6c0a86c/volumes/kubernetes.io~secret/serving-cert major:0 minor:221 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/2faf85a2-29bb-4275-a12b-0ef1663a4f0d/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/2faf85a2-29bb-4275-a12b-0ef1663a4f0d/volumes/kubernetes.io~projected/kube-api-access major:0 minor:228 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2faf85a2-29bb-4275-a12b-0ef1663a4f0d/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/2faf85a2-29bb-4275-a12b-0ef1663a4f0d/volumes/kubernetes.io~secret/serving-cert major:0 minor:224 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3065e4b4-4493-41ce-b9d2-89315475f74f/volumes/kubernetes.io~projected/kube-api-access-wpr8b:{mountpoint:/var/lib/kubelet/pods/3065e4b4-4493-41ce-b9d2-89315475f74f/volumes/kubernetes.io~projected/kube-api-access-wpr8b major:0 minor:236 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3065e4b4-4493-41ce-b9d2-89315475f74f/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/3065e4b4-4493-41ce-b9d2-89315475f74f/volumes/kubernetes.io~secret/serving-cert major:0 minor:220 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3776fdb6-25a1-4e3d-bdd1-437c69af3a55/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/3776fdb6-25a1-4e3d-bdd1-437c69af3a55/volumes/kubernetes.io~projected/kube-api-access major:0 minor:99 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5707066a-bd66-41bc-8cea-cff1630ab5ee/volumes/kubernetes.io~projected/kube-api-access-2dkgv:{mountpoint:/var/lib/kubelet/pods/5707066a-bd66-41bc-8cea-cff1630ab5ee/volumes/kubernetes.io~projected/kube-api-access-2dkgv major:0 minor:213 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/57189f7c-5987-457d-a299-0a6b9bcb3e24/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/57189f7c-5987-457d-a299-0a6b9bcb3e24/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:244 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/57189f7c-5987-457d-a299-0a6b9bcb3e24/volumes/kubernetes.io~projected/kube-api-access-5r8zt:{mountpoint:/var/lib/kubelet/pods/57189f7c-5987-457d-a299-0a6b9bcb3e24/volumes/kubernetes.io~projected/kube-api-access-5r8zt major:0 minor:230 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/61ab4d32-c732-4be5-aa85-a2e1dd21cb60/volumes/kubernetes.io~projected/kube-api-access-lzprw:{mountpoint:/var/lib/kubelet/pods/61ab4d32-c732-4be5-aa85-a2e1dd21cb60/volumes/kubernetes.io~projected/kube-api-access-lzprw major:0 minor:265 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/61ab4d32-c732-4be5-aa85-a2e1dd21cb60/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/61ab4d32-c732-4be5-aa85-a2e1dd21cb60/volumes/kubernetes.io~secret/serving-cert major:0 minor:226 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/65157a9b-3df7-4cc1-a85a-a5dfa59921ad/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/65157a9b-3df7-4cc1-a85a-a5dfa59921ad/volumes/kubernetes.io~projected/kube-api-access major:0 minor:241 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/65157a9b-3df7-4cc1-a85a-a5dfa59921ad/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/65157a9b-3df7-4cc1-a85a-a5dfa59921ad/volumes/kubernetes.io~secret/serving-cert major:0 minor:222 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6d26f719-43b9-4c1c-9a54-ff800177db68/volumes/kubernetes.io~projected/kube-api-access-v86j8:{mountpoint:/var/lib/kubelet/pods/6d26f719-43b9-4c1c-9a54-ff800177db68/volumes/kubernetes.io~projected/kube-api-access-v86j8 major:0 minor:239 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/71ca96e8-5108-455c-bb3c-17977d38e912/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/71ca96e8-5108-455c-bb3c-17977d38e912/volumes/kubernetes.io~projected/kube-api-access major:0 minor:272 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/71ca96e8-5108-455c-bb3c-17977d38e912/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/71ca96e8-5108-455c-bb3c-17977d38e912/volumes/kubernetes.io~secret/serving-cert major:0 minor:218 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/74bebf0b-6727-4959-8239-a9389e630524/volumes/kubernetes.io~projected/kube-api-access-f92mb:{mountpoint:/var/lib/kubelet/pods/74bebf0b-6727-4959-8239-a9389e630524/volumes/kubernetes.io~projected/kube-api-access-f92mb major:0 minor:273 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7949621e-4da6-4e43-a1f3-2ef303bf6aa6/volumes/kubernetes.io~projected/kube-api-access-j5hsj:{mountpoint:/var/lib/kubelet/pods/7949621e-4da6-4e43-a1f3-2ef303bf6aa6/volumes/kubernetes.io~projected/kube-api-access-j5hsj major:0 minor:92 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7ab32efc-7cc5-4e36-9c1c-05efb19914e2/volumes/kubernetes.io~projected/kube-api-access-55l9j:{mountpoint:/var/lib/kubelet/pods/7ab32efc-7cc5-4e36-9c1c-05efb19914e2/volumes/kubernetes.io~projected/kube-api-access-55l9j major:0 minor:254 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072/volumes/kubernetes.io~projected/kube-api-access-hnk9k:{mountpoint:/var/lib/kubelet/pods/8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072/volumes/kubernetes.io~projected/kube-api-access-hnk9k major:0 minor:240 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072/volumes/kubernetes.io~secret/serving-cert major:0 minor:219 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9817d1ec-3d7c-49fb-8e41-26f5727ef9e8/volumes/kubernetes.io~projected/kube-api-access-swxwt:{mountpoint:/var/lib/kubelet/pods/9817d1ec-3d7c-49fb-8e41-26f5727ef9e8/volumes/kubernetes.io~projected/kube-api-access-swxwt major:0 minor:91 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/9817d1ec-3d7c-49fb-8e41-26f5727ef9e8/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/9817d1ec-3d7c-49fb-8e41-26f5727ef9e8/volumes/kubernetes.io~secret/metrics-tls major:0 minor:43 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9ce482dc-d0ac-40bc-9058-a1cfdc81575e/volumes/kubernetes.io~projected/kube-api-access-9j527:{mountpoint:/var/lib/kubelet/pods/9ce482dc-d0ac-40bc-9058-a1cfdc81575e/volumes/kubernetes.io~projected/kube-api-access-9j527 major:0 minor:249 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9d653bfa-7168-49fa-a838-aedb33c7e60f/volumes/kubernetes.io~projected/kube-api-access-8jmlf:{mountpoint:/var/lib/kubelet/pods/9d653bfa-7168-49fa-a838-aedb33c7e60f/volumes/kubernetes.io~projected/kube-api-access-8jmlf major:0 minor:139 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9d653bfa-7168-49fa-a838-aedb33c7e60f/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/9d653bfa-7168-49fa-a838-aedb33c7e60f/volumes/kubernetes.io~secret/webhook-cert major:0 minor:138 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/acbaba45-12d9-40b9-818c-4b091d7929b1/volumes/kubernetes.io~projected/kube-api-access-kcgqr:{mountpoint:/var/lib/kubelet/pods/acbaba45-12d9-40b9-818c-4b091d7929b1/volumes/kubernetes.io~projected/kube-api-access-kcgqr major:0 minor:238 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b097596e-79e1-44d1-be8a-96340042a041/volumes/kubernetes.io~projected/kube-api-access-dx99f:{mountpoint:/var/lib/kubelet/pods/b097596e-79e1-44d1-be8a-96340042a041/volumes/kubernetes.io~projected/kube-api-access-dx99f major:0 minor:246 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d26a4fce-8eed-44d0-96a3-40ffd0b336a6/volume-subpaths/run-systemd/ovnkube-controller/6:{mountpoint:/var/lib/kubelet/pods/d26a4fce-8eed-44d0-96a3-40ffd0b336a6/volume-subpaths/run-systemd/ovnkube-controller/6 major:0 minor:24 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/d26a4fce-8eed-44d0-96a3-40ffd0b336a6/volumes/kubernetes.io~projected/kube-api-access-s2j6m:{mountpoint:/var/lib/kubelet/pods/d26a4fce-8eed-44d0-96a3-40ffd0b336a6/volumes/kubernetes.io~projected/kube-api-access-s2j6m major:0 minor:127 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d26a4fce-8eed-44d0-96a3-40ffd0b336a6/volumes/kubernetes.io~secret/ovn-node-metrics-cert:{mountpoint:/var/lib/kubelet/pods/d26a4fce-8eed-44d0-96a3-40ffd0b336a6/volumes/kubernetes.io~secret/ovn-node-metrics-cert major:0 minor:126 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e9425526-9f51-4302-a19d-a8107f56c582/volumes/kubernetes.io~projected/kube-api-access-z5kbh:{mountpoint:/var/lib/kubelet/pods/e9425526-9f51-4302-a19d-a8107f56c582/volumes/kubernetes.io~projected/kube-api-access-z5kbh major:0 minor:243 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e9425526-9f51-4302-a19d-a8107f56c582/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/e9425526-9f51-4302-a19d-a8107f56c582/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert major:0 minor:225 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f202273a-b111-46ce-b404-7e481d2c7ff9/volumes/kubernetes.io~projected/kube-api-access-56bt6:{mountpoint:/var/lib/kubelet/pods/f202273a-b111-46ce-b404-7e481d2c7ff9/volumes/kubernetes.io~projected/kube-api-access-56bt6 major:0 minor:242 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fec3170d-3f3e-42f5-b20a-da53721c0dac/volumes/kubernetes.io~projected/kube-api-access-tqmzh:{mountpoint:/var/lib/kubelet/pods/fec3170d-3f3e-42f5-b20a-da53721c0dac/volumes/kubernetes.io~projected/kube-api-access-tqmzh major:0 minor:250 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fec3170d-3f3e-42f5-b20a-da53721c0dac/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/fec3170d-3f3e-42f5-b20a-da53721c0dac/volumes/kubernetes.io~secret/etcd-client major:0 minor:217 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/fec3170d-3f3e-42f5-b20a-da53721c0dac/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/fec3170d-3f3e-42f5-b20a-da53721c0dac/volumes/kubernetes.io~secret/serving-cert major:0 minor:227 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ff2dfe9d-2834-43cb-b093-0831b2b87131/volumes/kubernetes.io~projected/kube-api-access-zsj2w:{mountpoint:/var/lib/kubelet/pods/ff2dfe9d-2834-43cb-b093-0831b2b87131/volumes/kubernetes.io~projected/kube-api-access-zsj2w major:0 minor:245 fsType:tmpfs blockSize:0} overlay_0-102:{mountpoint:/var/lib/containers/storage/overlay/de4a0233af84aa1e0ede8636890c8f70629f86cf172e50bfad96ee2635973d21/merged major:0 minor:102 fsType:overlay blockSize:0} overlay_0-107:{mountpoint:/var/lib/containers/storage/overlay/8ad1ebb52e470060df4eb2e06be93eee046d3df3f6be8b115e77fcd336ea9665/merged major:0 minor:107 fsType:overlay blockSize:0} overlay_0-108:{mountpoint:/var/lib/containers/storage/overlay/943fbbfde2af9482454a4d3f59efc49d388f299bae1b373ea953dd2e46bd7907/merged major:0 minor:108 fsType:overlay blockSize:0} overlay_0-110:{mountpoint:/var/lib/containers/storage/overlay/180f40d12a3ee409daf0d3ba8cfcba87f09f82070282d1163b8c0fa27f904d59/merged major:0 minor:110 fsType:overlay blockSize:0} overlay_0-121:{mountpoint:/var/lib/containers/storage/overlay/92836eff21952cfed3970addb5e7acbb1572337356ebdf5e162d7924f6e52027/merged major:0 minor:121 fsType:overlay blockSize:0} overlay_0-134:{mountpoint:/var/lib/containers/storage/overlay/cb680a9525d5d7e6878d23595dccf6b46ca78ee2df069ba18ce495468eb99aab/merged major:0 minor:134 fsType:overlay blockSize:0} overlay_0-136:{mountpoint:/var/lib/containers/storage/overlay/79f0a3f73d848bccac450c572bb5dda831a9821cd2d0eeda80b099e59bb0ddfa/merged major:0 minor:136 fsType:overlay blockSize:0} overlay_0-140:{mountpoint:/var/lib/containers/storage/overlay/ae0e924aa9f07ccac185c06f6f5249b0c8fb3861fe7eab8df6265e3978c2449f/merged major:0 minor:140 fsType:overlay blockSize:0} 
overlay_0-147:{mountpoint:/var/lib/containers/storage/overlay/421fb9b0ddf531ee16fab92031a796160bcd8f2f13d72ec92ec7b45097a66725/merged major:0 minor:147 fsType:overlay blockSize:0} overlay_0-149:{mountpoint:/var/lib/containers/storage/overlay/1c11298b137c3b36ba72f433256cdf72277b19c9f495deb91a8aab85dfa812aa/merged major:0 minor:149 fsType:overlay blockSize:0} overlay_0-155:{mountpoint:/var/lib/containers/storage/overlay/ad5e65fd337f3968ec9ca90169046ac08fd2f3a51ffda26e2db2b4d3360d176e/merged major:0 minor:155 fsType:overlay blockSize:0} overlay_0-158:{mountpoint:/var/lib/containers/storage/overlay/93cb851bdaf25b1c13607e7fd926ecb402a7d8856f246e6f4cb38ddacb2a28e9/merged major:0 minor:158 fsType:overlay blockSize:0} overlay_0-160:{mountpoint:/var/lib/containers/storage/overlay/6f28bf396af4320f77ab4d3644b0f9f235f78e20488cc4f589761c12ab54d22b/merged major:0 minor:160 fsType:overlay blockSize:0} overlay_0-164:{mountpoint:/var/lib/containers/storage/overlay/66d85fd1207b9ab83d4b4a1d93ad14008fc147614a63bd5edfc3af857c022c19/merged major:0 minor:164 fsType:overlay blockSize:0} overlay_0-169:{mountpoint:/var/lib/containers/storage/overlay/c9b328dd9a39b700fb37a7131e2cc35b38fb02ce413dea25004952b14cfa8599/merged major:0 minor:169 fsType:overlay blockSize:0} overlay_0-174:{mountpoint:/var/lib/containers/storage/overlay/dd0c4b75690c4cd9db440d21e91401e1cd14ee1e9ec83bc1f18f2cf411b7ff62/merged major:0 minor:174 fsType:overlay blockSize:0} overlay_0-179:{mountpoint:/var/lib/containers/storage/overlay/82bcc607371b6f0ac7380f0c3bb8f27ddc1e90103ab5ddaa515003b18e28a3a8/merged major:0 minor:179 fsType:overlay blockSize:0} overlay_0-183:{mountpoint:/var/lib/containers/storage/overlay/bdf2c0c59098bd49f427527f2b48088d52501a779c7b844da16b054b3cbc5f39/merged major:0 minor:183 fsType:overlay blockSize:0} overlay_0-186:{mountpoint:/var/lib/containers/storage/overlay/051e69eca58b28c522c759a9a8a38122e96d0ca4c35a7549b3c62fd612c192a6/merged major:0 minor:186 fsType:overlay blockSize:0} 
overlay_0-190:{mountpoint:/var/lib/containers/storage/overlay/a7c59bc9c312dce4e4d39c3d61c90cb19134c319bf86ab95b2ea495ad14256ab/merged major:0 minor:190 fsType:overlay blockSize:0} overlay_0-199:{mountpoint:/var/lib/containers/storage/overlay/df47865326cc5105003e95a46aac7446117b038aece9c8bd312c0b7c51a394ea/merged major:0 minor:199 fsType:overlay blockSize:0} overlay_0-204:{mountpoint:/var/lib/containers/storage/overlay/04ee7495fc18803cc2a8141c5658e977d5bc1dd07430525962b5411f755006df/merged major:0 minor:204 fsType:overlay blockSize:0} overlay_0-255:{mountpoint:/var/lib/containers/storage/overlay/6c015b0c9e7a94df4ed2231658793762d8ac9e7ee13d85a79ac72697f3a12b66/merged major:0 minor:255 fsType:overlay blockSize:0} overlay_0-258:{mountpoint:/var/lib/containers/storage/overlay/4cab8d08bd84b7691dbe17f6512155bfdf608cd3f9090fef7e075b5a2db2c7e7/merged major:0 minor:258 fsType:overlay blockSize:0} overlay_0-268:{mountpoint:/var/lib/containers/storage/overlay/b6466fdddbc8c9e9160e6e4ab7e81be580b99b396679e15000e7b35e81dd98f0/merged major:0 minor:268 fsType:overlay blockSize:0} overlay_0-270:{mountpoint:/var/lib/containers/storage/overlay/dee3882ba86790398603aa5bc046475614e225a858388fd7aa3af165598490a3/merged major:0 minor:270 fsType:overlay blockSize:0} overlay_0-274:{mountpoint:/var/lib/containers/storage/overlay/2153f917a9bd9665ae3cf272e861561b9af05475bf9d15e26c210a27b42a4fda/merged major:0 minor:274 fsType:overlay blockSize:0} overlay_0-279:{mountpoint:/var/lib/containers/storage/overlay/77bbc818962c576b50add13477beb8eb9f274d752e9878166ef4b924039a38a3/merged major:0 minor:279 fsType:overlay blockSize:0} overlay_0-282:{mountpoint:/var/lib/containers/storage/overlay/121b7a8b5d77a08b8b00c2a2ef5dfd74e7e5d2d58eabf9216210bd5d36197760/merged major:0 minor:282 fsType:overlay blockSize:0} overlay_0-286:{mountpoint:/var/lib/containers/storage/overlay/80c38336cfd12a9baa19a364029e07e08a1637a5a1faa3900f03c0a1840f6889/merged major:0 minor:286 fsType:overlay blockSize:0} 
overlay_0-289:{mountpoint:/var/lib/containers/storage/overlay/2d101ba379373ccba4495608e3b277b03428afb12d03c55bbc2f10b348994193/merged major:0 minor:289 fsType:overlay blockSize:0} overlay_0-298:{mountpoint:/var/lib/containers/storage/overlay/586d58267a039be6c67e94306ecc88995270cb42117f24cbca1a34d17a7509fd/merged major:0 minor:298 fsType:overlay blockSize:0} overlay_0-300:{mountpoint:/var/lib/containers/storage/overlay/bea7e69d07600b0691c31b16a9bf4cb6a460bae48b1b4ff092fa1d35f971593c/merged major:0 minor:300 fsType:overlay blockSize:0} overlay_0-302:{mountpoint:/var/lib/containers/storage/overlay/8b88ed77b4c2badc1bc7b366e99b7f076c5de8f2798d00566d76c631de3f02cf/merged major:0 minor:302 fsType:overlay blockSize:0} overlay_0-304:{mountpoint:/var/lib/containers/storage/overlay/5abff4507f27ae2b0075e11e03b0bcf1824441620372f2e2ecde998480c88ba3/merged major:0 minor:304 fsType:overlay blockSize:0} overlay_0-306:{mountpoint:/var/lib/containers/storage/overlay/6eab6c0f8bd9f9219d670b47b645f5cd92e4ec3f51977d458a443c9f1f51e997/merged major:0 minor:306 fsType:overlay blockSize:0} overlay_0-46:{mountpoint:/var/lib/containers/storage/overlay/69288cc4263b6395c99b75fca9502baf71f48ab9e567c2a0c51891a794b2936b/merged major:0 minor:46 fsType:overlay blockSize:0} overlay_0-48:{mountpoint:/var/lib/containers/storage/overlay/e5c4aa73327c02545261b91064f94bfa59173b4cc56c585200ae7067b3a14f1f/merged major:0 minor:48 fsType:overlay blockSize:0} overlay_0-52:{mountpoint:/var/lib/containers/storage/overlay/7995298ba41db3d6d03e5a6e590dadc8ef89a70590791975842309bd73e3dc4a/merged major:0 minor:52 fsType:overlay blockSize:0} overlay_0-56:{mountpoint:/var/lib/containers/storage/overlay/7b540a87d6e038be422cf2c4108fcc1ba01fd48d118b274153b2188b6b9e3295/merged major:0 minor:56 fsType:overlay blockSize:0} overlay_0-60:{mountpoint:/var/lib/containers/storage/overlay/f9604ed2fefb276a4c0fd92e26c481cd8868464c8b25cc709852c530672fe8a0/merged major:0 minor:60 fsType:overlay blockSize:0} 
overlay_0-62:{mountpoint:/var/lib/containers/storage/overlay/12c114e061b78dc9dda88915696644d50673dbc0ac6a4732cc1af97be738e904/merged major:0 minor:62 fsType:overlay blockSize:0} overlay_0-64:{mountpoint:/var/lib/containers/storage/overlay/402599c81a7372719b12f247fe4bf5588fbcbbb3874130779fac68749b31e8d4/merged major:0 minor:64 fsType:overlay blockSize:0} overlay_0-66:{mountpoint:/var/lib/containers/storage/overlay/1797d8d1476a7cf48d7a0d320d19056c30c4fddf8f8ab615b17bca7668066d93/merged major:0 minor:66 fsType:overlay blockSize:0} overlay_0-74:{mountpoint:/var/lib/containers/storage/overlay/a2ab3dbef3145961f67065c09993b546700fd407e43bc9be2f71b405d16893a2/merged major:0 minor:74 fsType:overlay blockSize:0} overlay_0-75:{mountpoint:/var/lib/containers/storage/overlay/1fd4bc0996ed8454e3048b6339d7f3704e3aa4037c3a9a205e4e48c213590347/merged major:0 minor:75 fsType:overlay blockSize:0} overlay_0-84:{mountpoint:/var/lib/containers/storage/overlay/e0ef24597cf0e20f1f57c91fbad5ea47fce50b8fe25328ed89b3d531b490c4d0/merged major:0 minor:84 fsType:overlay blockSize:0} overlay_0-89:{mountpoint:/var/lib/containers/storage/overlay/9885510bb0bfdecc58be5f122fddcfd17cbd282e51171df051a43f1ed399eca7/merged major:0 minor:89 fsType:overlay blockSize:0}] Mar 20 08:35:19.157684 master-0 kubenswrapper[7476]: I0320 08:35:19.156722 7476 manager.go:217] Machine: {Timestamp:2026-03-20 08:35:19.155792979 +0000 UTC m=+0.124561525 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:68fa82f9afdb4f4db7851aefd1680b64 SystemUUID:68fa82f9-afdb-4f4d-b785-1aefd1680b64 BootID:8450f042-88d6-4841-ac46-8e16fb0e4c12 Filesystems:[{Device:/var/lib/kubelet/pods/00350ac7-b40a-4459-b94c-a37d7b613645/volumes/kubernetes.io~projected/kube-api-access-b67hn DeviceMajor:0 
DeviceMinor:123 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/89c74c8aa017803f478ccd8093ddb6ce42a0913682f0794b7a17848c918f0bd0/userdata/shm DeviceMajor:0 DeviceMinor:50 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7551d0384a0ca5d55a0e01a66e0811b519b2e2c926c179ce2206a11d57d556c3/userdata/shm DeviceMajor:0 DeviceMinor:233 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-270 DeviceMajor:0 DeviceMinor:270 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-274 DeviceMajor:0 DeviceMinor:274 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-164 DeviceMajor:0 DeviceMinor:164 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d26a4fce-8eed-44d0-96a3-40ffd0b336a6/volumes/kubernetes.io~secret/ovn-node-metrics-cert DeviceMajor:0 DeviceMinor:126 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/22f85e98-eb36-46b2-ab5d-7c21e060cba5/volumes/kubernetes.io~projected/kube-api-access-8qqcw DeviceMajor:0 DeviceMinor:215 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/2faf85a2-29bb-4275-a12b-0ef1663a4f0d/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:228 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-286 DeviceMajor:0 DeviceMinor:286 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-60 DeviceMajor:0 DeviceMinor:60 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d26a4fce-8eed-44d0-96a3-40ffd0b336a6/volumes/kubernetes.io~projected/kube-api-access-s2j6m DeviceMajor:0 DeviceMinor:127 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 
Type:vfs Inodes:98304 HasInodes:true} {Device:/var/lib/kubelet/pods/fec3170d-3f3e-42f5-b20a-da53721c0dac/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:227 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/3065e4b4-4493-41ce-b9d2-89315475f74f/volumes/kubernetes.io~projected/kube-api-access-wpr8b DeviceMajor:0 DeviceMinor:236 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/0e79950f-50a5-46ec-b836-7a35dcce2851/volumes/kubernetes.io~projected/kube-api-access-rdsv9 DeviceMajor:0 DeviceMinor:261 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-304 DeviceMajor:0 DeviceMinor:304 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-174 DeviceMajor:0 DeviceMinor:174 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f202273a-b111-46ce-b404-7e481d2c7ff9/volumes/kubernetes.io~projected/kube-api-access-56bt6 DeviceMajor:0 DeviceMinor:242 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/71ca96e8-5108-455c-bb3c-17977d38e912/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:272 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-155 DeviceMajor:0 DeviceMinor:155 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-147 DeviceMajor:0 DeviceMinor:147 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c1a1f09a0076728a7605f14aa2f5e1e4e67f07959fea6d30401da7eae836cc1d/userdata/shm DeviceMajor:0 DeviceMinor:263 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/23b5f0e312ee437adb179ea398b2301b1690487e9d814d24ef554192ded477e8/userdata/shm DeviceMajor:0 DeviceMinor:288 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-298 
DeviceMajor:0 DeviceMinor:298 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-107 DeviceMajor:0 DeviceMinor:107 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/fec3170d-3f3e-42f5-b20a-da53721c0dac/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:217 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/65157a9b-3df7-4cc1-a85a-a5dfa59921ad/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:222 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/20ff930f-ec0d-40ed-a879-1546691f685d/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:223 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/36fd86042cdc5d322b686c2b108009cab15460fe5a8fde9f08be705f3ff47a25/userdata/shm DeviceMajor:0 DeviceMinor:247 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-282 DeviceMajor:0 DeviceMinor:282 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-289 DeviceMajor:0 DeviceMinor:289 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:overlay_0-136 DeviceMajor:0 DeviceMinor:136 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/2faf85a2-29bb-4275-a12b-0ef1663a4f0d/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:224 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/acbaba45-12d9-40b9-818c-4b091d7929b1/volumes/kubernetes.io~projected/kube-api-access-kcgqr DeviceMajor:0 DeviceMinor:238 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/var/lib/kubelet/pods/74bebf0b-6727-4959-8239-a9389e630524/volumes/kubernetes.io~projected/kube-api-access-f92mb DeviceMajor:0 DeviceMinor:273 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-89 DeviceMajor:0 DeviceMinor:89 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-62 DeviceMajor:0 DeviceMinor:62 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9d653bfa-7168-49fa-a838-aedb33c7e60f/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:138 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/31a46ba310ff197c87c66f84e5bd99a13a3ff1f8cbacfdf28d2bf427d9553306/userdata/shm DeviceMajor:0 DeviceMinor:142 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-158 DeviceMajor:0 DeviceMinor:158 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7ab32efc-7cc5-4e36-9c1c-05efb19914e2/volumes/kubernetes.io~projected/kube-api-access-55l9j DeviceMajor:0 DeviceMinor:254 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1cc2b302ee7f4974624b7ec258eb40f2f3ce6fad71036a03b3d4361e0bca7e50/userdata/shm DeviceMajor:0 DeviceMinor:54 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-149 DeviceMajor:0 DeviceMinor:149 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:overlay_0-108 DeviceMajor:0 DeviceMinor:108 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7c5fe5a51a0646232d6aeb7457e06eaa7bb1c6097a67919150bb37fc9d450327/userdata/shm DeviceMajor:0 DeviceMinor:266 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-102 DeviceMajor:0 
DeviceMinor:102 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-183 DeviceMajor:0 DeviceMinor:183 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/09a5682c-4f13-4b8c-8179-3e6dfa8f98db/volumes/kubernetes.io~projected/kube-api-access-8xv94 DeviceMajor:0 DeviceMinor:216 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b3076d6176cd94c8a21c722732d97de0437f9e83160ea4c57d3d59e61e4a74e3/userdata/shm DeviceMajor:0 DeviceMinor:231 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-300 DeviceMajor:0 DeviceMinor:300 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-84 DeviceMajor:0 DeviceMinor:84 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-134 DeviceMajor:0 DeviceMinor:134 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/09a5682c-4f13-4b8c-8179-3e6dfa8f98db/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:209 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-302 DeviceMajor:0 DeviceMinor:302 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1ca6b41abdff6af562839f350ede4490e65a1341fc4f1ed50c580d41768ec8c0/userdata/shm DeviceMajor:0 DeviceMinor:58 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/3776fdb6-25a1-4e3d-bdd1-437c69af3a55/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:99 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/210dd7f0-d1c0-407a-b89b-f11ef605e5df/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert DeviceMajor:0 DeviceMinor:124 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/var/lib/kubelet/pods/210dd7f0-d1c0-407a-b89b-f11ef605e5df/volumes/kubernetes.io~projected/kube-api-access-w4sfm DeviceMajor:0 DeviceMinor:125 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-204 DeviceMajor:0 DeviceMinor:204 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/57189f7c-5987-457d-a299-0a6b9bcb3e24/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:244 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/9817d1ec-3d7c-49fb-8e41-26f5727ef9e8/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:43 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/fec3170d-3f3e-42f5-b20a-da53721c0dac/volumes/kubernetes.io~projected/kube-api-access-tqmzh DeviceMajor:0 DeviceMinor:250 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f13b0447f1cf8ebd279a6530a199c8c8c26e292eacc831f21854583254577b3a/userdata/shm DeviceMajor:0 DeviceMinor:276 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-74 DeviceMajor:0 DeviceMinor:74 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6d26f719-43b9-4c1c-9a54-ff800177db68/volumes/kubernetes.io~projected/kube-api-access-v86j8 DeviceMajor:0 DeviceMinor:239 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/e9425526-9f51-4302-a19d-a8107f56c582/volumes/kubernetes.io~projected/kube-api-access-z5kbh DeviceMajor:0 DeviceMinor:243 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/9540823dea8e0108833218a65d98423f8d996d846bbeaa47cddd4e7ba48fd916/userdata/shm DeviceMajor:0 DeviceMinor:284 Capacity:67108864 Type:vfs 
Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/9d653bfa-7168-49fa-a838-aedb33c7e60f/volumes/kubernetes.io~projected/kube-api-access-8jmlf DeviceMajor:0 DeviceMinor:139 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/2ef691ec-d1f0-4c97-97e4-4aa7a6c0a86c/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:221 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-64 DeviceMajor:0 DeviceMinor:64 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-186 DeviceMajor:0 DeviceMinor:186 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d26a4fce-8eed-44d0-96a3-40ffd0b336a6/volume-subpaths/run-systemd/ovnkube-controller/6 DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/var/lib/kubelet/pods/ff2dfe9d-2834-43cb-b093-0831b2b87131/volumes/kubernetes.io~projected/kube-api-access-zsj2w DeviceMajor:0 DeviceMinor:245 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-179 DeviceMajor:0 DeviceMinor:179 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e9425526-9f51-4302-a19d-a8107f56c582/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert DeviceMajor:0 DeviceMinor:225 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c47fa190606cd38023fc533f65cb7825afa7c8fefd6bf8e60afbd6d31f3e48e7/userdata/shm DeviceMajor:0 DeviceMinor:119 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-268 DeviceMajor:0 DeviceMinor:268 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-279 DeviceMajor:0 DeviceMinor:279 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/20ff930f-ec0d-40ed-a879-1546691f685d/volumes/kubernetes.io~projected/kube-api-access-d5v7l 
DeviceMajor:0 DeviceMinor:253 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/b097596e-79e1-44d1-be8a-96340042a041/volumes/kubernetes.io~projected/kube-api-access-dx99f DeviceMajor:0 DeviceMinor:246 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-66 DeviceMajor:0 DeviceMinor:66 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-190 DeviceMajor:0 DeviceMinor:190 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/71ca96e8-5108-455c-bb3c-17977d38e912/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:218 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/23003a2f-2053-47cc-8133-23eb886d4da0/volumes/kubernetes.io~projected/kube-api-access-q7gdm DeviceMajor:0 DeviceMinor:257 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-160 DeviceMajor:0 DeviceMinor:160 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/22f85e98-eb36-46b2-ab5d-7c21e060cba5/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:214 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/57189f7c-5987-457d-a299-0a6b9bcb3e24/volumes/kubernetes.io~projected/kube-api-access-5r8zt DeviceMajor:0 DeviceMinor:230 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-306 DeviceMajor:0 DeviceMinor:306 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/22ff82cf-0d7d-4955-9b7c-97757acbc021/volumes/kubernetes.io~projected/kube-api-access-sglvd DeviceMajor:0 DeviceMinor:104 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/var/lib/kubelet/pods/5707066a-bd66-41bc-8cea-cff1630ab5ee/volumes/kubernetes.io~projected/kube-api-access-2dkgv DeviceMajor:0 DeviceMinor:213 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-75 DeviceMajor:0 DeviceMinor:75 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7949621e-4da6-4e43-a1f3-2ef303bf6aa6/volumes/kubernetes.io~projected/kube-api-access-j5hsj DeviceMajor:0 DeviceMinor:92 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/3065e4b4-4493-41ce-b9d2-89315475f74f/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:220 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1522904bcce5d0ac5aef96a7d518d8795f633c0cc736ad3114aa64de5474f52b/userdata/shm DeviceMajor:0 DeviceMinor:235 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0118b40880c157c21da0a1b6b65535a3a28545387d34a792b84d5f5f7d802bb1/userdata/shm DeviceMajor:0 DeviceMinor:251 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c29f56d4ea9bf3bce066e5fba5216f6d81c3f45eb82e43475a2e438e6dc2d99e/userdata/shm DeviceMajor:0 DeviceMinor:260 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/9817d1ec-3d7c-49fb-8e41-26f5727ef9e8/volumes/kubernetes.io~projected/kube-api-access-swxwt DeviceMajor:0 DeviceMinor:91 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/9ce482dc-d0ac-40bc-9058-a1cfdc81575e/volumes/kubernetes.io~projected/kube-api-access-9j527 DeviceMajor:0 DeviceMinor:249 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-255 DeviceMajor:0 DeviceMinor:255 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-258 DeviceMajor:0 DeviceMinor:258 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/45bf1a9ecebdad7d1d939a42ed79f1d565faa93da259016b6c3e11a9010e1c03/userdata/shm DeviceMajor:0 DeviceMinor:128 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-48 DeviceMajor:0 DeviceMinor:48 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2eb15c3da7104afd61e8e0a9cecb48e57f16366430abff29d1fcba72d53fd3a2/userdata/shm DeviceMajor:0 DeviceMinor:130 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-199 DeviceMajor:0 DeviceMinor:199 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/61ab4d32-c732-4be5-aa85-a2e1dd21cb60/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:226 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/2ef691ec-d1f0-4c97-97e4-4aa7a6c0a86c/volumes/kubernetes.io~projected/kube-api-access-rgl8m DeviceMajor:0 DeviceMinor:229 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072/volumes/kubernetes.io~projected/kube-api-access-hnk9k DeviceMajor:0 DeviceMinor:240 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/61ab4d32-c732-4be5-aa85-a2e1dd21cb60/volumes/kubernetes.io~projected/kube-api-access-lzprw DeviceMajor:0 DeviceMinor:265 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/314750eb53635940d2e5e7382cfd93fd0e5f6effe69fa93e88c8c6eaa8362332/userdata/shm DeviceMajor:0 DeviceMinor:41 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-52 DeviceMajor:0 DeviceMinor:52 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-56 DeviceMajor:0 DeviceMinor:56 Capacity:214143315968 Type:vfs Inodes:104594880 
HasInodes:true} {Device:/run/containers/storage/overlay-containers/a7302efa9940a01d2a7da47b5029499cf2581b6adb8641d99af963b893a64957/userdata/shm DeviceMajor:0 DeviceMinor:42 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-169 DeviceMajor:0 DeviceMinor:169 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a5a71eafba7fd094c1b9785d7c1fd9e98b46812d646ac6843a8a763f472e8750/userdata/shm DeviceMajor:0 DeviceMinor:278 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f50b5162b61414bd7ea44a7ec549d8b7fce7a639d564096b29f4d95c071c3604/userdata/shm DeviceMajor:0 DeviceMinor:100 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-121 DeviceMajor:0 DeviceMinor:121 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-140 DeviceMajor:0 DeviceMinor:140 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/65157a9b-3df7-4cc1-a85a-a5dfa59921ad/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:241 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4767ac5e1fdc3320e004401bc470473fa3834d94268bcd37051a5ed0f54f6980/userdata/shm DeviceMajor:0 DeviceMinor:296 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-46 DeviceMajor:0 DeviceMinor:46 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f1cd5ceb84540f7c9e7a009d076e0390ec979230bb207211f3a50905c2ec9f83/userdata/shm DeviceMajor:0 DeviceMinor:105 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:219 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-110 
DeviceMajor:0 DeviceMinor:110 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:0118b40880c157c MacAddress:4a:44:a2:ab:4c:84 Speed:10000 Mtu:8900} {Name:1522904bcce5d0a MacAddress:8a:c0:df:d7:ab:00 Speed:10000 Mtu:8900} {Name:23b5f0e312ee437 MacAddress:7a:df:88:79:d9:92 Speed:10000 Mtu:8900} {Name:36fd86042cdc5d3 MacAddress:8e:51:a2:a3:be:84 Speed:10000 Mtu:8900} {Name:4767ac5e1fdc332 MacAddress:9a:0e:11:c3:86:d0 Speed:10000 Mtu:8900} {Name:7551d0384a0ca5d MacAddress:3a:2c:d5:31:54:80 Speed:10000 Mtu:8900} {Name:9540823dea8e010 MacAddress:46:d9:c7:ff:0a:d4 Speed:10000 Mtu:8900} {Name:a5a71eafba7fd09 MacAddress:e6:2b:90:2d:6f:e5 Speed:10000 Mtu:8900} {Name:b3076d6176cd94c MacAddress:66:4b:e3:44:ad:8f Speed:10000 Mtu:8900} {Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:br-int MacAddress:9a:ca:04:00:30:20 Speed:0 Mtu:8900} {Name:c1a1f09a0076728 MacAddress:b2:52:2a:38:69:fb Speed:10000 Mtu:8900} {Name:c29f56d4ea9bf3b MacAddress:ba:4e:37:e4:af:a7 Speed:10000 Mtu:8900} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:64:4a:87 Speed:-1 Mtu:9000} {Name:f13b0447f1cf8eb MacAddress:c6:02:43:9c:b6:67 Speed:10000 Mtu:8900} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:80:00:02 Speed:0 Mtu:8900} {Name:ovs-system MacAddress:1e:1b:15:bf:6f:99 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 
Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data 
Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Mar 20 08:35:19.157684 master-0 kubenswrapper[7476]: I0320 08:35:19.157581 7476 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Mar 20 08:35:19.158021 master-0 kubenswrapper[7476]: I0320 08:35:19.157697 7476 manager.go:233] Version: {KernelVersion:5.14.0-427.113.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202603021444-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Mar 20 08:35:19.158153 master-0 kubenswrapper[7476]: I0320 08:35:19.158132 7476 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 20 08:35:19.158445 master-0 kubenswrapper[7476]: I0320 08:35:19.158412 7476 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 20 08:35:19.158800 master-0 kubenswrapper[7476]: I0320 08:35:19.158445 7476 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 20 08:35:19.160997 master-0 kubenswrapper[7476]: I0320 08:35:19.160939 7476 topology_manager.go:138] "Creating topology manager with none policy" Mar 20 08:35:19.161047 master-0 kubenswrapper[7476]: I0320 08:35:19.161007 7476 container_manager_linux.go:303] "Creating device plugin manager" Mar 20 08:35:19.161047 master-0 kubenswrapper[7476]: I0320 08:35:19.161023 7476 manager.go:142] 
"Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 20 08:35:19.161104 master-0 kubenswrapper[7476]: I0320 08:35:19.161061 7476 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 20 08:35:19.161365 master-0 kubenswrapper[7476]: I0320 08:35:19.161349 7476 state_mem.go:36] "Initialized new in-memory state store" Mar 20 08:35:19.161489 master-0 kubenswrapper[7476]: I0320 08:35:19.161469 7476 server.go:1245] "Using root directory" path="/var/lib/kubelet" Mar 20 08:35:19.161575 master-0 kubenswrapper[7476]: I0320 08:35:19.161558 7476 kubelet.go:418] "Attempting to sync node with API server" Mar 20 08:35:19.161612 master-0 kubenswrapper[7476]: I0320 08:35:19.161582 7476 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 20 08:35:19.161612 master-0 kubenswrapper[7476]: I0320 08:35:19.161603 7476 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Mar 20 08:35:19.161659 master-0 kubenswrapper[7476]: I0320 08:35:19.161620 7476 kubelet.go:324] "Adding apiserver pod source" Mar 20 08:35:19.161659 master-0 kubenswrapper[7476]: I0320 08:35:19.161633 7476 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 20 08:35:19.164165 master-0 kubenswrapper[7476]: I0320 08:35:19.164137 7476 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-8.rhaos4.18.gitd78977c.el9" apiVersion="v1" Mar 20 08:35:19.164353 master-0 kubenswrapper[7476]: I0320 08:35:19.164335 7476 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Mar 20 08:35:19.164650 master-0 kubenswrapper[7476]: I0320 08:35:19.164629 7476 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 20 08:35:19.164781 master-0 kubenswrapper[7476]: I0320 08:35:19.164763 7476 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Mar 20 08:35:19.164814 master-0 kubenswrapper[7476]: I0320 08:35:19.164788 7476 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Mar 20 08:35:19.164814 master-0 kubenswrapper[7476]: I0320 08:35:19.164798 7476 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Mar 20 08:35:19.164814 master-0 kubenswrapper[7476]: I0320 08:35:19.164807 7476 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Mar 20 08:35:19.164814 master-0 kubenswrapper[7476]: I0320 08:35:19.164816 7476 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Mar 20 08:35:19.164909 master-0 kubenswrapper[7476]: I0320 08:35:19.164825 7476 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Mar 20 08:35:19.164909 master-0 kubenswrapper[7476]: I0320 08:35:19.164834 7476 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Mar 20 08:35:19.164909 master-0 kubenswrapper[7476]: I0320 08:35:19.164842 7476 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Mar 20 08:35:19.164909 master-0 kubenswrapper[7476]: I0320 08:35:19.164854 7476 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Mar 20 08:35:19.164909 master-0 kubenswrapper[7476]: I0320 08:35:19.164864 7476 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Mar 20 08:35:19.164909 master-0 kubenswrapper[7476]: I0320 08:35:19.164875 7476 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Mar 20 08:35:19.164909 master-0 kubenswrapper[7476]: I0320 08:35:19.164890 7476 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/local-volume" Mar 20 08:35:19.165064 master-0 kubenswrapper[7476]: I0320 08:35:19.164923 7476 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Mar 20 08:35:19.165329 master-0 kubenswrapper[7476]: I0320 08:35:19.165312 7476 server.go:1280] "Started kubelet" Mar 20 08:35:19.165420 master-0 kubenswrapper[7476]: I0320 08:35:19.165381 7476 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 20 08:35:19.165588 master-0 kubenswrapper[7476]: I0320 08:35:19.165456 7476 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 20 08:35:19.165626 master-0 kubenswrapper[7476]: I0320 08:35:19.165614 7476 server_v1.go:47] "podresources" method="list" useActivePods=true Mar 20 08:35:19.166024 master-0 kubenswrapper[7476]: I0320 08:35:19.166003 7476 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 20 08:35:19.166589 master-0 systemd[1]: Started Kubernetes Kubelet. 
Mar 20 08:35:19.166976 master-0 kubenswrapper[7476]: I0320 08:35:19.166949 7476 server.go:449] "Adding debug handlers to kubelet server" Mar 20 08:35:19.170649 master-0 kubenswrapper[7476]: I0320 08:35:19.170623 7476 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Mar 20 08:35:19.170649 master-0 kubenswrapper[7476]: I0320 08:35:19.170649 7476 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 20 08:35:19.170723 master-0 kubenswrapper[7476]: I0320 08:35:19.170691 7476 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-03-21 08:25:28 +0000 UTC, rotation deadline is 2026-03-21 02:10:32.87846818 +0000 UTC Mar 20 08:35:19.170723 master-0 kubenswrapper[7476]: I0320 08:35:19.170717 7476 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 17h35m13.707753354s for next certificate rotation Mar 20 08:35:19.172018 master-0 kubenswrapper[7476]: I0320 08:35:19.171993 7476 volume_manager.go:287] "The desired_state_of_world populator starts" Mar 20 08:35:19.172018 master-0 kubenswrapper[7476]: I0320 08:35:19.172010 7476 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 20 08:35:19.172110 master-0 kubenswrapper[7476]: I0320 08:35:19.172090 7476 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Mar 20 08:35:19.181441 master-0 kubenswrapper[7476]: I0320 08:35:19.181400 7476 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 20 08:35:19.182472 master-0 kubenswrapper[7476]: I0320 08:35:19.182426 7476 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 20 08:35:19.183131 master-0 kubenswrapper[7476]: I0320 08:35:19.183101 7476 factory.go:55] Registering systemd factory Mar 20 08:35:19.185343 master-0 kubenswrapper[7476]: I0320 08:35:19.183128 7476 factory.go:221] Registration of the systemd container factory successfully Mar 20 
08:35:19.187116 master-0 kubenswrapper[7476]: I0320 08:35:19.187087 7476 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 20 08:35:19.189756 master-0 kubenswrapper[7476]: I0320 08:35:19.189726 7476 factory.go:153] Registering CRI-O factory Mar 20 08:35:19.189810 master-0 kubenswrapper[7476]: I0320 08:35:19.189782 7476 factory.go:221] Registration of the crio container factory successfully Mar 20 08:35:19.189902 master-0 kubenswrapper[7476]: I0320 08:35:19.189879 7476 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Mar 20 08:35:19.189943 master-0 kubenswrapper[7476]: I0320 08:35:19.189918 7476 factory.go:103] Registering Raw factory Mar 20 08:35:19.190019 master-0 kubenswrapper[7476]: I0320 08:35:19.189983 7476 manager.go:1196] Started watching for new ooms in manager Mar 20 08:35:19.198402 master-0 kubenswrapper[7476]: I0320 08:35:19.198373 7476 manager.go:319] Starting recovery of all containers Mar 20 08:35:19.201712 master-0 kubenswrapper[7476]: I0320 08:35:19.201673 7476 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210dd7f0-d1c0-407a-b89b-f11ef605e5df" volumeName="kubernetes.io/secret/210dd7f0-d1c0-407a-b89b-f11ef605e5df-ovn-control-plane-metrics-cert" seLinuxMountContext="" Mar 20 08:35:19.201883 master-0 kubenswrapper[7476]: I0320 08:35:19.201855 7476 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d653bfa-7168-49fa-a838-aedb33c7e60f" volumeName="kubernetes.io/projected/9d653bfa-7168-49fa-a838-aedb33c7e60f-kube-api-access-8jmlf" seLinuxMountContext="" Mar 20 08:35:19.202011 master-0 kubenswrapper[7476]: I0320 08:35:19.201998 7476 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="d26a4fce-8eed-44d0-96a3-40ffd0b336a6" volumeName="kubernetes.io/secret/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-ovn-node-metrics-cert" seLinuxMountContext="" Mar 20 08:35:19.202151 master-0 kubenswrapper[7476]: I0320 08:35:19.202134 7476 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="23003a2f-2053-47cc-8133-23eb886d4da0" volumeName="kubernetes.io/projected/23003a2f-2053-47cc-8133-23eb886d4da0-kube-api-access-q7gdm" seLinuxMountContext="" Mar 20 08:35:19.202214 master-0 kubenswrapper[7476]: I0320 08:35:19.202202 7476 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2faf85a2-29bb-4275-a12b-0ef1663a4f0d" volumeName="kubernetes.io/projected/2faf85a2-29bb-4275-a12b-0ef1663a4f0d-kube-api-access" seLinuxMountContext="" Mar 20 08:35:19.202304 master-0 kubenswrapper[7476]: I0320 08:35:19.202291 7476 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71ca96e8-5108-455c-bb3c-17977d38e912" volumeName="kubernetes.io/secret/71ca96e8-5108-455c-bb3c-17977d38e912-serving-cert" seLinuxMountContext="" Mar 20 08:35:19.202373 master-0 kubenswrapper[7476]: I0320 08:35:19.202360 7476 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d653bfa-7168-49fa-a838-aedb33c7e60f" volumeName="kubernetes.io/secret/9d653bfa-7168-49fa-a838-aedb33c7e60f-webhook-cert" seLinuxMountContext="" Mar 20 08:35:19.202486 master-0 kubenswrapper[7476]: I0320 08:35:19.202470 7476 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="acbaba45-12d9-40b9-818c-4b091d7929b1" volumeName="kubernetes.io/projected/acbaba45-12d9-40b9-818c-4b091d7929b1-kube-api-access-kcgqr" seLinuxMountContext="" Mar 20 08:35:19.202556 master-0 kubenswrapper[7476]: I0320 08:35:19.202544 7476 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="b097596e-79e1-44d1-be8a-96340042a041" volumeName="kubernetes.io/projected/b097596e-79e1-44d1-be8a-96340042a041-kube-api-access-dx99f" seLinuxMountContext="" Mar 20 08:35:19.202609 master-0 kubenswrapper[7476]: I0320 08:35:19.202598 7476 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fec3170d-3f3e-42f5-b20a-da53721c0dac" volumeName="kubernetes.io/projected/fec3170d-3f3e-42f5-b20a-da53721c0dac-kube-api-access-tqmzh" seLinuxMountContext="" Mar 20 08:35:19.202668 master-0 kubenswrapper[7476]: I0320 08:35:19.202656 7476 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f202273a-b111-46ce-b404-7e481d2c7ff9" volumeName="kubernetes.io/configmap/f202273a-b111-46ce-b404-7e481d2c7ff9-config" seLinuxMountContext="" Mar 20 08:35:19.202723 master-0 kubenswrapper[7476]: I0320 08:35:19.202712 7476 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ff930f-ec0d-40ed-a879-1546691f685d" volumeName="kubernetes.io/configmap/20ff930f-ec0d-40ed-a879-1546691f685d-config" seLinuxMountContext="" Mar 20 08:35:19.202808 master-0 kubenswrapper[7476]: I0320 08:35:19.202796 7476 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22f85e98-eb36-46b2-ab5d-7c21e060cba5" volumeName="kubernetes.io/projected/22f85e98-eb36-46b2-ab5d-7c21e060cba5-kube-api-access-8qqcw" seLinuxMountContext="" Mar 20 08:35:19.202867 master-0 kubenswrapper[7476]: I0320 08:35:19.202856 7476 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="23003a2f-2053-47cc-8133-23eb886d4da0" volumeName="kubernetes.io/configmap/23003a2f-2053-47cc-8133-23eb886d4da0-marketplace-trusted-ca" seLinuxMountContext="" Mar 20 08:35:19.202925 master-0 kubenswrapper[7476]: I0320 08:35:19.202914 7476 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="57189f7c-5987-457d-a299-0a6b9bcb3e24" volumeName="kubernetes.io/projected/57189f7c-5987-457d-a299-0a6b9bcb3e24-bound-sa-token" seLinuxMountContext="" Mar 20 08:35:19.202979 master-0 kubenswrapper[7476]: I0320 08:35:19.202967 7476 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="65157a9b-3df7-4cc1-a85a-a5dfa59921ad" volumeName="kubernetes.io/configmap/65157a9b-3df7-4cc1-a85a-a5dfa59921ad-config" seLinuxMountContext="" Mar 20 08:35:19.203037 master-0 kubenswrapper[7476]: I0320 08:35:19.203024 7476 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072" volumeName="kubernetes.io/configmap/8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072-config" seLinuxMountContext="" Mar 20 08:35:19.203089 master-0 kubenswrapper[7476]: I0320 08:35:19.203078 7476 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e9425526-9f51-4302-a19d-a8107f56c582" volumeName="kubernetes.io/secret/e9425526-9f51-4302-a19d-a8107f56c582-cluster-olm-operator-serving-cert" seLinuxMountContext="" Mar 20 08:35:19.203228 master-0 kubenswrapper[7476]: I0320 08:35:19.203195 7476 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3065e4b4-4493-41ce-b9d2-89315475f74f" volumeName="kubernetes.io/empty-dir/3065e4b4-4493-41ce-b9d2-89315475f74f-available-featuregates" seLinuxMountContext="" Mar 20 08:35:19.203388 master-0 kubenswrapper[7476]: I0320 08:35:19.203374 7476 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57189f7c-5987-457d-a299-0a6b9bcb3e24" volumeName="kubernetes.io/projected/57189f7c-5987-457d-a299-0a6b9bcb3e24-kube-api-access-5r8zt" seLinuxMountContext="" Mar 20 08:35:19.203527 master-0 kubenswrapper[7476]: I0320 08:35:19.203488 7476 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="fec3170d-3f3e-42f5-b20a-da53721c0dac" volumeName="kubernetes.io/configmap/fec3170d-3f3e-42f5-b20a-da53721c0dac-config" seLinuxMountContext="" Mar 20 08:35:19.203656 master-0 kubenswrapper[7476]: I0320 08:35:19.203638 7476 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210dd7f0-d1c0-407a-b89b-f11ef605e5df" volumeName="kubernetes.io/projected/210dd7f0-d1c0-407a-b89b-f11ef605e5df-kube-api-access-w4sfm" seLinuxMountContext="" Mar 20 08:35:19.203784 master-0 kubenswrapper[7476]: I0320 08:35:19.203772 7476 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22ff82cf-0d7d-4955-9b7c-97757acbc021" volumeName="kubernetes.io/configmap/22ff82cf-0d7d-4955-9b7c-97757acbc021-cni-sysctl-allowlist" seLinuxMountContext="" Mar 20 08:35:19.203845 master-0 kubenswrapper[7476]: I0320 08:35:19.203834 7476 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3065e4b4-4493-41ce-b9d2-89315475f74f" volumeName="kubernetes.io/projected/3065e4b4-4493-41ce-b9d2-89315475f74f-kube-api-access-wpr8b" seLinuxMountContext="" Mar 20 08:35:19.203964 master-0 kubenswrapper[7476]: I0320 08:35:19.203952 7476 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3776fdb6-25a1-4e3d-bdd1-437c69af3a55" volumeName="kubernetes.io/configmap/3776fdb6-25a1-4e3d-bdd1-437c69af3a55-service-ca" seLinuxMountContext="" Mar 20 08:35:19.204081 master-0 kubenswrapper[7476]: I0320 08:35:19.204064 7476 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fec3170d-3f3e-42f5-b20a-da53721c0dac" volumeName="kubernetes.io/secret/fec3170d-3f3e-42f5-b20a-da53721c0dac-etcd-client" seLinuxMountContext="" Mar 20 08:35:19.204470 master-0 kubenswrapper[7476]: I0320 08:35:19.204456 7476 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="ff2dfe9d-2834-43cb-b093-0831b2b87131" volumeName="kubernetes.io/projected/ff2dfe9d-2834-43cb-b093-0831b2b87131-kube-api-access-zsj2w" seLinuxMountContext="" Mar 20 08:35:19.204857 master-0 kubenswrapper[7476]: I0320 08:35:19.204844 7476 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fec3170d-3f3e-42f5-b20a-da53721c0dac" volumeName="kubernetes.io/configmap/fec3170d-3f3e-42f5-b20a-da53721c0dac-etcd-service-ca" seLinuxMountContext="" Mar 20 08:35:19.205063 master-0 kubenswrapper[7476]: I0320 08:35:19.205043 7476 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ff930f-ec0d-40ed-a879-1546691f685d" volumeName="kubernetes.io/secret/20ff930f-ec0d-40ed-a879-1546691f685d-serving-cert" seLinuxMountContext="" Mar 20 08:35:19.205233 master-0 kubenswrapper[7476]: I0320 08:35:19.205214 7476 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210dd7f0-d1c0-407a-b89b-f11ef605e5df" volumeName="kubernetes.io/configmap/210dd7f0-d1c0-407a-b89b-f11ef605e5df-ovnkube-config" seLinuxMountContext="" Mar 20 08:35:19.205367 master-0 kubenswrapper[7476]: I0320 08:35:19.205348 7476 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2ef691ec-d1f0-4c97-97e4-4aa7a6c0a86c" volumeName="kubernetes.io/projected/2ef691ec-d1f0-4c97-97e4-4aa7a6c0a86c-kube-api-access-rgl8m" seLinuxMountContext="" Mar 20 08:35:19.205489 master-0 kubenswrapper[7476]: I0320 08:35:19.205469 7476 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57189f7c-5987-457d-a299-0a6b9bcb3e24" volumeName="kubernetes.io/configmap/57189f7c-5987-457d-a299-0a6b9bcb3e24-trusted-ca" seLinuxMountContext="" Mar 20 08:35:19.205664 master-0 kubenswrapper[7476]: I0320 08:35:19.205653 7476 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6d26f719-43b9-4c1c-9a54-ff800177db68" volumeName="kubernetes.io/configmap/6d26f719-43b9-4c1c-9a54-ff800177db68-trusted-ca" seLinuxMountContext="" Mar 20 08:35:19.205809 master-0 kubenswrapper[7476]: I0320 08:35:19.205780 7476 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7949621e-4da6-4e43-a1f3-2ef303bf6aa6" volumeName="kubernetes.io/configmap/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-cni-binary-copy" seLinuxMountContext="" Mar 20 08:35:19.205911 master-0 kubenswrapper[7476]: I0320 08:35:19.205899 7476 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7949621e-4da6-4e43-a1f3-2ef303bf6aa6" volumeName="kubernetes.io/projected/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-kube-api-access-j5hsj" seLinuxMountContext="" Mar 20 08:35:19.206050 master-0 kubenswrapper[7476]: I0320 08:35:19.206011 7476 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2faf85a2-29bb-4275-a12b-0ef1663a4f0d" volumeName="kubernetes.io/configmap/2faf85a2-29bb-4275-a12b-0ef1663a4f0d-config" seLinuxMountContext="" Mar 20 08:35:19.206157 master-0 kubenswrapper[7476]: I0320 08:35:19.206141 7476 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="74bebf0b-6727-4959-8239-a9389e630524" volumeName="kubernetes.io/projected/74bebf0b-6727-4959-8239-a9389e630524-kube-api-access-f92mb" seLinuxMountContext="" Mar 20 08:35:19.206287 master-0 kubenswrapper[7476]: I0320 08:35:19.206274 7476 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072" volumeName="kubernetes.io/secret/8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072-serving-cert" seLinuxMountContext="" Mar 20 08:35:19.206398 master-0 kubenswrapper[7476]: I0320 08:35:19.206380 7476 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="fec3170d-3f3e-42f5-b20a-da53721c0dac" volumeName="kubernetes.io/configmap/fec3170d-3f3e-42f5-b20a-da53721c0dac-etcd-ca" seLinuxMountContext="" Mar 20 08:35:19.206454 master-0 kubenswrapper[7476]: I0320 08:35:19.206443 7476 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22ff82cf-0d7d-4955-9b7c-97757acbc021" volumeName="kubernetes.io/configmap/22ff82cf-0d7d-4955-9b7c-97757acbc021-cni-binary-copy" seLinuxMountContext="" Mar 20 08:35:19.206615 master-0 kubenswrapper[7476]: I0320 08:35:19.206601 7476 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71ca96e8-5108-455c-bb3c-17977d38e912" volumeName="kubernetes.io/projected/71ca96e8-5108-455c-bb3c-17977d38e912-kube-api-access" seLinuxMountContext="" Mar 20 08:35:19.206727 master-0 kubenswrapper[7476]: I0320 08:35:19.206716 7476 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f202273a-b111-46ce-b404-7e481d2c7ff9" volumeName="kubernetes.io/configmap/f202273a-b111-46ce-b404-7e481d2c7ff9-images" seLinuxMountContext="" Mar 20 08:35:19.206789 master-0 kubenswrapper[7476]: I0320 08:35:19.206777 7476 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0e79950f-50a5-46ec-b836-7a35dcce2851" volumeName="kubernetes.io/projected/0e79950f-50a5-46ec-b836-7a35dcce2851-kube-api-access-rdsv9" seLinuxMountContext="" Mar 20 08:35:19.206944 master-0 kubenswrapper[7476]: I0320 08:35:19.206899 7476 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="61ab4d32-c732-4be5-aa85-a2e1dd21cb60" volumeName="kubernetes.io/projected/61ab4d32-c732-4be5-aa85-a2e1dd21cb60-kube-api-access-lzprw" seLinuxMountContext="" Mar 20 08:35:19.207005 master-0 kubenswrapper[7476]: I0320 08:35:19.206993 7476 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="65157a9b-3df7-4cc1-a85a-a5dfa59921ad" volumeName="kubernetes.io/projected/65157a9b-3df7-4cc1-a85a-a5dfa59921ad-kube-api-access" seLinuxMountContext="" Mar 20 08:35:19.207120 master-0 kubenswrapper[7476]: I0320 08:35:19.207108 7476 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072" volumeName="kubernetes.io/configmap/8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072-trusted-ca-bundle" seLinuxMountContext="" Mar 20 08:35:19.207205 master-0 kubenswrapper[7476]: I0320 08:35:19.207174 7476 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d26a4fce-8eed-44d0-96a3-40ffd0b336a6" volumeName="kubernetes.io/configmap/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-ovnkube-script-lib" seLinuxMountContext="" Mar 20 08:35:19.207337 master-0 kubenswrapper[7476]: I0320 08:35:19.207324 7476 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ff930f-ec0d-40ed-a879-1546691f685d" volumeName="kubernetes.io/projected/20ff930f-ec0d-40ed-a879-1546691f685d-kube-api-access-d5v7l" seLinuxMountContext="" Mar 20 08:35:19.207394 master-0 kubenswrapper[7476]: I0320 08:35:19.207383 7476 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2ef691ec-d1f0-4c97-97e4-4aa7a6c0a86c" volumeName="kubernetes.io/configmap/2ef691ec-d1f0-4c97-97e4-4aa7a6c0a86c-config" seLinuxMountContext="" Mar 20 08:35:19.207445 master-0 kubenswrapper[7476]: I0320 08:35:19.207435 7476 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072" volumeName="kubernetes.io/projected/8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072-kube-api-access-hnk9k" seLinuxMountContext="" Mar 20 08:35:19.207517 master-0 kubenswrapper[7476]: I0320 08:35:19.207506 7476 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="22ff82cf-0d7d-4955-9b7c-97757acbc021" volumeName="kubernetes.io/configmap/22ff82cf-0d7d-4955-9b7c-97757acbc021-whereabouts-flatfile-configmap" seLinuxMountContext="" Mar 20 08:35:19.207635 master-0 kubenswrapper[7476]: I0320 08:35:19.207606 7476 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5707066a-bd66-41bc-8cea-cff1630ab5ee" volumeName="kubernetes.io/projected/5707066a-bd66-41bc-8cea-cff1630ab5ee-kube-api-access-2dkgv" seLinuxMountContext="" Mar 20 08:35:19.207752 master-0 kubenswrapper[7476]: I0320 08:35:19.207741 7476 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="61ab4d32-c732-4be5-aa85-a2e1dd21cb60" volumeName="kubernetes.io/secret/61ab4d32-c732-4be5-aa85-a2e1dd21cb60-serving-cert" seLinuxMountContext="" Mar 20 08:35:19.207813 master-0 kubenswrapper[7476]: I0320 08:35:19.207802 7476 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71ca96e8-5108-455c-bb3c-17977d38e912" volumeName="kubernetes.io/configmap/71ca96e8-5108-455c-bb3c-17977d38e912-config" seLinuxMountContext="" Mar 20 08:35:19.207921 master-0 kubenswrapper[7476]: I0320 08:35:19.207904 7476 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072" volumeName="kubernetes.io/configmap/8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072-service-ca-bundle" seLinuxMountContext="" Mar 20 08:35:19.207978 master-0 kubenswrapper[7476]: I0320 08:35:19.207967 7476 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d653bfa-7168-49fa-a838-aedb33c7e60f" volumeName="kubernetes.io/configmap/9d653bfa-7168-49fa-a838-aedb33c7e60f-env-overrides" seLinuxMountContext="" Mar 20 08:35:19.208094 master-0 kubenswrapper[7476]: I0320 08:35:19.208077 7476 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="09a5682c-4f13-4b8c-8179-3e6dfa8f98db" volumeName="kubernetes.io/projected/09a5682c-4f13-4b8c-8179-3e6dfa8f98db-kube-api-access-8xv94" seLinuxMountContext="" Mar 20 08:35:19.208150 master-0 kubenswrapper[7476]: I0320 08:35:19.208138 7476 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09a5682c-4f13-4b8c-8179-3e6dfa8f98db" volumeName="kubernetes.io/secret/09a5682c-4f13-4b8c-8179-3e6dfa8f98db-serving-cert" seLinuxMountContext="" Mar 20 08:35:19.208206 master-0 kubenswrapper[7476]: I0320 08:35:19.208195 7476 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210dd7f0-d1c0-407a-b89b-f11ef605e5df" volumeName="kubernetes.io/configmap/210dd7f0-d1c0-407a-b89b-f11ef605e5df-env-overrides" seLinuxMountContext="" Mar 20 08:35:19.208334 master-0 kubenswrapper[7476]: I0320 08:35:19.208314 7476 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22f85e98-eb36-46b2-ab5d-7c21e060cba5" volumeName="kubernetes.io/projected/22f85e98-eb36-46b2-ab5d-7c21e060cba5-bound-sa-token" seLinuxMountContext="" Mar 20 08:35:19.208399 master-0 kubenswrapper[7476]: I0320 08:35:19.208388 7476 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2ef691ec-d1f0-4c97-97e4-4aa7a6c0a86c" volumeName="kubernetes.io/secret/2ef691ec-d1f0-4c97-97e4-4aa7a6c0a86c-serving-cert" seLinuxMountContext="" Mar 20 08:35:19.208459 master-0 kubenswrapper[7476]: I0320 08:35:19.208448 7476 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6d26f719-43b9-4c1c-9a54-ff800177db68" volumeName="kubernetes.io/projected/6d26f719-43b9-4c1c-9a54-ff800177db68-kube-api-access-v86j8" seLinuxMountContext="" Mar 20 08:35:19.208570 master-0 kubenswrapper[7476]: I0320 08:35:19.208559 7476 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="9817d1ec-3d7c-49fb-8e41-26f5727ef9e8" volumeName="kubernetes.io/projected/9817d1ec-3d7c-49fb-8e41-26f5727ef9e8-kube-api-access-swxwt" seLinuxMountContext="" Mar 20 08:35:19.208624 master-0 kubenswrapper[7476]: I0320 08:35:19.208614 7476 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b097596e-79e1-44d1-be8a-96340042a041" volumeName="kubernetes.io/configmap/b097596e-79e1-44d1-be8a-96340042a041-iptables-alerter-script" seLinuxMountContext="" Mar 20 08:35:19.208677 master-0 kubenswrapper[7476]: I0320 08:35:19.208666 7476 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d26a4fce-8eed-44d0-96a3-40ffd0b336a6" volumeName="kubernetes.io/configmap/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-env-overrides" seLinuxMountContext="" Mar 20 08:35:19.208728 master-0 kubenswrapper[7476]: I0320 08:35:19.208718 7476 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d26a4fce-8eed-44d0-96a3-40ffd0b336a6" volumeName="kubernetes.io/projected/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-kube-api-access-s2j6m" seLinuxMountContext="" Mar 20 08:35:19.208843 master-0 kubenswrapper[7476]: I0320 08:35:19.208832 7476 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e9425526-9f51-4302-a19d-a8107f56c582" volumeName="kubernetes.io/empty-dir/e9425526-9f51-4302-a19d-a8107f56c582-operand-assets" seLinuxMountContext="" Mar 20 08:35:19.208896 master-0 kubenswrapper[7476]: I0320 08:35:19.208886 7476 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e9425526-9f51-4302-a19d-a8107f56c582" volumeName="kubernetes.io/projected/e9425526-9f51-4302-a19d-a8107f56c582-kube-api-access-z5kbh" seLinuxMountContext="" Mar 20 08:35:19.208946 master-0 kubenswrapper[7476]: I0320 08:35:19.208936 7476 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="22ff82cf-0d7d-4955-9b7c-97757acbc021" volumeName="kubernetes.io/projected/22ff82cf-0d7d-4955-9b7c-97757acbc021-kube-api-access-sglvd" seLinuxMountContext="" Mar 20 08:35:19.209001 master-0 kubenswrapper[7476]: I0320 08:35:19.208990 7476 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2faf85a2-29bb-4275-a12b-0ef1663a4f0d" volumeName="kubernetes.io/secret/2faf85a2-29bb-4275-a12b-0ef1663a4f0d-serving-cert" seLinuxMountContext="" Mar 20 08:35:19.209053 master-0 kubenswrapper[7476]: I0320 08:35:19.209043 7476 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="61ab4d32-c732-4be5-aa85-a2e1dd21cb60" volumeName="kubernetes.io/configmap/61ab4d32-c732-4be5-aa85-a2e1dd21cb60-config" seLinuxMountContext="" Mar 20 08:35:19.209187 master-0 kubenswrapper[7476]: I0320 08:35:19.209175 7476 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="65157a9b-3df7-4cc1-a85a-a5dfa59921ad" volumeName="kubernetes.io/secret/65157a9b-3df7-4cc1-a85a-a5dfa59921ad-serving-cert" seLinuxMountContext="" Mar 20 08:35:19.209245 master-0 kubenswrapper[7476]: I0320 08:35:19.209234 7476 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9817d1ec-3d7c-49fb-8e41-26f5727ef9e8" volumeName="kubernetes.io/secret/9817d1ec-3d7c-49fb-8e41-26f5727ef9e8-metrics-tls" seLinuxMountContext="" Mar 20 08:35:19.209313 master-0 kubenswrapper[7476]: I0320 08:35:19.209300 7476 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d653bfa-7168-49fa-a838-aedb33c7e60f" volumeName="kubernetes.io/configmap/9d653bfa-7168-49fa-a838-aedb33c7e60f-ovnkube-identity-cm" seLinuxMountContext="" Mar 20 08:35:19.209377 master-0 kubenswrapper[7476]: I0320 08:35:19.209366 7476 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="09a5682c-4f13-4b8c-8179-3e6dfa8f98db" volumeName="kubernetes.io/configmap/09a5682c-4f13-4b8c-8179-3e6dfa8f98db-config" seLinuxMountContext="" Mar 20 08:35:19.209436 master-0 kubenswrapper[7476]: I0320 08:35:19.209423 7476 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3776fdb6-25a1-4e3d-bdd1-437c69af3a55" volumeName="kubernetes.io/projected/3776fdb6-25a1-4e3d-bdd1-437c69af3a55-kube-api-access" seLinuxMountContext="" Mar 20 08:35:19.209493 master-0 kubenswrapper[7476]: I0320 08:35:19.209483 7476 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7949621e-4da6-4e43-a1f3-2ef303bf6aa6" volumeName="kubernetes.io/configmap/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-multus-daemon-config" seLinuxMountContext="" Mar 20 08:35:19.209549 master-0 kubenswrapper[7476]: I0320 08:35:19.209538 7476 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7ab32efc-7cc5-4e36-9c1c-05efb19914e2" volumeName="kubernetes.io/projected/7ab32efc-7cc5-4e36-9c1c-05efb19914e2-kube-api-access-55l9j" seLinuxMountContext="" Mar 20 08:35:19.209603 master-0 kubenswrapper[7476]: I0320 08:35:19.209591 7476 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fec3170d-3f3e-42f5-b20a-da53721c0dac" volumeName="kubernetes.io/secret/fec3170d-3f3e-42f5-b20a-da53721c0dac-serving-cert" seLinuxMountContext="" Mar 20 08:35:19.209655 master-0 kubenswrapper[7476]: I0320 08:35:19.209645 7476 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="00350ac7-b40a-4459-b94c-a37d7b613645" volumeName="kubernetes.io/projected/00350ac7-b40a-4459-b94c-a37d7b613645-kube-api-access-b67hn" seLinuxMountContext="" Mar 20 08:35:19.209720 master-0 kubenswrapper[7476]: I0320 08:35:19.209709 7476 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5707066a-bd66-41bc-8cea-cff1630ab5ee" volumeName="kubernetes.io/configmap/5707066a-bd66-41bc-8cea-cff1630ab5ee-telemetry-config" seLinuxMountContext="" Mar 20 08:35:19.209805 master-0 kubenswrapper[7476]: I0320 08:35:19.209762 7476 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9ce482dc-d0ac-40bc-9058-a1cfdc81575e" volumeName="kubernetes.io/projected/9ce482dc-d0ac-40bc-9058-a1cfdc81575e-kube-api-access-9j527" seLinuxMountContext="" Mar 20 08:35:19.209865 master-0 kubenswrapper[7476]: I0320 08:35:19.209855 7476 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f202273a-b111-46ce-b404-7e481d2c7ff9" volumeName="kubernetes.io/projected/f202273a-b111-46ce-b404-7e481d2c7ff9-kube-api-access-56bt6" seLinuxMountContext="" Mar 20 08:35:19.209967 master-0 kubenswrapper[7476]: I0320 08:35:19.209942 7476 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22f85e98-eb36-46b2-ab5d-7c21e060cba5" volumeName="kubernetes.io/configmap/22f85e98-eb36-46b2-ab5d-7c21e060cba5-trusted-ca" seLinuxMountContext="" Mar 20 08:35:19.210023 master-0 kubenswrapper[7476]: I0320 08:35:19.210012 7476 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3065e4b4-4493-41ce-b9d2-89315475f74f" volumeName="kubernetes.io/secret/3065e4b4-4493-41ce-b9d2-89315475f74f-serving-cert" seLinuxMountContext="" Mar 20 08:35:19.210114 master-0 kubenswrapper[7476]: I0320 08:35:19.210103 7476 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d26a4fce-8eed-44d0-96a3-40ffd0b336a6" volumeName="kubernetes.io/configmap/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-ovnkube-config" seLinuxMountContext="" Mar 20 08:35:19.210166 master-0 kubenswrapper[7476]: I0320 08:35:19.210156 7476 reconstruct.go:97] "Volume reconstruction finished" Mar 20 08:35:19.210216 master-0 
kubenswrapper[7476]: I0320 08:35:19.210208 7476 reconciler.go:26] "Reconciler: start to sync state"
Mar 20 08:35:19.213157 master-0 kubenswrapper[7476]: I0320 08:35:19.213031 7476 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Mar 20 08:35:19.233895 master-0 kubenswrapper[7476]: I0320 08:35:19.233843 7476 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 20 08:35:19.235475 master-0 kubenswrapper[7476]: I0320 08:35:19.235454 7476 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 20 08:35:19.235526 master-0 kubenswrapper[7476]: I0320 08:35:19.235498 7476 status_manager.go:217] "Starting to sync pod status with apiserver"
Mar 20 08:35:19.235526 master-0 kubenswrapper[7476]: I0320 08:35:19.235519 7476 kubelet.go:2335] "Starting kubelet main sync loop"
Mar 20 08:35:19.235590 master-0 kubenswrapper[7476]: E0320 08:35:19.235569 7476 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 20 08:35:19.238288 master-0 kubenswrapper[7476]: I0320 08:35:19.238253 7476 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Mar 20 08:35:19.246654 master-0 kubenswrapper[7476]: I0320 08:35:19.246609 7476 generic.go:334] "Generic (PLEG): container finished" podID="22ff82cf-0d7d-4955-9b7c-97757acbc021" containerID="66935bc88a172084ce89ee3474a8817878b895f87e27bbd9f994bbea54a28d58" exitCode=0
Mar 20 08:35:19.246705 master-0 kubenswrapper[7476]: I0320 08:35:19.246655 7476 generic.go:334] "Generic (PLEG): container finished" podID="22ff82cf-0d7d-4955-9b7c-97757acbc021" containerID="1ad464d19cae2361db03cbce68a3a46d3a3a7e57495ff1c59b795128f430f3c3" exitCode=0
Mar 20 08:35:19.246705 master-0 kubenswrapper[7476]: I0320 08:35:19.246676 7476 generic.go:334] "Generic (PLEG): container finished" podID="22ff82cf-0d7d-4955-9b7c-97757acbc021" containerID="633e246d0eb69524c4e825553d8b2a17d7166e97b618f96a41148d7625aa5ed0" exitCode=0
Mar 20 08:35:19.246705 master-0 kubenswrapper[7476]: I0320 08:35:19.246692 7476 generic.go:334] "Generic (PLEG): container finished" podID="22ff82cf-0d7d-4955-9b7c-97757acbc021" containerID="49a024c7c79250dd61c634f6e633e0edd247a3c463686f54208b638a2fd19ebb" exitCode=0
Mar 20 08:35:19.246788 master-0 kubenswrapper[7476]: I0320 08:35:19.246707 7476 generic.go:334] "Generic (PLEG): container finished" podID="22ff82cf-0d7d-4955-9b7c-97757acbc021" containerID="e286a3213c5346d10ff0d6cbc953c4d1baa37806e4134a08a01aa0b21b03e73b" exitCode=0
Mar 20 08:35:19.246788 master-0 kubenswrapper[7476]: I0320 08:35:19.246727 7476 generic.go:334] "Generic (PLEG): container finished" podID="22ff82cf-0d7d-4955-9b7c-97757acbc021" containerID="40ff7a57f1be617cf7f13a7b182aa09a2d94c4736efa61da1185a107268ed08d" exitCode=0
Mar 20 08:35:19.250812 master-0 kubenswrapper[7476]: I0320 08:35:19.250774 7476 generic.go:334] "Generic (PLEG): container finished" podID="31e4700c-9389-427e-95ef-187f80c9e607" containerID="c7aa165c0986788c15e1247a68719a95f704ec935f16e843c43124bc75fd9639" exitCode=0
Mar 20 08:35:19.254799 master-0 kubenswrapper[7476]: I0320 08:35:19.254767 7476 generic.go:334] "Generic (PLEG): container finished" podID="49fac1b46a11e49501805e891baae4a9" containerID="505f9b181b16cda62a4db44fc15ff68ecb1cc5d03f0c388d0ae9c508776f0bb8" exitCode=0
Mar 20 08:35:19.257332 master-0 kubenswrapper[7476]: I0320 08:35:19.257303 7476 generic.go:334] "Generic (PLEG): container finished" podID="2a25b643-c08d-462f-80f4-8a4feb1e26e8" containerID="9c4160ccfce4a1ed7d4a8b39bc1968845b7b8a2ab8792b3e93cfa7765e5fa689" exitCode=0
Mar 20 08:35:19.270360 master-0 kubenswrapper[7476]: I0320 08:35:19.270159 7476 generic.go:334] "Generic (PLEG): container finished" podID="d26a4fce-8eed-44d0-96a3-40ffd0b336a6" containerID="c61822f24caad65a896a136b258da1c07b65503ea37e7992a32f53bc007f40ea" exitCode=0
Mar 20 08:35:19.289909 master-0 kubenswrapper[7476]: I0320 08:35:19.289859 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/2.log"
Mar 20 08:35:19.290371 master-0 kubenswrapper[7476]: I0320 08:35:19.290337 7476 generic.go:334] "Generic (PLEG): container finished" podID="1249822f86f23526277d165c0d5d3c19" containerID="309e4777b97bbe0d7fb41e63077d3bc7d068d36eee7b9e7931a0c0261bdb0bbf" exitCode=1
Mar 20 08:35:19.290421 master-0 kubenswrapper[7476]: I0320 08:35:19.290370 7476 generic.go:334] "Generic (PLEG): container finished" podID="1249822f86f23526277d165c0d5d3c19" containerID="f46237550fb6588ccbb218d4b52be58120b3dd1d98e107a7ca8477306baad5dd" exitCode=0
Mar 20 08:35:19.335723 master-0 kubenswrapper[7476]: E0320 08:35:19.335675 7476 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 20 08:35:19.341975 master-0 kubenswrapper[7476]: I0320 08:35:19.341924 7476 manager.go:324] Recovery completed
Mar 20 08:35:19.386388 master-0 kubenswrapper[7476]: I0320 08:35:19.386355 7476 cpu_manager.go:225] "Starting CPU manager" policy="none"
Mar 20 08:35:19.386388 master-0 kubenswrapper[7476]: I0320 08:35:19.386383 7476 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Mar 20 08:35:19.386526 master-0 kubenswrapper[7476]: I0320 08:35:19.386404 7476 state_mem.go:36] "Initialized new in-memory state store"
Mar 20 08:35:19.386614 master-0 kubenswrapper[7476]: I0320 08:35:19.386594 7476 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 20 08:35:19.386658 master-0 kubenswrapper[7476]: I0320 08:35:19.386613 7476 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 20 08:35:19.386658 master-0 kubenswrapper[7476]: I0320 08:35:19.386641 7476 state_checkpoint.go:136] "State checkpoint: restored state from checkpoint"
Mar 20 08:35:19.386658 master-0 kubenswrapper[7476]: I0320 08:35:19.386649 7476 state_checkpoint.go:137] "State checkpoint: defaultCPUSet" defaultCpuSet=""
Mar 20 08:35:19.386658 master-0 kubenswrapper[7476]: I0320 08:35:19.386657 7476 policy_none.go:49] "None policy: Start"
Mar 20 08:35:19.387725 master-0 kubenswrapper[7476]: I0320 08:35:19.387700 7476 memory_manager.go:170] "Starting memorymanager" policy="None"
Mar 20 08:35:19.387725 master-0 kubenswrapper[7476]: I0320 08:35:19.387726 7476 state_mem.go:35] "Initializing new in-memory state store"
Mar 20 08:35:19.387927 master-0 kubenswrapper[7476]: I0320 08:35:19.387909 7476 state_mem.go:75] "Updated machine memory state"
Mar 20 08:35:19.387976 master-0 kubenswrapper[7476]: I0320 08:35:19.387928 7476 state_checkpoint.go:82] "State checkpoint: restored state from checkpoint"
Mar 20 08:35:19.404247 master-0 kubenswrapper[7476]: I0320 08:35:19.404200 7476 manager.go:334] "Starting Device Plugin manager"
Mar 20 08:35:19.404538 master-0 kubenswrapper[7476]: I0320 08:35:19.404506 7476 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 20 08:35:19.404585 master-0 kubenswrapper[7476]: I0320 08:35:19.404571 7476 server.go:79] "Starting device plugin registration server"
Mar 20 08:35:19.405091 master-0 kubenswrapper[7476]: I0320 08:35:19.405065 7476 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 20 08:35:19.405139 master-0 kubenswrapper[7476]: I0320 08:35:19.405086 7476 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 20 08:35:19.405356 master-0 kubenswrapper[7476]: I0320 08:35:19.405313 7476 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Mar 20 08:35:19.405619 master-0 kubenswrapper[7476]: I0320 08:35:19.405585 7476 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Mar 20 08:35:19.405659 master-0 kubenswrapper[7476]: I0320 08:35:19.405620 7476 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 20 08:35:19.506375 master-0 kubenswrapper[7476]: I0320 08:35:19.506171 7476 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 08:35:19.508910 master-0 kubenswrapper[7476]: I0320 08:35:19.508846 7476 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 20 08:35:19.508910 master-0 kubenswrapper[7476]: I0320 08:35:19.508927 7476 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 20 08:35:19.508910 master-0 kubenswrapper[7476]: I0320 08:35:19.508945 7476 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 20 08:35:19.509461 master-0 kubenswrapper[7476]: I0320 08:35:19.509028 7476 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 20 08:35:19.523366 master-0 kubenswrapper[7476]: I0320 08:35:19.523319 7476 kubelet_node_status.go:115] "Node was previously registered" node="master-0"
Mar 20 08:35:19.523483 master-0 kubenswrapper[7476]: I0320 08:35:19.523422 7476 kubelet_node_status.go:79] "Successfully registered node" node="master-0"
Mar 20 08:35:19.536272 master-0 kubenswrapper[7476]: I0320 08:35:19.536183 7476 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["kube-system/bootstrap-kube-controller-manager-master-0","kube-system/bootstrap-kube-scheduler-master-0","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0-master-0","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"]
Mar 20 08:35:19.536600 master-0 kubenswrapper[7476]: I0320 08:35:19.536577 7476 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e8b4228a54d1b12d0376b112666ad3f67faf746c66c4939b3583b4e7a339cfe"
Mar 20 08:35:19.536668 master-0 kubenswrapper[7476]: I0320 08:35:19.536603 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"49fac1b46a11e49501805e891baae4a9","Type":"ContainerStarted","Data":"21f174bcce18cc2a151013015a2ad5940b94d6576aae52b77f0f5fec8f803898"}
Mar 20 08:35:19.536668 master-0 kubenswrapper[7476]: I0320 08:35:19.536650 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"49fac1b46a11e49501805e891baae4a9","Type":"ContainerStarted","Data":"996fe2d1381bcbb6636ef59ecc1b88b6322ed1e9bf66452efbeee5d66a74da5c"}
Mar 20 08:35:19.536668 master-0 kubenswrapper[7476]: I0320 08:35:19.536660 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"49fac1b46a11e49501805e891baae4a9","Type":"ContainerDied","Data":"505f9b181b16cda62a4db44fc15ff68ecb1cc5d03f0c388d0ae9c508776f0bb8"}
Mar 20 08:35:19.536782 master-0 kubenswrapper[7476]: I0320 08:35:19.536671 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"49fac1b46a11e49501805e891baae4a9","Type":"ContainerStarted","Data":"89c74c8aa017803f478ccd8093ddb6ce42a0913682f0794b7a17848c918f0bd0"}
Mar 20 08:35:19.536782 master-0 kubenswrapper[7476]: I0320 08:35:19.536685 7476 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e65cad05e20b0abbcb49f3fc98be5a4c3f6421a23b1da41d9039f8ff62b3093"
Mar 20 08:35:19.536782 master-0 kubenswrapper[7476]: I0320 08:35:19.536711 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"cfd277b4fa13917f4d0cc04f7d6bdc6ea5d4df628ab0e4b86103cf26da62a23f"}
Mar 20 08:35:19.536782 master-0 kubenswrapper[7476]: I0320 08:35:19.536719 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"9ab09f622201f872e04a1e8a769261f4a46a4d60637dffa9e2a3458905508cd2"}
Mar 20 08:35:19.536782 master-0 kubenswrapper[7476]: I0320 08:35:19.536730 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"1cc2b302ee7f4974624b7ec258eb40f2f3ce6fad71036a03b3d4361e0bca7e50"}
Mar 20 08:35:19.536782 master-0 kubenswrapper[7476]: I0320 08:35:19.536739 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"d664a6d0d2a24360dee10612610f1b59","Type":"ContainerStarted","Data":"bf19448fe2db422f2021f6a9801b4117923acb1b2003982f366081b4de585441"}
Mar 20 08:35:19.536782 master-0 kubenswrapper[7476]: I0320 08:35:19.536749 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"d664a6d0d2a24360dee10612610f1b59","Type":"ContainerStarted","Data":"d7f4830141ed7d49d20e31769c038ca8340ad71b0bddea39298dca3d6416b345"}
Mar 20 08:35:19.536782 master-0 kubenswrapper[7476]: I0320 08:35:19.536758 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"d664a6d0d2a24360dee10612610f1b59","Type":"ContainerStarted","Data":"1ca6b41abdff6af562839f350ede4490e65a1341fc4f1ed50c580d41768ec8c0"}
Mar 20 08:35:19.536782 master-0 kubenswrapper[7476]: I0320 08:35:19.536767 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"c83737980b9ee109184b1d78e942cf36","Type":"ContainerStarted","Data":"f3cf6c6c759bc79e0c49a7c2679b7d5ff1593a53a6783b3355ac6464233ad33d"}
Mar 20 08:35:19.536782 master-0 kubenswrapper[7476]: I0320 08:35:19.536776 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod"
pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"c83737980b9ee109184b1d78e942cf36","Type":"ContainerStarted","Data":"314750eb53635940d2e5e7382cfd93fd0e5f6effe69fa93e88c8c6eaa8362332"} Mar 20 08:35:19.537188 master-0 kubenswrapper[7476]: I0320 08:35:19.536798 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerStarted","Data":"7316fd0f0f8a186ef4fb758bcbe38162f541b908e7728b02280dc9e29c6d0538"} Mar 20 08:35:19.537188 master-0 kubenswrapper[7476]: I0320 08:35:19.536807 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerDied","Data":"309e4777b97bbe0d7fb41e63077d3bc7d068d36eee7b9e7931a0c0261bdb0bbf"} Mar 20 08:35:19.537188 master-0 kubenswrapper[7476]: I0320 08:35:19.536817 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerDied","Data":"f46237550fb6588ccbb218d4b52be58120b3dd1d98e107a7ca8477306baad5dd"} Mar 20 08:35:19.537188 master-0 kubenswrapper[7476]: I0320 08:35:19.536827 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerStarted","Data":"a7302efa9940a01d2a7da47b5029499cf2581b6adb8641d99af963b893a64957"} Mar 20 08:35:19.552193 master-0 kubenswrapper[7476]: E0320 08:35:19.551958 7476 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-apiserver-master-0\" already exists" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 20 08:35:19.553129 master-0 kubenswrapper[7476]: E0320 08:35:19.553094 7476 kubelet.go:1929] "Failed creating a mirror pod for" err="pods 
\"bootstrap-kube-scheduler-master-0\" already exists" pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 20 08:35:19.553253 master-0 kubenswrapper[7476]: W0320 08:35:19.553232 7476 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), hostPort (container "etcd" uses hostPorts 2379, 2380), privileged (containers "etcdctl", "etcd" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers "etcdctl", "etcd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "etcdctl", "etcd" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "certs", "data-dir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "etcdctl", "etcd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "etcdctl", "etcd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Mar 20 08:35:19.553357 master-0 kubenswrapper[7476]: E0320 08:35:19.553312 7476 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0-master-0\" already exists" pod="openshift-etcd/etcd-master-0-master-0" Mar 20 08:35:19.553690 master-0 kubenswrapper[7476]: E0320 08:35:19.553657 7476 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-controller-manager-master-0\" already exists" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 20 08:35:19.554155 master-0 kubenswrapper[7476]: E0320 08:35:19.554100 7476 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 20 08:35:19.614497 master-0 kubenswrapper[7476]: I0320 08:35:19.614437 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: 
\"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 20 08:35:19.614497 master-0 kubenswrapper[7476]: I0320 08:35:19.614500 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 20 08:35:19.614749 master-0 kubenswrapper[7476]: I0320 08:35:19.614530 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 20 08:35:19.614749 master-0 kubenswrapper[7476]: I0320 08:35:19.614549 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 20 08:35:19.614749 master-0 kubenswrapper[7476]: I0320 08:35:19.614565 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 20 08:35:19.614749 master-0 kubenswrapper[7476]: I0320 08:35:19.614579 7476 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 20 08:35:19.614749 master-0 kubenswrapper[7476]: I0320 08:35:19.614592 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-certs\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 20 08:35:19.614749 master-0 kubenswrapper[7476]: I0320 08:35:19.614613 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 20 08:35:19.614749 master-0 kubenswrapper[7476]: I0320 08:35:19.614630 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 20 08:35:19.614749 master-0 kubenswrapper[7476]: I0320 08:35:19.614651 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " 
pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 20 08:35:19.614749 master-0 kubenswrapper[7476]: I0320 08:35:19.614669 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 20 08:35:19.614749 master-0 kubenswrapper[7476]: I0320 08:35:19.614684 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 20 08:35:19.614749 master-0 kubenswrapper[7476]: I0320 08:35:19.614697 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 20 08:35:19.614749 master-0 kubenswrapper[7476]: I0320 08:35:19.614713 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 20 08:35:19.614749 master-0 kubenswrapper[7476]: I0320 08:35:19.614729 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: 
\"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 20 08:35:19.614749 master-0 kubenswrapper[7476]: I0320 08:35:19.614744 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 20 08:35:19.614749 master-0 kubenswrapper[7476]: I0320 08:35:19.614766 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 20 08:35:19.715770 master-0 kubenswrapper[7476]: I0320 08:35:19.715713 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 20 08:35:19.715770 master-0 kubenswrapper[7476]: I0320 08:35:19.715766 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 20 08:35:19.715770 master-0 kubenswrapper[7476]: I0320 08:35:19.715787 
7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 20 08:35:19.716034 master-0 kubenswrapper[7476]: I0320 08:35:19.715802 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 20 08:35:19.716034 master-0 kubenswrapper[7476]: I0320 08:35:19.715819 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 20 08:35:19.716034 master-0 kubenswrapper[7476]: I0320 08:35:19.715838 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 20 08:35:19.716034 master-0 kubenswrapper[7476]: I0320 08:35:19.715854 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-certs\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 20 08:35:19.716034 master-0 kubenswrapper[7476]: I0320 08:35:19.715867 7476 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 20 08:35:19.716034 master-0 kubenswrapper[7476]: I0320 08:35:19.715880 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 20 08:35:19.716034 master-0 kubenswrapper[7476]: I0320 08:35:19.715894 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 20 08:35:19.716034 master-0 kubenswrapper[7476]: I0320 08:35:19.715907 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 20 08:35:19.716034 master-0 kubenswrapper[7476]: I0320 08:35:19.715921 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 20 08:35:19.716034 master-0 
kubenswrapper[7476]: I0320 08:35:19.715936 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 20 08:35:19.716034 master-0 kubenswrapper[7476]: I0320 08:35:19.715956 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 20 08:35:19.716034 master-0 kubenswrapper[7476]: I0320 08:35:19.715974 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 20 08:35:19.716034 master-0 kubenswrapper[7476]: I0320 08:35:19.715989 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 20 08:35:19.716034 master-0 kubenswrapper[7476]: I0320 08:35:19.716004 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " 
pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 20 08:35:19.716526 master-0 kubenswrapper[7476]: I0320 08:35:19.716049 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 20 08:35:19.716526 master-0 kubenswrapper[7476]: I0320 08:35:19.716100 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 20 08:35:19.716526 master-0 kubenswrapper[7476]: I0320 08:35:19.716122 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 20 08:35:19.716526 master-0 kubenswrapper[7476]: I0320 08:35:19.716145 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 20 08:35:19.716526 master-0 kubenswrapper[7476]: I0320 08:35:19.716167 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: 
\"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 20 08:35:19.716526 master-0 kubenswrapper[7476]: I0320 08:35:19.716190 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 20 08:35:19.716526 master-0 kubenswrapper[7476]: I0320 08:35:19.716480 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 20 08:35:19.716693 master-0 kubenswrapper[7476]: I0320 08:35:19.716539 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-certs\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 20 08:35:19.716693 master-0 kubenswrapper[7476]: I0320 08:35:19.716584 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 20 08:35:19.716693 master-0 kubenswrapper[7476]: I0320 08:35:19.716610 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " 
pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 20 08:35:19.716693 master-0 kubenswrapper[7476]: I0320 08:35:19.716656 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 20 08:35:19.716693 master-0 kubenswrapper[7476]: I0320 08:35:19.716676 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 20 08:35:19.716825 master-0 kubenswrapper[7476]: I0320 08:35:19.716731 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 20 08:35:19.716825 master-0 kubenswrapper[7476]: I0320 08:35:19.716733 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 20 08:35:19.716825 master-0 kubenswrapper[7476]: I0320 08:35:19.716751 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " 
pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 20 08:35:19.716825 master-0 kubenswrapper[7476]: I0320 08:35:19.716811 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 20 08:35:19.716825 master-0 kubenswrapper[7476]: I0320 08:35:19.716811 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"46f265536aba6292ead501bc9b49f327\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 20 08:35:19.900179 master-0 kubenswrapper[7476]: I0320 08:35:19.900089 7476 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 20 08:35:20.162900 master-0 kubenswrapper[7476]: I0320 08:35:20.162765 7476 apiserver.go:52] "Watching apiserver" Mar 20 08:35:20.172517 master-0 kubenswrapper[7476]: I0320 08:35:20.172467 7476 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 20 08:35:20.173568 master-0 kubenswrapper[7476]: I0320 08:35:20.173512 7476 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-crrdk","openshift-monitoring/cluster-monitoring-operator-58845fbb57-6vgt6","openshift-multus/multus-additional-cni-plugins-x7vrg","openshift-network-operator/network-operator-7bd846bfc4-x4w25","openshift-etcd/etcd-master-0-master-0","openshift-multus/network-metrics-daemon-nfrth","openshift-ovn-kubernetes/ovnkube-node-bvndl","openshift-network-diagnostics/network-check-target-j9jjm","openshift-network-operator/iptables-alerter-9xlf2","openshift-operator-lifecycle-manager/olm-operator-5c9796789-t926t","openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-w8c24","openshift-image-registry/cluster-image-registry-operator-5549dc66cb-cg8qr","openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-vmwqt","openshift-ingress-operator/ingress-operator-66b84d69b-dknxr","openshift-multus/multus-admission-controller-5dbbb8b86f-vhrdf","openshift-service-ca-operator/service-ca-operator-b865698dc-zxwrd","kube-system/bootstrap-kube-scheduler-master-0","openshift-authentication-operator/authentication-operator-5885bfd7f4-tdpfq","openshift-config-operator/openshift-config-operator-95bf4f4d-25jrp","openshift-dns-operator/dns-operator-9c5679d8f-xfns6","openshift-etcd-operator/etcd-operator-8544cbcf9c-7x9vq","openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-wbfrm","openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-kxxrs","openshift-marketplace/marketplace-operator-89ccd998f-j84r8","assisted-installer/assisted-installer-controller-j6hxl","openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-b5lg6","openshift-cluster-version/cluster-version-operator-56d8475767-jtqd4","openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-cgc9q","openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hdw98","kube-system/bootstrap-kube-controller-manager-master-0","openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-ntdqc","openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-xwkzx","openshift-multus/multus-pxqwj","openshift-network-node-identity/network-node-identity-dq29v","openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zxgdk","openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-gwtrl","openshift-machine-api/cluster-baremetal-operator-6f69995874-b25f2","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"]
Mar 20 08:35:20.173914 master-0 kubenswrapper[7476]: I0320 08:35:20.173860 7476 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-j6hxl"
Mar 20 08:35:20.173963 master-0 kubenswrapper[7476]: I0320 08:35:20.173913 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-b25f2"
Mar 20 08:35:20.173963 master-0 kubenswrapper[7476]: I0320 08:35:20.173913 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-cgc9q"
Mar 20 08:35:20.173963 master-0 kubenswrapper[7476]: I0320 08:35:20.173941 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-66b84d69b-dknxr"
Mar 20 08:35:20.174106 master-0 kubenswrapper[7476]: I0320 08:35:20.174052 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-89ccd998f-j84r8"
Mar 20 08:35:20.176186 master-0 kubenswrapper[7476]: I0320 08:35:20.176145 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-t926t"
Mar 20 08:35:20.177249 master-0 kubenswrapper[7476]: I0320 08:35:20.177197 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Mar 20 08:35:20.177351 master-0 kubenswrapper[7476]: I0320 08:35:20.177296 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-56d8475767-jtqd4"
Mar 20 08:35:20.177456 master-0 kubenswrapper[7476]: I0320 08:35:20.177389 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-6vgt6"
Mar 20 08:35:20.177560 master-0 kubenswrapper[7476]: I0320 08:35:20.177523 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zxgdk"
Mar 20 08:35:20.177626 master-0 kubenswrapper[7476]: I0320 08:35:20.177606 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-9c5679d8f-xfns6"
Mar 20 08:35:20.178028 master-0 kubenswrapper[7476]: I0320 08:35:20.177991 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hdw98"
Mar 20 08:35:20.184384 master-0 kubenswrapper[7476]: I0320 08:35:20.182915 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls"
Mar 20 08:35:20.184384 master-0 kubenswrapper[7476]: I0320 08:35:20.182897 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Mar 20 08:35:20.184384 master-0 kubenswrapper[7476]: I0320 08:35:20.183326 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Mar 20 08:35:20.184384 master-0 kubenswrapper[7476]: I0320 08:35:20.184022 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-cg8qr"
Mar 20 08:35:20.184384 master-0 kubenswrapper[7476]: I0320 08:35:20.184198 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images"
Mar 20 08:35:20.186118 master-0 kubenswrapper[7476]: I0320 08:35:20.186061 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Mar 20 08:35:20.186457 master-0 kubenswrapper[7476]: I0320 08:35:20.186437 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy"
Mar 20 08:35:20.187791 master-0 kubenswrapper[7476]: I0320 08:35:20.186703 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Mar 20 08:35:20.187791 master-0 kubenswrapper[7476]: I0320 08:35:20.187083 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Mar 20 08:35:20.187791 master-0 kubenswrapper[7476]: I0320 08:35:20.187091 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Mar 20 08:35:20.187791 master-0 kubenswrapper[7476]: I0320 08:35:20.187387 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Mar 20 08:35:20.187791 master-0 kubenswrapper[7476]: I0320 08:35:20.187469 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Mar 20 08:35:20.187791 master-0 kubenswrapper[7476]: I0320 08:35:20.187512 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Mar 20 08:35:20.187791 master-0 kubenswrapper[7476]: I0320 08:35:20.187582 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Mar 20 08:35:20.195219 master-0 kubenswrapper[7476]: I0320 08:35:20.195156 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Mar 20 08:35:20.195548 master-0 kubenswrapper[7476]: I0320 08:35:20.195492 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nfrth"
Mar 20 08:35:20.196391 master-0 kubenswrapper[7476]: I0320 08:35:20.196358 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-vhrdf"
Mar 20 08:35:20.198527 master-0 kubenswrapper[7476]: I0320 08:35:20.198491 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-j9jjm"
Mar 20 08:35:20.199171 master-0 kubenswrapper[7476]: I0320 08:35:20.199142 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Mar 20 08:35:20.199434 master-0 kubenswrapper[7476]: I0320 08:35:20.199366 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Mar 20 08:35:20.202194 master-0 kubenswrapper[7476]: I0320 08:35:20.202159 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Mar 20 08:35:20.202281 master-0 kubenswrapper[7476]: I0320 08:35:20.202216 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Mar 20 08:35:20.202281 master-0 kubenswrapper[7476]: I0320 08:35:20.202231 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Mar 20 08:35:20.202411 master-0 kubenswrapper[7476]: I0320 08:35:20.202388 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert"
Mar 20 08:35:20.202459 master-0 kubenswrapper[7476]: I0320 08:35:20.202417 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt"
Mar 20 08:35:20.202666 master-0 kubenswrapper[7476]: I0320 08:35:20.202648 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Mar 20 08:35:20.202739 master-0 kubenswrapper[7476]: I0320 08:35:20.202678 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Mar 20 08:35:20.202791 master-0 kubenswrapper[7476]: I0320 08:35:20.202745 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls"
Mar 20 08:35:20.202791 master-0 kubenswrapper[7476]: I0320 08:35:20.202778 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Mar 20 08:35:20.202864 master-0 kubenswrapper[7476]: I0320 08:35:20.202832 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Mar 20 08:35:20.202925 master-0 kubenswrapper[7476]: I0320 08:35:20.202908 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Mar 20 08:35:20.202998 master-0 kubenswrapper[7476]: I0320 08:35:20.202969 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Mar 20 08:35:20.203082 master-0 kubenswrapper[7476]: I0320 08:35:20.203061 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Mar 20 08:35:20.203082 master-0 kubenswrapper[7476]: I0320 08:35:20.203071 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Mar 20 08:35:20.203186 master-0 kubenswrapper[7476]: I0320 08:35:20.202909 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Mar 20 08:35:20.203223 master-0 kubenswrapper[7476]: I0320 08:35:20.203186 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Mar 20 08:35:20.203319 master-0 kubenswrapper[7476]: I0320 08:35:20.203298 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt"
Mar 20 08:35:20.203369 master-0 kubenswrapper[7476]: I0320 08:35:20.203326 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Mar 20 08:35:20.203430 master-0 kubenswrapper[7476]: I0320 08:35:20.203406 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Mar 20 08:35:20.203430 master-0 kubenswrapper[7476]: I0320 08:35:20.203411 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Mar 20 08:35:20.203508 master-0 kubenswrapper[7476]: I0320 08:35:20.203492 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert"
Mar 20 08:35:20.203556 master-0 kubenswrapper[7476]: I0320 08:35:20.203538 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt"
Mar 20 08:35:20.203668 master-0 kubenswrapper[7476]: I0320 08:35:20.203642 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Mar 20 08:35:20.203722 master-0 kubenswrapper[7476]: I0320 08:35:20.203685 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config"
Mar 20 08:35:20.203760 master-0 kubenswrapper[7476]: I0320 08:35:20.203727 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Mar 20 08:35:20.203800 master-0 kubenswrapper[7476]: I0320 08:35:20.203761 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Mar 20 08:35:20.203800 master-0 kubenswrapper[7476]: I0320 08:35:20.203731 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Mar 20 08:35:20.203871 master-0 kubenswrapper[7476]: I0320 08:35:20.203497 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Mar 20 08:35:20.203871 master-0 kubenswrapper[7476]: I0320 08:35:20.203839 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Mar 20 08:35:20.203871 master-0 kubenswrapper[7476]: I0320 08:35:20.203856 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Mar 20 08:35:20.203978 master-0 kubenswrapper[7476]: I0320 08:35:20.203649 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls"
Mar 20 08:35:20.204018 master-0 kubenswrapper[7476]: I0320 08:35:20.203995 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Mar 20 08:35:20.204101 master-0 kubenswrapper[7476]: I0320 08:35:20.204011 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Mar 20 08:35:20.204173 master-0 kubenswrapper[7476]: I0320 08:35:20.204154 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Mar 20 08:35:20.204173 master-0 kubenswrapper[7476]: I0320 08:35:20.204160 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt"
Mar 20 08:35:20.204273 master-0 kubenswrapper[7476]: I0320 08:35:20.204254 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Mar 20 08:35:20.204583 master-0 kubenswrapper[7476]: I0320 08:35:20.204556 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Mar 20 08:35:20.204726 master-0 kubenswrapper[7476]: I0320 08:35:20.204670 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Mar 20 08:35:20.204959 master-0 kubenswrapper[7476]: I0320 08:35:20.203694 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Mar 20 08:35:20.205123 master-0 kubenswrapper[7476]: I0320 08:35:20.204072 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Mar 20 08:35:20.205123 master-0 kubenswrapper[7476]: I0320 08:35:20.204083 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Mar 20 08:35:20.205123 master-0 kubenswrapper[7476]: I0320 08:35:20.204094 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt"
Mar 20 08:35:20.205123 master-0 kubenswrapper[7476]: I0320 08:35:20.204116 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Mar 20 08:35:20.205481 master-0 kubenswrapper[7476]: I0320 08:35:20.204121 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt"
Mar 20 08:35:20.205481 master-0 kubenswrapper[7476]: I0320 08:35:20.204022 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Mar 20 08:35:20.205481 master-0 kubenswrapper[7476]: I0320 08:35:20.205413 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Mar 20 08:35:20.205711 master-0 kubenswrapper[7476]: I0320 08:35:20.205486 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert"
Mar 20 08:35:20.205711 master-0 kubenswrapper[7476]: I0320 08:35:20.205548 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Mar 20 08:35:20.205711 master-0 kubenswrapper[7476]: I0320 08:35:20.205639 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt"
Mar 20 08:35:20.207176 master-0 kubenswrapper[7476]: I0320 08:35:20.207151 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Mar 20 08:35:20.207333 master-0 kubenswrapper[7476]: I0320 08:35:20.207313 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Mar 20 08:35:20.211091 master-0 kubenswrapper[7476]: I0320 08:35:20.211044 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Mar 20 08:35:20.211091 master-0 kubenswrapper[7476]: I0320 08:35:20.211075 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Mar 20 08:35:20.214372 master-0 kubenswrapper[7476]: I0320 08:35:20.214339 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Mar 20 08:35:20.214896 master-0 kubenswrapper[7476]: I0320 08:35:20.214877 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Mar 20 08:35:20.215426 master-0 kubenswrapper[7476]: I0320 08:35:20.214991 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Mar 20 08:35:20.215426 master-0 kubenswrapper[7476]: I0320 08:35:20.215013 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Mar 20 08:35:20.215426 master-0 kubenswrapper[7476]: I0320 08:35:20.215296 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt"
Mar 20 08:35:20.215426 master-0 kubenswrapper[7476]: I0320 08:35:20.215331 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Mar 20 08:35:20.215426 master-0 kubenswrapper[7476]: I0320 08:35:20.215376 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Mar 20 08:35:20.215602 master-0 kubenswrapper[7476]: I0320 08:35:20.215532 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Mar 20 08:35:20.215602 master-0 kubenswrapper[7476]: I0320 08:35:20.215541 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Mar 20 08:35:20.216234 master-0 kubenswrapper[7476]: I0320 08:35:20.215992 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Mar 20 08:35:20.216727 master-0 kubenswrapper[7476]: I0320 08:35:20.216703 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Mar 20 08:35:20.216795 master-0 kubenswrapper[7476]: I0320 08:35:20.216012 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Mar 20 08:35:20.216795 master-0 kubenswrapper[7476]: I0320 08:35:20.216775 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Mar 20 08:35:20.216907 master-0 kubenswrapper[7476]: I0320 08:35:20.216007 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Mar 20 08:35:20.216943 master-0 kubenswrapper[7476]: I0320 08:35:20.216058 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Mar 20 08:35:20.216969 master-0 kubenswrapper[7476]: I0320 08:35:20.216942 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Mar 20 08:35:20.216998 master-0 kubenswrapper[7476]: I0320 08:35:20.216095 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Mar 20 08:35:20.217054 master-0 kubenswrapper[7476]: I0320 08:35:20.217036 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Mar 20 08:35:20.217117 master-0 kubenswrapper[7476]: I0320 08:35:20.216130 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Mar 20 08:35:20.217152 master-0 kubenswrapper[7476]: I0320 08:35:20.217115 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Mar 20 08:35:20.217152 master-0 kubenswrapper[7476]: I0320 08:35:20.217145 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Mar 20 08:35:20.217240 master-0 kubenswrapper[7476]: I0320 08:35:20.217223 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Mar 20 08:35:20.217240 master-0 kubenswrapper[7476]: I0320 08:35:20.216141 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Mar 20 08:35:20.217439 master-0 kubenswrapper[7476]: I0320 08:35:20.217423 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Mar 20 08:35:20.217532 master-0 kubenswrapper[7476]: I0320 08:35:20.217483 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Mar 20 08:35:20.219531 master-0 kubenswrapper[7476]: I0320 08:35:20.219508 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Mar 20 08:35:20.220788 master-0 kubenswrapper[7476]: I0320 08:35:20.220767 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Mar 20 08:35:20.225924 master-0 kubenswrapper[7476]: I0320 08:35:20.225757 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Mar 20 08:35:20.227997 master-0 kubenswrapper[7476]: I0320 08:35:20.227970 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Mar 20 08:35:20.229692 master-0 kubenswrapper[7476]: I0320 08:35:20.229663 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca"
Mar 20 08:35:20.229760 master-0 kubenswrapper[7476]: I0320 08:35:20.229707 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Mar 20 08:35:20.246412 master-0 kubenswrapper[7476]: I0320 08:35:20.246369 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Mar 20 08:35:20.265018 master-0 kubenswrapper[7476]: I0320 08:35:20.264947 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Mar 20 08:35:20.275322 master-0 kubenswrapper[7476]: I0320 08:35:20.275293 7476 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Mar 20 08:35:20.289561 master-0 kubenswrapper[7476]: I0320 08:35:20.289540 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Mar 20 08:35:20.305234 master-0 kubenswrapper[7476]: I0320 08:35:20.305214 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Mar 20 08:35:20.320466 master-0 kubenswrapper[7476]: I0320 08:35:20.320409 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9ce482dc-d0ac-40bc-9058-a1cfdc81575e-srv-cert\") pod \"catalog-operator-68f85b4d6c-hdw98\" (UID: \"9ce482dc-d0ac-40bc-9058-a1cfdc81575e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hdw98"
Mar 20 08:35:20.320568 master-0 kubenswrapper[7476]: I0320 08:35:20.320479 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/210dd7f0-d1c0-407a-b89b-f11ef605e5df-env-overrides\") pod \"ovnkube-control-plane-57f769d897-crrdk\" (UID: \"210dd7f0-d1c0-407a-b89b-f11ef605e5df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-crrdk"
Mar 20 08:35:20.320568 master-0 kubenswrapper[7476]: I0320 08:35:20.320510 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/9817d1ec-3d7c-49fb-8e41-26f5727ef9e8-host-etc-kube\") pod \"network-operator-7bd846bfc4-x4w25\" (UID: \"9817d1ec-3d7c-49fb-8e41-26f5727ef9e8\") " pod="openshift-network-operator/network-operator-7bd846bfc4-x4w25"
Mar 20 08:35:20.320568 master-0 kubenswrapper[7476]: I0320 08:35:20.320535 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nf5kc\" (UniqueName: \"kubernetes.io/projected/ca6e644f-c53b-41dd-a16f-9fb9997533dd-kube-api-access-nf5kc\") pod \"network-check-target-j9jjm\" (UID: \"ca6e644f-c53b-41dd-a16f-9fb9997533dd\") " pod="openshift-network-diagnostics/network-check-target-j9jjm"
Mar 20 08:35:20.320568 master-0 kubenswrapper[7476]: I0320 08:35:20.320559 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56bt6\" (UniqueName: \"kubernetes.io/projected/f202273a-b111-46ce-b404-7e481d2c7ff9-kube-api-access-56bt6\") pod \"cluster-baremetal-operator-6f69995874-b25f2\" (UID: \"f202273a-b111-46ce-b404-7e481d2c7ff9\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-b25f2"
Mar 20 08:35:20.320687 master-0 kubenswrapper[7476]: I0320 08:35:20.320584 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09a5682c-4f13-4b8c-8179-3e6dfa8f98db-serving-cert\") pod \"service-ca-operator-b865698dc-zxwrd\" (UID: \"09a5682c-4f13-4b8c-8179-3e6dfa8f98db\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-zxwrd"
Mar 20 08:35:20.320687 master-0 kubenswrapper[7476]: I0320 08:35:20.320606 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/22ff82cf-0d7d-4955-9b7c-97757acbc021-system-cni-dir\") pod \"multus-additional-cni-plugins-x7vrg\" (UID: \"22ff82cf-0d7d-4955-9b7c-97757acbc021\") " pod="openshift-multus/multus-additional-cni-plugins-x7vrg"
Mar 20 08:35:20.320687 master-0 kubenswrapper[7476]: I0320 08:35:20.320631 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71ca96e8-5108-455c-bb3c-17977d38e912-config\") pod \"kube-controller-manager-operator-ff989d6cc-wbfrm\" (UID: \"71ca96e8-5108-455c-bb3c-17977d38e912\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-wbfrm"
Mar 20 08:35:20.320687 master-0 kubenswrapper[7476]: I0320 08:35:20.320654 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-host-run-ovn-kubernetes\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl"
Mar 20 08:35:20.320687 master-0 kubenswrapper[7476]: I0320 08:35:20.320675 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdsv9\" (UniqueName: \"kubernetes.io/projected/0e79950f-50a5-46ec-b836-7a35dcce2851-kube-api-access-rdsv9\") pod \"package-server-manager-7b95f86987-cgc9q\" (UID: \"0e79950f-50a5-46ec-b836-7a35dcce2851\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-cgc9q"
Mar 20 08:35:20.320819 master-0 kubenswrapper[7476]: I0320 08:35:20.320696 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/9d653bfa-7168-49fa-a838-aedb33c7e60f-ovnkube-identity-cm\") pod \"network-node-identity-dq29v\" (UID: \"9d653bfa-7168-49fa-a838-aedb33c7e60f\") " pod="openshift-network-node-identity/network-node-identity-dq29v"
Mar 20 08:35:20.320819 master-0 kubenswrapper[7476]: I0320 08:35:20.320716 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-ovn-node-metrics-cert\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl"
Mar 20 08:35:20.320819 master-0 kubenswrapper[7476]: I0320 08:35:20.320782 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzprw\" (UniqueName: \"kubernetes.io/projected/61ab4d32-c732-4be5-aa85-a2e1dd21cb60-kube-api-access-lzprw\") pod \"openshift-controller-manager-operator-8c94f4649-w8c24\" (UID: \"61ab4d32-c732-4be5-aa85-a2e1dd21cb60\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-w8c24"
Mar 20 08:35:20.320819 master-0 kubenswrapper[7476]: I0320 08:35:20.320809 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-host-slash\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl"
Mar 20 08:35:20.320923 master-0 kubenswrapper[7476]: I0320 08:35:20.320829 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2j6m\" (UniqueName: \"kubernetes.io/projected/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-kube-api-access-s2j6m\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl"
Mar 20 08:35:20.320923 master-0 kubenswrapper[7476]: I0320 08:35:20.320850 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/23003a2f-2053-47cc-8133-23eb886d4da0-marketplace-trusted-ca\") pod \"marketplace-operator-89ccd998f-j84r8\" (UID: \"23003a2f-2053-47cc-8133-23eb886d4da0\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-j84r8"
Mar 20 08:35:20.320923 master-0 kubenswrapper[7476]: I0320 08:35:20.320870 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fec3170d-3f3e-42f5-b20a-da53721c0dac-config\") pod \"etcd-operator-8544cbcf9c-7x9vq\" (UID: \"fec3170d-3f3e-42f5-b20a-da53721c0dac\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-7x9vq"
Mar 20 08:35:20.320923 master-0 kubenswrapper[7476]: I0320 08:35:20.320894 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ff2dfe9d-2834-43cb-b093-0831b2b87131-metrics-tls\") pod \"dns-operator-9c5679d8f-xfns6\" (UID: \"ff2dfe9d-2834-43cb-b093-0831b2b87131\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-xfns6"
Mar 20 08:35:20.320923 master-0 kubenswrapper[7476]: I0320 08:35:20.320913 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/22ff82cf-0d7d-4955-9b7c-97757acbc021-cnibin\") pod \"multus-additional-cni-plugins-x7vrg\" (UID: \"22ff82cf-0d7d-4955-9b7c-97757acbc021\") " pod="openshift-multus/multus-additional-cni-plugins-x7vrg"
Mar 20 08:35:20.321041 master-0 kubenswrapper[7476]: I0320 08:35:20.320936 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3776fdb6-25a1-4e3d-bdd1-437c69af3a55-kube-api-access\") pod \"cluster-version-operator-56d8475767-jtqd4\" (UID: \"3776fdb6-25a1-4e3d-bdd1-437c69af3a55\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-jtqd4"
Mar 20 08:35:20.321041 master-0 kubenswrapper[7476]: I0320 08:35:20.320957 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-log-socket\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl"
Mar 20 08:35:20.321041 master-0 kubenswrapper[7476]: I0320 08:35:20.320985 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-etc-openvswitch\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl"
Mar 20 08:35:20.321041 master-0 kubenswrapper[7476]: I0320 08:35:20.321006 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2faf85a2-29bb-4275-a12b-0ef1663a4f0d-config\") pod \"kube-apiserver-operator-8b68b9d9b-xwkzx\" (UID: \"2faf85a2-29bb-4275-a12b-0ef1663a4f0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-xwkzx"
Mar 20 08:35:20.321041 master-0 kubenswrapper[7476]: I0320 08:35:20.321027 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/22ff82cf-0d7d-4955-9b7c-97757acbc021-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-x7vrg\" (UID: \"22ff82cf-0d7d-4955-9b7c-97757acbc021\") " pod="openshift-multus/multus-additional-cni-plugins-x7vrg"
Mar 20 08:35:20.321163 master-0 kubenswrapper[7476]: I0320 08:35:20.321052 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sglvd\" (UniqueName: \"kubernetes.io/projected/22ff82cf-0d7d-4955-9b7c-97757acbc021-kube-api-access-sglvd\") pod \"multus-additional-cni-plugins-x7vrg\" (UID: \"22ff82cf-0d7d-4955-9b7c-97757acbc021\") " pod="openshift-multus/multus-additional-cni-plugins-x7vrg"
Mar 20 08:35:20.321163 master-0 kubenswrapper[7476]: I0320 08:35:20.321072 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fec3170d-3f3e-42f5-b20a-da53721c0dac-etcd-client\") pod \"etcd-operator-8544cbcf9c-7x9vq\" (UID: \"fec3170d-3f3e-42f5-b20a-da53721c0dac\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-7x9vq"
Mar 20 08:35:20.321163 master-0 kubenswrapper[7476]: I0320 08:35:20.321101 7476
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jmlf\" (UniqueName: \"kubernetes.io/projected/9d653bfa-7168-49fa-a838-aedb33c7e60f-kube-api-access-8jmlf\") pod \"network-node-identity-dq29v\" (UID: \"9d653bfa-7168-49fa-a838-aedb33c7e60f\") " pod="openshift-network-node-identity/network-node-identity-dq29v" Mar 20 08:35:20.321163 master-0 kubenswrapper[7476]: I0320 08:35:20.321121 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9d653bfa-7168-49fa-a838-aedb33c7e60f-env-overrides\") pod \"network-node-identity-dq29v\" (UID: \"9d653bfa-7168-49fa-a838-aedb33c7e60f\") " pod="openshift-network-node-identity/network-node-identity-dq29v" Mar 20 08:35:20.321163 master-0 kubenswrapper[7476]: I0320 08:35:20.321144 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3776fdb6-25a1-4e3d-bdd1-437c69af3a55-serving-cert\") pod \"cluster-version-operator-56d8475767-jtqd4\" (UID: \"3776fdb6-25a1-4e3d-bdd1-437c69af3a55\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-jtqd4" Mar 20 08:35:20.321311 master-0 kubenswrapper[7476]: I0320 08:35:20.321166 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-ovnkube-script-lib\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:35:20.321311 master-0 kubenswrapper[7476]: I0320 08:35:20.321188 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/23003a2f-2053-47cc-8133-23eb886d4da0-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-j84r8\" (UID: 
\"23003a2f-2053-47cc-8133-23eb886d4da0\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-j84r8" Mar 20 08:35:20.321311 master-0 kubenswrapper[7476]: I0320 08:35:20.321230 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2faf85a2-29bb-4275-a12b-0ef1663a4f0d-kube-api-access\") pod \"kube-apiserver-operator-8b68b9d9b-xwkzx\" (UID: \"2faf85a2-29bb-4275-a12b-0ef1663a4f0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-xwkzx" Mar 20 08:35:20.321311 master-0 kubenswrapper[7476]: I0320 08:35:20.321252 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/22f85e98-eb36-46b2-ab5d-7c21e060cba5-trusted-ca\") pod \"ingress-operator-66b84d69b-dknxr\" (UID: \"22f85e98-eb36-46b2-ab5d-7c21e060cba5\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-dknxr" Mar 20 08:35:20.321311 master-0 kubenswrapper[7476]: I0320 08:35:20.321302 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/0e79950f-50a5-46ec-b836-7a35dcce2851-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-cgc9q\" (UID: \"0e79950f-50a5-46ec-b836-7a35dcce2851\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-cgc9q" Mar 20 08:35:20.321498 master-0 kubenswrapper[7476]: I0320 08:35:20.321332 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5hsj\" (UniqueName: \"kubernetes.io/projected/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-kube-api-access-j5hsj\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj" Mar 20 08:35:20.321498 master-0 kubenswrapper[7476]: I0320 08:35:20.321352 7476 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/00350ac7-b40a-4459-b94c-a37d7b613645-metrics-certs\") pod \"network-metrics-daemon-nfrth\" (UID: \"00350ac7-b40a-4459-b94c-a37d7b613645\") " pod="openshift-multus/network-metrics-daemon-nfrth" Mar 20 08:35:20.321498 master-0 kubenswrapper[7476]: I0320 08:35:20.321378 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-run-ovn\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:35:20.321498 master-0 kubenswrapper[7476]: I0320 08:35:20.321397 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-multus-socket-dir-parent\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj" Mar 20 08:35:20.321498 master-0 kubenswrapper[7476]: I0320 08:35:20.321419 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9817d1ec-3d7c-49fb-8e41-26f5727ef9e8-metrics-tls\") pod \"network-operator-7bd846bfc4-x4w25\" (UID: \"9817d1ec-3d7c-49fb-8e41-26f5727ef9e8\") " pod="openshift-network-operator/network-operator-7bd846bfc4-x4w25" Mar 20 08:35:20.321498 master-0 kubenswrapper[7476]: I0320 08:35:20.321440 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/b097596e-79e1-44d1-be8a-96340042a041-iptables-alerter-script\") pod \"iptables-alerter-9xlf2\" (UID: \"b097596e-79e1-44d1-be8a-96340042a041\") " pod="openshift-network-operator/iptables-alerter-9xlf2" Mar 20 08:35:20.321498 
master-0 kubenswrapper[7476]: I0320 08:35:20.321464 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/fec3170d-3f3e-42f5-b20a-da53721c0dac-etcd-service-ca\") pod \"etcd-operator-8544cbcf9c-7x9vq\" (UID: \"fec3170d-3f3e-42f5-b20a-da53721c0dac\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-7x9vq" Mar 20 08:35:20.321498 master-0 kubenswrapper[7476]: I0320 08:35:20.321485 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-host-kubelet\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:35:20.321696 master-0 kubenswrapper[7476]: I0320 08:35:20.321509 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:35:20.321696 master-0 kubenswrapper[7476]: I0320 08:35:20.321539 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ef691ec-d1f0-4c97-97e4-4aa7a6c0a86c-config\") pod \"openshift-apiserver-operator-d65958b8-ntdqc\" (UID: \"2ef691ec-d1f0-4c97-97e4-4aa7a6c0a86c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-ntdqc" Mar 20 08:35:20.321696 master-0 kubenswrapper[7476]: I0320 08:35:20.321567 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7ab32efc-7cc5-4e36-9c1c-05efb19914e2-srv-cert\") pod 
\"olm-operator-5c9796789-t926t\" (UID: \"7ab32efc-7cc5-4e36-9c1c-05efb19914e2\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-t926t" Mar 20 08:35:20.321696 master-0 kubenswrapper[7476]: I0320 08:35:20.321592 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71ca96e8-5108-455c-bb3c-17977d38e912-kube-api-access\") pod \"kube-controller-manager-operator-ff989d6cc-wbfrm\" (UID: \"71ca96e8-5108-455c-bb3c-17977d38e912\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-wbfrm" Mar 20 08:35:20.321696 master-0 kubenswrapper[7476]: I0320 08:35:20.321617 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-host-run-multus-certs\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj" Mar 20 08:35:20.321696 master-0 kubenswrapper[7476]: I0320 08:35:20.321641 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65157a9b-3df7-4cc1-a85a-a5dfa59921ad-config\") pod \"openshift-kube-scheduler-operator-dddff6458-vmwqt\" (UID: \"65157a9b-3df7-4cc1-a85a-a5dfa59921ad\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-vmwqt" Mar 20 08:35:20.321696 master-0 kubenswrapper[7476]: I0320 08:35:20.321664 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-host-var-lib-cni-multus\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj" Mar 20 08:35:20.321696 master-0 kubenswrapper[7476]: I0320 08:35:20.321686 7476 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/65157a9b-3df7-4cc1-a85a-a5dfa59921ad-kube-api-access\") pod \"openshift-kube-scheduler-operator-dddff6458-vmwqt\" (UID: \"65157a9b-3df7-4cc1-a85a-a5dfa59921ad\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-vmwqt" Mar 20 08:35:20.321896 master-0 kubenswrapper[7476]: I0320 08:35:20.321716 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f202273a-b111-46ce-b404-7e481d2c7ff9-config\") pod \"cluster-baremetal-operator-6f69995874-b25f2\" (UID: \"f202273a-b111-46ce-b404-7e481d2c7ff9\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-b25f2" Mar 20 08:35:20.321896 master-0 kubenswrapper[7476]: I0320 08:35:20.321737 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/22ff82cf-0d7d-4955-9b7c-97757acbc021-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-x7vrg\" (UID: \"22ff82cf-0d7d-4955-9b7c-97757acbc021\") " pod="openshift-multus/multus-additional-cni-plugins-x7vrg" Mar 20 08:35:20.321896 master-0 kubenswrapper[7476]: I0320 08:35:20.321758 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsj2w\" (UniqueName: \"kubernetes.io/projected/ff2dfe9d-2834-43cb-b093-0831b2b87131-kube-api-access-zsj2w\") pod \"dns-operator-9c5679d8f-xfns6\" (UID: \"ff2dfe9d-2834-43cb-b093-0831b2b87131\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-xfns6" Mar 20 08:35:20.321896 master-0 kubenswrapper[7476]: I0320 08:35:20.321780 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-env-overrides\") pod \"ovnkube-node-bvndl\" (UID: 
\"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:35:20.321896 master-0 kubenswrapper[7476]: I0320 08:35:20.321800 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09a5682c-4f13-4b8c-8179-3e6dfa8f98db-config\") pod \"service-ca-operator-b865698dc-zxwrd\" (UID: \"09a5682c-4f13-4b8c-8179-3e6dfa8f98db\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-zxwrd" Mar 20 08:35:20.321896 master-0 kubenswrapper[7476]: I0320 08:35:20.321821 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/22ff82cf-0d7d-4955-9b7c-97757acbc021-tuning-conf-dir\") pod \"multus-additional-cni-plugins-x7vrg\" (UID: \"22ff82cf-0d7d-4955-9b7c-97757acbc021\") " pod="openshift-multus/multus-additional-cni-plugins-x7vrg" Mar 20 08:35:20.321896 master-0 kubenswrapper[7476]: I0320 08:35:20.321844 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/57189f7c-5987-457d-a299-0a6b9bcb3e24-trusted-ca\") pod \"cluster-image-registry-operator-5549dc66cb-cg8qr\" (UID: \"57189f7c-5987-457d-a299-0a6b9bcb3e24\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-cg8qr" Mar 20 08:35:20.321896 master-0 kubenswrapper[7476]: I0320 08:35:20.321864 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-host-cni-netd\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:35:20.321896 master-0 kubenswrapper[7476]: I0320 08:35:20.321884 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" 
(UniqueName: \"kubernetes.io/configmap/6d26f719-43b9-4c1c-9a54-ff800177db68-trusted-ca\") pod \"cluster-node-tuning-operator-598fbc5f8f-zxgdk\" (UID: \"6d26f719-43b9-4c1c-9a54-ff800177db68\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zxgdk" Mar 20 08:35:20.322131 master-0 kubenswrapper[7476]: I0320 08:35:20.321905 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-node-log\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:35:20.322131 master-0 kubenswrapper[7476]: I0320 08:35:20.321926 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072-config\") pod \"authentication-operator-5885bfd7f4-tdpfq\" (UID: \"8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-tdpfq" Mar 20 08:35:20.322131 master-0 kubenswrapper[7476]: I0320 08:35:20.321946 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/f202273a-b111-46ce-b404-7e481d2c7ff9-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-b25f2\" (UID: \"f202273a-b111-46ce-b404-7e481d2c7ff9\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-b25f2" Mar 20 08:35:20.322131 master-0 kubenswrapper[7476]: I0320 08:35:20.321969 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xv94\" (UniqueName: \"kubernetes.io/projected/09a5682c-4f13-4b8c-8179-3e6dfa8f98db-kube-api-access-8xv94\") pod \"service-ca-operator-b865698dc-zxwrd\" (UID: \"09a5682c-4f13-4b8c-8179-3e6dfa8f98db\") " 
pod="openshift-service-ca-operator/service-ca-operator-b865698dc-zxwrd" Mar 20 08:35:20.322131 master-0 kubenswrapper[7476]: I0320 08:35:20.321989 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9j527\" (UniqueName: \"kubernetes.io/projected/9ce482dc-d0ac-40bc-9058-a1cfdc81575e-kube-api-access-9j527\") pod \"catalog-operator-68f85b4d6c-hdw98\" (UID: \"9ce482dc-d0ac-40bc-9058-a1cfdc81575e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hdw98" Mar 20 08:35:20.322131 master-0 kubenswrapper[7476]: I0320 08:35:20.322010 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5kbh\" (UniqueName: \"kubernetes.io/projected/e9425526-9f51-4302-a19d-a8107f56c582-kube-api-access-z5kbh\") pod \"cluster-olm-operator-67dcd4998-gwtrl\" (UID: \"e9425526-9f51-4302-a19d-a8107f56c582\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-gwtrl" Mar 20 08:35:20.322131 master-0 kubenswrapper[7476]: I0320 08:35:20.322032 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/210dd7f0-d1c0-407a-b89b-f11ef605e5df-ovnkube-config\") pod \"ovnkube-control-plane-57f769d897-crrdk\" (UID: \"210dd7f0-d1c0-407a-b89b-f11ef605e5df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-crrdk" Mar 20 08:35:20.322131 master-0 kubenswrapper[7476]: I0320 08:35:20.322054 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4sfm\" (UniqueName: \"kubernetes.io/projected/210dd7f0-d1c0-407a-b89b-f11ef605e5df-kube-api-access-w4sfm\") pod \"ovnkube-control-plane-57f769d897-crrdk\" (UID: \"210dd7f0-d1c0-407a-b89b-f11ef605e5df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-crrdk" Mar 20 08:35:20.322131 master-0 kubenswrapper[7476]: I0320 08:35:20.322074 7476 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-dx99f\" (UniqueName: \"kubernetes.io/projected/b097596e-79e1-44d1-be8a-96340042a041-kube-api-access-dx99f\") pod \"iptables-alerter-9xlf2\" (UID: \"b097596e-79e1-44d1-be8a-96340042a041\") " pod="openshift-network-operator/iptables-alerter-9xlf2" Mar 20 08:35:20.322131 master-0 kubenswrapper[7476]: I0320 08:35:20.322097 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/22f85e98-eb36-46b2-ab5d-7c21e060cba5-metrics-tls\") pod \"ingress-operator-66b84d69b-dknxr\" (UID: \"22f85e98-eb36-46b2-ab5d-7c21e060cba5\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-dknxr" Mar 20 08:35:20.322131 master-0 kubenswrapper[7476]: I0320 08:35:20.322118 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/57189f7c-5987-457d-a299-0a6b9bcb3e24-bound-sa-token\") pod \"cluster-image-registry-operator-5549dc66cb-cg8qr\" (UID: \"57189f7c-5987-457d-a299-0a6b9bcb3e24\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-cg8qr" Mar 20 08:35:20.322414 master-0 kubenswrapper[7476]: I0320 08:35:20.322141 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f92mb\" (UniqueName: \"kubernetes.io/projected/74bebf0b-6727-4959-8239-a9389e630524-kube-api-access-f92mb\") pod \"multus-admission-controller-5dbbb8b86f-vhrdf\" (UID: \"74bebf0b-6727-4959-8239-a9389e630524\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-vhrdf" Mar 20 08:35:20.322414 master-0 kubenswrapper[7476]: I0320 08:35:20.322163 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072-trusted-ca-bundle\") pod \"authentication-operator-5885bfd7f4-tdpfq\" (UID: 
\"8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-tdpfq" Mar 20 08:35:20.322414 master-0 kubenswrapper[7476]: I0320 08:35:20.322189 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5r8zt\" (UniqueName: \"kubernetes.io/projected/57189f7c-5987-457d-a299-0a6b9bcb3e24-kube-api-access-5r8zt\") pod \"cluster-image-registry-operator-5549dc66cb-cg8qr\" (UID: \"57189f7c-5987-457d-a299-0a6b9bcb3e24\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-cg8qr" Mar 20 08:35:20.322414 master-0 kubenswrapper[7476]: I0320 08:35:20.322211 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dkgv\" (UniqueName: \"kubernetes.io/projected/5707066a-bd66-41bc-8cea-cff1630ab5ee-kube-api-access-2dkgv\") pod \"cluster-monitoring-operator-58845fbb57-6vgt6\" (UID: \"5707066a-bd66-41bc-8cea-cff1630ab5ee\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-6vgt6" Mar 20 08:35:20.322414 master-0 kubenswrapper[7476]: I0320 08:35:20.322233 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b67hn\" (UniqueName: \"kubernetes.io/projected/00350ac7-b40a-4459-b94c-a37d7b613645-kube-api-access-b67hn\") pod \"network-metrics-daemon-nfrth\" (UID: \"00350ac7-b40a-4459-b94c-a37d7b613645\") " pod="openshift-multus/network-metrics-daemon-nfrth" Mar 20 08:35:20.322414 master-0 kubenswrapper[7476]: I0320 08:35:20.322254 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/210dd7f0-d1c0-407a-b89b-f11ef605e5df-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57f769d897-crrdk\" (UID: \"210dd7f0-d1c0-407a-b89b-f11ef605e5df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-crrdk" Mar 20 08:35:20.322414 master-0 kubenswrapper[7476]: I0320 
08:35:20.322308 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072-service-ca-bundle\") pod \"authentication-operator-5885bfd7f4-tdpfq\" (UID: \"8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-tdpfq" Mar 20 08:35:20.322414 master-0 kubenswrapper[7476]: I0320 08:35:20.322328 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072-serving-cert\") pod \"authentication-operator-5885bfd7f4-tdpfq\" (UID: \"8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-tdpfq" Mar 20 08:35:20.322414 master-0 kubenswrapper[7476]: I0320 08:35:20.322351 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/22ff82cf-0d7d-4955-9b7c-97757acbc021-os-release\") pod \"multus-additional-cni-plugins-x7vrg\" (UID: \"22ff82cf-0d7d-4955-9b7c-97757acbc021\") " pod="openshift-multus/multus-additional-cni-plugins-x7vrg" Mar 20 08:35:20.322414 master-0 kubenswrapper[7476]: I0320 08:35:20.322372 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/fec3170d-3f3e-42f5-b20a-da53721c0dac-etcd-ca\") pod \"etcd-operator-8544cbcf9c-7x9vq\" (UID: \"fec3170d-3f3e-42f5-b20a-da53721c0dac\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-7x9vq" Mar 20 08:35:20.322414 master-0 kubenswrapper[7476]: I0320 08:35:20.322393 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v86j8\" (UniqueName: \"kubernetes.io/projected/6d26f719-43b9-4c1c-9a54-ff800177db68-kube-api-access-v86j8\") pod 
\"cluster-node-tuning-operator-598fbc5f8f-zxgdk\" (UID: \"6d26f719-43b9-4c1c-9a54-ff800177db68\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zxgdk" Mar 20 08:35:20.322414 master-0 kubenswrapper[7476]: I0320 08:35:20.322416 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b097596e-79e1-44d1-be8a-96340042a041-host-slash\") pod \"iptables-alerter-9xlf2\" (UID: \"b097596e-79e1-44d1-be8a-96340042a041\") " pod="openshift-network-operator/iptables-alerter-9xlf2" Mar 20 08:35:20.322760 master-0 kubenswrapper[7476]: I0320 08:35:20.322438 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/22f85e98-eb36-46b2-ab5d-7c21e060cba5-bound-sa-token\") pod \"ingress-operator-66b84d69b-dknxr\" (UID: \"22f85e98-eb36-46b2-ab5d-7c21e060cba5\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-dknxr" Mar 20 08:35:20.322760 master-0 kubenswrapper[7476]: I0320 08:35:20.322459 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/74bebf0b-6727-4959-8239-a9389e630524-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-vhrdf\" (UID: \"74bebf0b-6727-4959-8239-a9389e630524\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-vhrdf" Mar 20 08:35:20.322760 master-0 kubenswrapper[7476]: I0320 08:35:20.322480 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3776fdb6-25a1-4e3d-bdd1-437c69af3a55-etc-cvo-updatepayloads\") pod \"cluster-version-operator-56d8475767-jtqd4\" (UID: \"3776fdb6-25a1-4e3d-bdd1-437c69af3a55\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-jtqd4" Mar 20 08:35:20.322760 master-0 
kubenswrapper[7476]: I0320 08:35:20.322502 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-run-openvswitch\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:35:20.322760 master-0 kubenswrapper[7476]: I0320 08:35:20.322527 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-cni-binary-copy\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj" Mar 20 08:35:20.322760 master-0 kubenswrapper[7476]: I0320 08:35:20.322547 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-systemd-units\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:35:20.322760 master-0 kubenswrapper[7476]: I0320 08:35:20.322568 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-run-systemd\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:35:20.322760 master-0 kubenswrapper[7476]: I0320 08:35:20.322587 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-etc-kubernetes\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj" Mar 20 
08:35:20.322760 master-0 kubenswrapper[7476]: I0320 08:35:20.322608 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqmzh\" (UniqueName: \"kubernetes.io/projected/fec3170d-3f3e-42f5-b20a-da53721c0dac-kube-api-access-tqmzh\") pod \"etcd-operator-8544cbcf9c-7x9vq\" (UID: \"fec3170d-3f3e-42f5-b20a-da53721c0dac\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-7x9vq" Mar 20 08:35:20.322760 master-0 kubenswrapper[7476]: I0320 08:35:20.322630 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcgqr\" (UniqueName: \"kubernetes.io/projected/acbaba45-12d9-40b9-818c-4b091d7929b1-kube-api-access-kcgqr\") pod \"csi-snapshot-controller-operator-5f5d689c6b-b5lg6\" (UID: \"acbaba45-12d9-40b9-818c-4b091d7929b1\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-b5lg6" Mar 20 08:35:20.322760 master-0 kubenswrapper[7476]: I0320 08:35:20.322650 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71ca96e8-5108-455c-bb3c-17977d38e912-serving-cert\") pod \"kube-controller-manager-operator-ff989d6cc-wbfrm\" (UID: \"71ca96e8-5108-455c-bb3c-17977d38e912\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-wbfrm" Mar 20 08:35:20.322760 master-0 kubenswrapper[7476]: I0320 08:35:20.322670 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-host-var-lib-cni-bin\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj" Mar 20 08:35:20.322760 master-0 kubenswrapper[7476]: I0320 08:35:20.322692 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55l9j\" (UniqueName: 
\"kubernetes.io/projected/7ab32efc-7cc5-4e36-9c1c-05efb19914e2-kube-api-access-55l9j\") pod \"olm-operator-5c9796789-t926t\" (UID: \"7ab32efc-7cc5-4e36-9c1c-05efb19914e2\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-t926t" Mar 20 08:35:20.322760 master-0 kubenswrapper[7476]: I0320 08:35:20.322714 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61ab4d32-c732-4be5-aa85-a2e1dd21cb60-config\") pod \"openshift-controller-manager-operator-8c94f4649-w8c24\" (UID: \"61ab4d32-c732-4be5-aa85-a2e1dd21cb60\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-w8c24" Mar 20 08:35:20.322760 master-0 kubenswrapper[7476]: I0320 08:35:20.322736 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-ovnkube-config\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:35:20.322760 master-0 kubenswrapper[7476]: I0320 08:35:20.322756 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65157a9b-3df7-4cc1-a85a-a5dfa59921ad-serving-cert\") pod \"openshift-kube-scheduler-operator-dddff6458-vmwqt\" (UID: \"65157a9b-3df7-4cc1-a85a-a5dfa59921ad\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-vmwqt" Mar 20 08:35:20.323150 master-0 kubenswrapper[7476]: I0320 08:35:20.322778 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2faf85a2-29bb-4275-a12b-0ef1663a4f0d-serving-cert\") pod \"kube-apiserver-operator-8b68b9d9b-xwkzx\" (UID: \"2faf85a2-29bb-4275-a12b-0ef1663a4f0d\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-xwkzx" Mar 20 08:35:20.323150 master-0 kubenswrapper[7476]: I0320 08:35:20.322800 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpr8b\" (UniqueName: \"kubernetes.io/projected/3065e4b4-4493-41ce-b9d2-89315475f74f-kube-api-access-wpr8b\") pod \"openshift-config-operator-95bf4f4d-25jrp\" (UID: \"3065e4b4-4493-41ce-b9d2-89315475f74f\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-25jrp" Mar 20 08:35:20.323150 master-0 kubenswrapper[7476]: I0320 08:35:20.322822 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgl8m\" (UniqueName: \"kubernetes.io/projected/2ef691ec-d1f0-4c97-97e4-4aa7a6c0a86c-kube-api-access-rgl8m\") pod \"openshift-apiserver-operator-d65958b8-ntdqc\" (UID: \"2ef691ec-d1f0-4c97-97e4-4aa7a6c0a86c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-ntdqc" Mar 20 08:35:20.323150 master-0 kubenswrapper[7476]: I0320 08:35:20.322843 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-hostroot\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj" Mar 20 08:35:20.323150 master-0 kubenswrapper[7476]: I0320 08:35:20.322864 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-multus-daemon-config\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj" Mar 20 08:35:20.323150 master-0 kubenswrapper[7476]: I0320 08:35:20.322886 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-os-release\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj" Mar 20 08:35:20.323150 master-0 kubenswrapper[7476]: I0320 08:35:20.322908 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6d26f719-43b9-4c1c-9a54-ff800177db68-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-zxgdk\" (UID: \"6d26f719-43b9-4c1c-9a54-ff800177db68\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zxgdk" Mar 20 08:35:20.323150 master-0 kubenswrapper[7476]: I0320 08:35:20.322932 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/3065e4b4-4493-41ce-b9d2-89315475f74f-available-featuregates\") pod \"openshift-config-operator-95bf4f4d-25jrp\" (UID: \"3065e4b4-4493-41ce-b9d2-89315475f74f\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-25jrp" Mar 20 08:35:20.323150 master-0 kubenswrapper[7476]: I0320 08:35:20.322953 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/22ff82cf-0d7d-4955-9b7c-97757acbc021-cni-binary-copy\") pod \"multus-additional-cni-plugins-x7vrg\" (UID: \"22ff82cf-0d7d-4955-9b7c-97757acbc021\") " pod="openshift-multus/multus-additional-cni-plugins-x7vrg" Mar 20 08:35:20.323150 master-0 kubenswrapper[7476]: I0320 08:35:20.322974 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61ab4d32-c732-4be5-aa85-a2e1dd21cb60-serving-cert\") pod \"openshift-controller-manager-operator-8c94f4649-w8c24\" (UID: \"61ab4d32-c732-4be5-aa85-a2e1dd21cb60\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-w8c24" Mar 20 08:35:20.323150 master-0 kubenswrapper[7476]: I0320 08:35:20.322995 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3776fdb6-25a1-4e3d-bdd1-437c69af3a55-etc-ssl-certs\") pod \"cluster-version-operator-56d8475767-jtqd4\" (UID: \"3776fdb6-25a1-4e3d-bdd1-437c69af3a55\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-jtqd4" Mar 20 08:35:20.323150 master-0 kubenswrapper[7476]: I0320 08:35:20.323017 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3776fdb6-25a1-4e3d-bdd1-437c69af3a55-service-ca\") pod \"cluster-version-operator-56d8475767-jtqd4\" (UID: \"3776fdb6-25a1-4e3d-bdd1-437c69af3a55\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-jtqd4" Mar 20 08:35:20.323150 master-0 kubenswrapper[7476]: I0320 08:35:20.323037 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/57189f7c-5987-457d-a299-0a6b9bcb3e24-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-cg8qr\" (UID: \"57189f7c-5987-457d-a299-0a6b9bcb3e24\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-cg8qr" Mar 20 08:35:20.323150 master-0 kubenswrapper[7476]: I0320 08:35:20.323059 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20ff930f-ec0d-40ed-a879-1546691f685d-config\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-kxxrs\" (UID: \"20ff930f-ec0d-40ed-a879-1546691f685d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-kxxrs" Mar 20 08:35:20.323150 
master-0 kubenswrapper[7476]: I0320 08:35:20.323078 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-host-var-lib-kubelet\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj" Mar 20 08:35:20.323150 master-0 kubenswrapper[7476]: I0320 08:35:20.323098 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-host-cni-bin\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:35:20.323150 master-0 kubenswrapper[7476]: I0320 08:35:20.323117 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-host-run-netns\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj" Mar 20 08:35:20.323150 master-0 kubenswrapper[7476]: I0320 08:35:20.323137 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-multus-conf-dir\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj" Mar 20 08:35:20.323150 master-0 kubenswrapper[7476]: I0320 08:35:20.323161 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f202273a-b111-46ce-b404-7e481d2c7ff9-images\") pod \"cluster-baremetal-operator-6f69995874-b25f2\" (UID: \"f202273a-b111-46ce-b404-7e481d2c7ff9\") " 
pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-b25f2" Mar 20 08:35:20.323633 master-0 kubenswrapper[7476]: I0320 08:35:20.323185 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7gdm\" (UniqueName: \"kubernetes.io/projected/23003a2f-2053-47cc-8133-23eb886d4da0-kube-api-access-q7gdm\") pod \"marketplace-operator-89ccd998f-j84r8\" (UID: \"23003a2f-2053-47cc-8133-23eb886d4da0\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-j84r8" Mar 20 08:35:20.323633 master-0 kubenswrapper[7476]: I0320 08:35:20.323207 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/5707066a-bd66-41bc-8cea-cff1630ab5ee-telemetry-config\") pod \"cluster-monitoring-operator-58845fbb57-6vgt6\" (UID: \"5707066a-bd66-41bc-8cea-cff1630ab5ee\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-6vgt6" Mar 20 08:35:20.323633 master-0 kubenswrapper[7476]: I0320 08:35:20.323227 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20ff930f-ec0d-40ed-a879-1546691f685d-serving-cert\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-kxxrs\" (UID: \"20ff930f-ec0d-40ed-a879-1546691f685d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-kxxrs" Mar 20 08:35:20.323633 master-0 kubenswrapper[7476]: I0320 08:35:20.323250 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-var-lib-openvswitch\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:35:20.323633 master-0 kubenswrapper[7476]: I0320 08:35:20.323288 7476 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f202273a-b111-46ce-b404-7e481d2c7ff9-cert\") pod \"cluster-baremetal-operator-6f69995874-b25f2\" (UID: \"f202273a-b111-46ce-b404-7e481d2c7ff9\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-b25f2" Mar 20 08:35:20.323633 master-0 kubenswrapper[7476]: I0320 08:35:20.323312 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/e9425526-9f51-4302-a19d-a8107f56c582-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-67dcd4998-gwtrl\" (UID: \"e9425526-9f51-4302-a19d-a8107f56c582\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-gwtrl" Mar 20 08:35:20.323633 master-0 kubenswrapper[7476]: I0320 08:35:20.323333 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-cnibin\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj" Mar 20 08:35:20.323633 master-0 kubenswrapper[7476]: I0320 08:35:20.323353 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-host-run-k8s-cni-cncf-io\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj" Mar 20 08:35:20.323633 master-0 kubenswrapper[7476]: I0320 08:35:20.323371 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-system-cni-dir\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj" Mar 
20 08:35:20.323633 master-0 kubenswrapper[7476]: I0320 08:35:20.323392 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9d653bfa-7168-49fa-a838-aedb33c7e60f-webhook-cert\") pod \"network-node-identity-dq29v\" (UID: \"9d653bfa-7168-49fa-a838-aedb33c7e60f\") " pod="openshift-network-node-identity/network-node-identity-dq29v" Mar 20 08:35:20.323633 master-0 kubenswrapper[7476]: I0320 08:35:20.323414 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/e9425526-9f51-4302-a19d-a8107f56c582-operand-assets\") pod \"cluster-olm-operator-67dcd4998-gwtrl\" (UID: \"e9425526-9f51-4302-a19d-a8107f56c582\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-gwtrl" Mar 20 08:35:20.323633 master-0 kubenswrapper[7476]: I0320 08:35:20.323436 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5v7l\" (UniqueName: \"kubernetes.io/projected/20ff930f-ec0d-40ed-a879-1546691f685d-kube-api-access-d5v7l\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-kxxrs\" (UID: \"20ff930f-ec0d-40ed-a879-1546691f685d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-kxxrs" Mar 20 08:35:20.323633 master-0 kubenswrapper[7476]: I0320 08:35:20.323457 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-host-run-netns\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:35:20.323633 master-0 kubenswrapper[7476]: I0320 08:35:20.323479 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swxwt\" (UniqueName: 
\"kubernetes.io/projected/9817d1ec-3d7c-49fb-8e41-26f5727ef9e8-kube-api-access-swxwt\") pod \"network-operator-7bd846bfc4-x4w25\" (UID: \"9817d1ec-3d7c-49fb-8e41-26f5727ef9e8\") " pod="openshift-network-operator/network-operator-7bd846bfc4-x4w25" Mar 20 08:35:20.323633 master-0 kubenswrapper[7476]: I0320 08:35:20.323503 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fec3170d-3f3e-42f5-b20a-da53721c0dac-serving-cert\") pod \"etcd-operator-8544cbcf9c-7x9vq\" (UID: \"fec3170d-3f3e-42f5-b20a-da53721c0dac\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-7x9vq" Mar 20 08:35:20.323633 master-0 kubenswrapper[7476]: I0320 08:35:20.323526 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/5707066a-bd66-41bc-8cea-cff1630ab5ee-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-6vgt6\" (UID: \"5707066a-bd66-41bc-8cea-cff1630ab5ee\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-6vgt6" Mar 20 08:35:20.323633 master-0 kubenswrapper[7476]: I0320 08:35:20.323549 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnk9k\" (UniqueName: \"kubernetes.io/projected/8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072-kube-api-access-hnk9k\") pod \"authentication-operator-5885bfd7f4-tdpfq\" (UID: \"8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-tdpfq" Mar 20 08:35:20.323633 master-0 kubenswrapper[7476]: I0320 08:35:20.323569 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/6d26f719-43b9-4c1c-9a54-ff800177db68-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-zxgdk\" (UID: 
\"6d26f719-43b9-4c1c-9a54-ff800177db68\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zxgdk" Mar 20 08:35:20.323633 master-0 kubenswrapper[7476]: I0320 08:35:20.323603 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3065e4b4-4493-41ce-b9d2-89315475f74f-serving-cert\") pod \"openshift-config-operator-95bf4f4d-25jrp\" (UID: \"3065e4b4-4493-41ce-b9d2-89315475f74f\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-25jrp" Mar 20 08:35:20.324098 master-0 kubenswrapper[7476]: I0320 08:35:20.323648 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ef691ec-d1f0-4c97-97e4-4aa7a6c0a86c-serving-cert\") pod \"openshift-apiserver-operator-d65958b8-ntdqc\" (UID: \"2ef691ec-d1f0-4c97-97e4-4aa7a6c0a86c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-ntdqc" Mar 20 08:35:20.324098 master-0 kubenswrapper[7476]: I0320 08:35:20.323672 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qqcw\" (UniqueName: \"kubernetes.io/projected/22f85e98-eb36-46b2-ab5d-7c21e060cba5-kube-api-access-8qqcw\") pod \"ingress-operator-66b84d69b-dknxr\" (UID: \"22f85e98-eb36-46b2-ab5d-7c21e060cba5\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-dknxr" Mar 20 08:35:20.324098 master-0 kubenswrapper[7476]: I0320 08:35:20.323694 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-multus-cni-dir\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj" Mar 20 08:35:20.324175 master-0 kubenswrapper[7476]: I0320 08:35:20.324153 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"env-overrides\" (UniqueName: \"kubernetes.io/configmap/210dd7f0-d1c0-407a-b89b-f11ef605e5df-env-overrides\") pod \"ovnkube-control-plane-57f769d897-crrdk\" (UID: \"210dd7f0-d1c0-407a-b89b-f11ef605e5df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-crrdk" Mar 20 08:35:20.324628 master-0 kubenswrapper[7476]: I0320 08:35:20.324597 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09a5682c-4f13-4b8c-8179-3e6dfa8f98db-serving-cert\") pod \"service-ca-operator-b865698dc-zxwrd\" (UID: \"09a5682c-4f13-4b8c-8179-3e6dfa8f98db\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-zxwrd" Mar 20 08:35:20.324815 master-0 kubenswrapper[7476]: I0320 08:35:20.324789 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71ca96e8-5108-455c-bb3c-17977d38e912-config\") pod \"kube-controller-manager-operator-ff989d6cc-wbfrm\" (UID: \"71ca96e8-5108-455c-bb3c-17977d38e912\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-wbfrm" Mar 20 08:35:20.325228 master-0 kubenswrapper[7476]: I0320 08:35:20.325200 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/9d653bfa-7168-49fa-a838-aedb33c7e60f-ovnkube-identity-cm\") pod \"network-node-identity-dq29v\" (UID: \"9d653bfa-7168-49fa-a838-aedb33c7e60f\") " pod="openshift-network-node-identity/network-node-identity-dq29v" Mar 20 08:35:20.325849 master-0 kubenswrapper[7476]: I0320 08:35:20.325820 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072-service-ca-bundle\") pod \"authentication-operator-5885bfd7f4-tdpfq\" (UID: \"8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072\") " 
pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-tdpfq" Mar 20 08:35:20.326009 master-0 kubenswrapper[7476]: I0320 08:35:20.325982 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20ff930f-ec0d-40ed-a879-1546691f685d-config\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-kxxrs\" (UID: \"20ff930f-ec0d-40ed-a879-1546691f685d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-kxxrs" Mar 20 08:35:20.326069 master-0 kubenswrapper[7476]: I0320 08:35:20.326047 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3776fdb6-25a1-4e3d-bdd1-437c69af3a55-service-ca\") pod \"cluster-version-operator-56d8475767-jtqd4\" (UID: \"3776fdb6-25a1-4e3d-bdd1-437c69af3a55\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-jtqd4" Mar 20 08:35:20.326178 master-0 kubenswrapper[7476]: I0320 08:35:20.326152 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f202273a-b111-46ce-b404-7e481d2c7ff9-images\") pod \"cluster-baremetal-operator-6f69995874-b25f2\" (UID: \"f202273a-b111-46ce-b404-7e481d2c7ff9\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-b25f2" Mar 20 08:35:20.326213 master-0 kubenswrapper[7476]: I0320 08:35:20.326193 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/e9425526-9f51-4302-a19d-a8107f56c582-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-67dcd4998-gwtrl\" (UID: \"e9425526-9f51-4302-a19d-a8107f56c582\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-gwtrl" Mar 20 08:35:20.326278 master-0 kubenswrapper[7476]: I0320 08:35:20.326240 7476 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71ca96e8-5108-455c-bb3c-17977d38e912-serving-cert\") pod \"kube-controller-manager-operator-ff989d6cc-wbfrm\" (UID: \"71ca96e8-5108-455c-bb3c-17977d38e912\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-wbfrm" Mar 20 08:35:20.326399 master-0 kubenswrapper[7476]: I0320 08:35:20.326375 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/5707066a-bd66-41bc-8cea-cff1630ab5ee-telemetry-config\") pod \"cluster-monitoring-operator-58845fbb57-6vgt6\" (UID: \"5707066a-bd66-41bc-8cea-cff1630ab5ee\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-6vgt6" Mar 20 08:35:20.326463 master-0 kubenswrapper[7476]: I0320 08:35:20.326444 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fec3170d-3f3e-42f5-b20a-da53721c0dac-etcd-client\") pod \"etcd-operator-8544cbcf9c-7x9vq\" (UID: \"fec3170d-3f3e-42f5-b20a-da53721c0dac\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-7x9vq" Mar 20 08:35:20.326497 master-0 kubenswrapper[7476]: I0320 08:35:20.326475 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20ff930f-ec0d-40ed-a879-1546691f685d-serving-cert\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-kxxrs\" (UID: \"20ff930f-ec0d-40ed-a879-1546691f685d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-kxxrs" Mar 20 08:35:20.326579 master-0 kubenswrapper[7476]: I0320 08:35:20.326563 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61ab4d32-c732-4be5-aa85-a2e1dd21cb60-serving-cert\") pod \"openshift-controller-manager-operator-8c94f4649-w8c24\" (UID: 
\"61ab4d32-c732-4be5-aa85-a2e1dd21cb60\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-w8c24" Mar 20 08:35:20.326607 master-0 kubenswrapper[7476]: I0320 08:35:20.326566 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fec3170d-3f3e-42f5-b20a-da53721c0dac-config\") pod \"etcd-operator-8544cbcf9c-7x9vq\" (UID: \"fec3170d-3f3e-42f5-b20a-da53721c0dac\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-7x9vq" Mar 20 08:35:20.326635 master-0 kubenswrapper[7476]: I0320 08:35:20.326624 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9d653bfa-7168-49fa-a838-aedb33c7e60f-env-overrides\") pod \"network-node-identity-dq29v\" (UID: \"9d653bfa-7168-49fa-a838-aedb33c7e60f\") " pod="openshift-network-node-identity/network-node-identity-dq29v" Mar 20 08:35:20.326664 master-0 kubenswrapper[7476]: I0320 08:35:20.326646 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072-serving-cert\") pod \"authentication-operator-5885bfd7f4-tdpfq\" (UID: \"8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-tdpfq" Mar 20 08:35:20.326793 master-0 kubenswrapper[7476]: I0320 08:35:20.326768 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/fec3170d-3f3e-42f5-b20a-da53721c0dac-etcd-ca\") pod \"etcd-operator-8544cbcf9c-7x9vq\" (UID: \"fec3170d-3f3e-42f5-b20a-da53721c0dac\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-7x9vq" Mar 20 08:35:20.326955 master-0 kubenswrapper[7476]: I0320 08:35:20.326940 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/2faf85a2-29bb-4275-a12b-0ef1663a4f0d-config\") pod \"kube-apiserver-operator-8b68b9d9b-xwkzx\" (UID: \"2faf85a2-29bb-4275-a12b-0ef1663a4f0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-xwkzx" Mar 20 08:35:20.326983 master-0 kubenswrapper[7476]: I0320 08:35:20.326964 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61ab4d32-c732-4be5-aa85-a2e1dd21cb60-config\") pod \"openshift-controller-manager-operator-8c94f4649-w8c24\" (UID: \"61ab4d32-c732-4be5-aa85-a2e1dd21cb60\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-w8c24" Mar 20 08:35:20.327153 master-0 kubenswrapper[7476]: I0320 08:35:20.327137 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/22ff82cf-0d7d-4955-9b7c-97757acbc021-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-x7vrg\" (UID: \"22ff82cf-0d7d-4955-9b7c-97757acbc021\") " pod="openshift-multus/multus-additional-cni-plugins-x7vrg" Mar 20 08:35:20.327201 master-0 kubenswrapper[7476]: I0320 08:35:20.327181 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/22f85e98-eb36-46b2-ab5d-7c21e060cba5-trusted-ca\") pod \"ingress-operator-66b84d69b-dknxr\" (UID: \"22f85e98-eb36-46b2-ab5d-7c21e060cba5\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-dknxr" Mar 20 08:35:20.327252 master-0 kubenswrapper[7476]: I0320 08:35:20.327226 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-ovnkube-config\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:35:20.327370 master-0 kubenswrapper[7476]: I0320 
08:35:20.327355 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-cni-binary-copy\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj" Mar 20 08:35:20.327474 master-0 kubenswrapper[7476]: I0320 08:35:20.327449 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65157a9b-3df7-4cc1-a85a-a5dfa59921ad-serving-cert\") pod \"openshift-kube-scheduler-operator-dddff6458-vmwqt\" (UID: \"65157a9b-3df7-4cc1-a85a-a5dfa59921ad\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-vmwqt" Mar 20 08:35:20.327665 master-0 kubenswrapper[7476]: I0320 08:35:20.327641 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2faf85a2-29bb-4275-a12b-0ef1663a4f0d-serving-cert\") pod \"kube-apiserver-operator-8b68b9d9b-xwkzx\" (UID: \"2faf85a2-29bb-4275-a12b-0ef1663a4f0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-xwkzx" Mar 20 08:35:20.327696 master-0 kubenswrapper[7476]: I0320 08:35:20.327670 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-multus-daemon-config\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj" Mar 20 08:35:20.327725 master-0 kubenswrapper[7476]: I0320 08:35:20.327699 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6d26f719-43b9-4c1c-9a54-ff800177db68-trusted-ca\") pod \"cluster-node-tuning-operator-598fbc5f8f-zxgdk\" (UID: \"6d26f719-43b9-4c1c-9a54-ff800177db68\") " 
pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zxgdk" Mar 20 08:35:20.327895 master-0 kubenswrapper[7476]: I0320 08:35:20.327873 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072-config\") pod \"authentication-operator-5885bfd7f4-tdpfq\" (UID: \"8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-tdpfq" Mar 20 08:35:20.328187 master-0 kubenswrapper[7476]: I0320 08:35:20.328153 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9d653bfa-7168-49fa-a838-aedb33c7e60f-webhook-cert\") pod \"network-node-identity-dq29v\" (UID: \"9d653bfa-7168-49fa-a838-aedb33c7e60f\") " pod="openshift-network-node-identity/network-node-identity-dq29v" Mar 20 08:35:20.328222 master-0 kubenswrapper[7476]: I0320 08:35:20.328187 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/fec3170d-3f3e-42f5-b20a-da53721c0dac-etcd-service-ca\") pod \"etcd-operator-8544cbcf9c-7x9vq\" (UID: \"fec3170d-3f3e-42f5-b20a-da53721c0dac\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-7x9vq" Mar 20 08:35:20.328314 master-0 kubenswrapper[7476]: I0320 08:35:20.328238 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/e9425526-9f51-4302-a19d-a8107f56c582-operand-assets\") pod \"cluster-olm-operator-67dcd4998-gwtrl\" (UID: \"e9425526-9f51-4302-a19d-a8107f56c582\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-gwtrl" Mar 20 08:35:20.328314 master-0 kubenswrapper[7476]: I0320 08:35:20.328308 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/210dd7f0-d1c0-407a-b89b-f11ef605e5df-ovnkube-config\") pod \"ovnkube-control-plane-57f769d897-crrdk\" (UID: \"210dd7f0-d1c0-407a-b89b-f11ef605e5df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-crrdk" Mar 20 08:35:20.328406 master-0 kubenswrapper[7476]: I0320 08:35:20.328377 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ef691ec-d1f0-4c97-97e4-4aa7a6c0a86c-config\") pod \"openshift-apiserver-operator-d65958b8-ntdqc\" (UID: \"2ef691ec-d1f0-4c97-97e4-4aa7a6c0a86c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-ntdqc" Mar 20 08:35:20.328495 master-0 kubenswrapper[7476]: I0320 08:35:20.328463 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/3065e4b4-4493-41ce-b9d2-89315475f74f-available-featuregates\") pod \"openshift-config-operator-95bf4f4d-25jrp\" (UID: \"3065e4b4-4493-41ce-b9d2-89315475f74f\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-25jrp" Mar 20 08:35:20.328538 master-0 kubenswrapper[7476]: I0320 08:35:20.327461 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9817d1ec-3d7c-49fb-8e41-26f5727ef9e8-metrics-tls\") pod \"network-operator-7bd846bfc4-x4w25\" (UID: \"9817d1ec-3d7c-49fb-8e41-26f5727ef9e8\") " pod="openshift-network-operator/network-operator-7bd846bfc4-x4w25" Mar 20 08:35:20.328633 master-0 kubenswrapper[7476]: I0320 08:35:20.328608 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65157a9b-3df7-4cc1-a85a-a5dfa59921ad-config\") pod \"openshift-kube-scheduler-operator-dddff6458-vmwqt\" (UID: \"65157a9b-3df7-4cc1-a85a-a5dfa59921ad\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-vmwqt" Mar 20 08:35:20.328667 
master-0 kubenswrapper[7476]: I0320 08:35:20.328657 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09a5682c-4f13-4b8c-8179-3e6dfa8f98db-config\") pod \"service-ca-operator-b865698dc-zxwrd\" (UID: \"09a5682c-4f13-4b8c-8179-3e6dfa8f98db\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-zxwrd" Mar 20 08:35:20.328835 master-0 kubenswrapper[7476]: I0320 08:35:20.328803 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/57189f7c-5987-457d-a299-0a6b9bcb3e24-trusted-ca\") pod \"cluster-image-registry-operator-5549dc66cb-cg8qr\" (UID: \"57189f7c-5987-457d-a299-0a6b9bcb3e24\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-cg8qr" Mar 20 08:35:20.328868 master-0 kubenswrapper[7476]: I0320 08:35:20.328843 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072-trusted-ca-bundle\") pod \"authentication-operator-5885bfd7f4-tdpfq\" (UID: \"8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-tdpfq" Mar 20 08:35:20.328868 master-0 kubenswrapper[7476]: I0320 08:35:20.328853 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/22ff82cf-0d7d-4955-9b7c-97757acbc021-cni-binary-copy\") pod \"multus-additional-cni-plugins-x7vrg\" (UID: \"22ff82cf-0d7d-4955-9b7c-97757acbc021\") " pod="openshift-multus/multus-additional-cni-plugins-x7vrg" Mar 20 08:35:20.329024 master-0 kubenswrapper[7476]: I0320 08:35:20.329001 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-env-overrides\") pod \"ovnkube-node-bvndl\" (UID: 
\"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:35:20.329055 master-0 kubenswrapper[7476]: I0320 08:35:20.329031 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f202273a-b111-46ce-b404-7e481d2c7ff9-config\") pod \"cluster-baremetal-operator-6f69995874-b25f2\" (UID: \"f202273a-b111-46ce-b404-7e481d2c7ff9\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-b25f2" Mar 20 08:35:20.329106 master-0 kubenswrapper[7476]: I0320 08:35:20.329086 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/210dd7f0-d1c0-407a-b89b-f11ef605e5df-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57f769d897-crrdk\" (UID: \"210dd7f0-d1c0-407a-b89b-f11ef605e5df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-crrdk" Mar 20 08:35:20.329147 master-0 kubenswrapper[7476]: I0320 08:35:20.329129 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fec3170d-3f3e-42f5-b20a-da53721c0dac-serving-cert\") pod \"etcd-operator-8544cbcf9c-7x9vq\" (UID: \"fec3170d-3f3e-42f5-b20a-da53721c0dac\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-7x9vq" Mar 20 08:35:20.329357 master-0 kubenswrapper[7476]: I0320 08:35:20.329331 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ef691ec-d1f0-4c97-97e4-4aa7a6c0a86c-serving-cert\") pod \"openshift-apiserver-operator-d65958b8-ntdqc\" (UID: \"2ef691ec-d1f0-4c97-97e4-4aa7a6c0a86c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-ntdqc" Mar 20 08:35:20.329537 master-0 kubenswrapper[7476]: I0320 08:35:20.329516 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/3065e4b4-4493-41ce-b9d2-89315475f74f-serving-cert\") pod \"openshift-config-operator-95bf4f4d-25jrp\" (UID: \"3065e4b4-4493-41ce-b9d2-89315475f74f\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-25jrp" Mar 20 08:35:20.329572 master-0 kubenswrapper[7476]: I0320 08:35:20.325645 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/23003a2f-2053-47cc-8133-23eb886d4da0-marketplace-trusted-ca\") pod \"marketplace-operator-89ccd998f-j84r8\" (UID: \"23003a2f-2053-47cc-8133-23eb886d4da0\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-j84r8" Mar 20 08:35:20.331812 master-0 kubenswrapper[7476]: I0320 08:35:20.331784 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 20 08:35:20.335713 master-0 kubenswrapper[7476]: I0320 08:35:20.335688 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-ovn-node-metrics-cert\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:35:20.345821 master-0 kubenswrapper[7476]: I0320 08:35:20.345797 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 20 08:35:20.347163 master-0 kubenswrapper[7476]: I0320 08:35:20.347125 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-ovnkube-script-lib\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:35:20.366340 master-0 kubenswrapper[7476]: I0320 08:35:20.366253 7476 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-network-operator"/"iptables-alerter-script" Mar 20 08:35:20.369020 master-0 kubenswrapper[7476]: I0320 08:35:20.368975 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/b097596e-79e1-44d1-be8a-96340042a041-iptables-alerter-script\") pod \"iptables-alerter-9xlf2\" (UID: \"b097596e-79e1-44d1-be8a-96340042a041\") " pod="openshift-network-operator/iptables-alerter-9xlf2" Mar 20 08:35:20.385692 master-0 kubenswrapper[7476]: I0320 08:35:20.385651 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-flatfile-config" Mar 20 08:35:20.389616 master-0 kubenswrapper[7476]: I0320 08:35:20.389584 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/22ff82cf-0d7d-4955-9b7c-97757acbc021-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-x7vrg\" (UID: \"22ff82cf-0d7d-4955-9b7c-97757acbc021\") " pod="openshift-multus/multus-additional-cni-plugins-x7vrg" Mar 20 08:35:20.405327 master-0 kubenswrapper[7476]: I0320 08:35:20.405299 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 20 08:35:20.425617 master-0 kubenswrapper[7476]: I0320 08:35:20.425499 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-var-lib-openvswitch\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:35:20.425617 master-0 kubenswrapper[7476]: I0320 08:35:20.425537 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-host-cni-bin\") pod 
\"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:35:20.425617 master-0 kubenswrapper[7476]: I0320 08:35:20.425555 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-host-run-netns\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj" Mar 20 08:35:20.425617 master-0 kubenswrapper[7476]: I0320 08:35:20.425571 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-multus-conf-dir\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj" Mar 20 08:35:20.425617 master-0 kubenswrapper[7476]: I0320 08:35:20.425586 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-cnibin\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj" Mar 20 08:35:20.425617 master-0 kubenswrapper[7476]: I0320 08:35:20.425601 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-host-run-k8s-cni-cncf-io\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj" Mar 20 08:35:20.425617 master-0 kubenswrapper[7476]: I0320 08:35:20.425615 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f202273a-b111-46ce-b404-7e481d2c7ff9-cert\") pod \"cluster-baremetal-operator-6f69995874-b25f2\" (UID: \"f202273a-b111-46ce-b404-7e481d2c7ff9\") " 
pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-b25f2" Mar 20 08:35:20.425932 master-0 kubenswrapper[7476]: I0320 08:35:20.425634 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-host-run-netns\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:35:20.425932 master-0 kubenswrapper[7476]: I0320 08:35:20.425656 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-system-cni-dir\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj" Mar 20 08:35:20.425932 master-0 kubenswrapper[7476]: I0320 08:35:20.425672 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/5707066a-bd66-41bc-8cea-cff1630ab5ee-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-6vgt6\" (UID: \"5707066a-bd66-41bc-8cea-cff1630ab5ee\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-6vgt6" Mar 20 08:35:20.425932 master-0 kubenswrapper[7476]: I0320 08:35:20.425698 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-multus-cni-dir\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj" Mar 20 08:35:20.425932 master-0 kubenswrapper[7476]: I0320 08:35:20.425874 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/6d26f719-43b9-4c1c-9a54-ff800177db68-node-tuning-operator-tls\") pod 
\"cluster-node-tuning-operator-598fbc5f8f-zxgdk\" (UID: \"6d26f719-43b9-4c1c-9a54-ff800177db68\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zxgdk" Mar 20 08:35:20.425932 master-0 kubenswrapper[7476]: I0320 08:35:20.425893 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/9817d1ec-3d7c-49fb-8e41-26f5727ef9e8-host-etc-kube\") pod \"network-operator-7bd846bfc4-x4w25\" (UID: \"9817d1ec-3d7c-49fb-8e41-26f5727ef9e8\") " pod="openshift-network-operator/network-operator-7bd846bfc4-x4w25" Mar 20 08:35:20.426080 master-0 kubenswrapper[7476]: I0320 08:35:20.425942 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9ce482dc-d0ac-40bc-9058-a1cfdc81575e-srv-cert\") pod \"catalog-operator-68f85b4d6c-hdw98\" (UID: \"9ce482dc-d0ac-40bc-9058-a1cfdc81575e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hdw98" Mar 20 08:35:20.426080 master-0 kubenswrapper[7476]: I0320 08:35:20.425958 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/22ff82cf-0d7d-4955-9b7c-97757acbc021-system-cni-dir\") pod \"multus-additional-cni-plugins-x7vrg\" (UID: \"22ff82cf-0d7d-4955-9b7c-97757acbc021\") " pod="openshift-multus/multus-additional-cni-plugins-x7vrg" Mar 20 08:35:20.426080 master-0 kubenswrapper[7476]: I0320 08:35:20.425979 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-var-lib-openvswitch\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:35:20.426080 master-0 kubenswrapper[7476]: I0320 08:35:20.426019 7476 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/9817d1ec-3d7c-49fb-8e41-26f5727ef9e8-host-etc-kube\") pod \"network-operator-7bd846bfc4-x4w25\" (UID: \"9817d1ec-3d7c-49fb-8e41-26f5727ef9e8\") " pod="openshift-network-operator/network-operator-7bd846bfc4-x4w25" Mar 20 08:35:20.426080 master-0 kubenswrapper[7476]: I0320 08:35:20.426026 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-host-run-netns\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:35:20.426080 master-0 kubenswrapper[7476]: E0320 08:35:20.426061 7476 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 20 08:35:20.426080 master-0 kubenswrapper[7476]: E0320 08:35:20.426083 7476 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 20 08:35:20.426253 master-0 kubenswrapper[7476]: E0320 08:35:20.426108 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ce482dc-d0ac-40bc-9058-a1cfdc81575e-srv-cert podName:9ce482dc-d0ac-40bc-9058-a1cfdc81575e nodeName:}" failed. No retries permitted until 2026-03-20 08:35:20.92609366 +0000 UTC m=+1.894862186 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/9ce482dc-d0ac-40bc-9058-a1cfdc81575e-srv-cert") pod "catalog-operator-68f85b4d6c-hdw98" (UID: "9ce482dc-d0ac-40bc-9058-a1cfdc81575e") : secret "catalog-operator-serving-cert" not found Mar 20 08:35:20.426253 master-0 kubenswrapper[7476]: E0320 08:35:20.426119 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5707066a-bd66-41bc-8cea-cff1630ab5ee-cluster-monitoring-operator-tls podName:5707066a-bd66-41bc-8cea-cff1630ab5ee nodeName:}" failed. No retries permitted until 2026-03-20 08:35:20.926114 +0000 UTC m=+1.894882526 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/5707066a-bd66-41bc-8cea-cff1630ab5ee-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-6vgt6" (UID: "5707066a-bd66-41bc-8cea-cff1630ab5ee") : secret "cluster-monitoring-operator-tls" not found Mar 20 08:35:20.426253 master-0 kubenswrapper[7476]: I0320 08:35:20.426140 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-multus-cni-dir\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj" Mar 20 08:35:20.426253 master-0 kubenswrapper[7476]: I0320 08:35:20.426140 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/22ff82cf-0d7d-4955-9b7c-97757acbc021-system-cni-dir\") pod \"multus-additional-cni-plugins-x7vrg\" (UID: \"22ff82cf-0d7d-4955-9b7c-97757acbc021\") " pod="openshift-multus/multus-additional-cni-plugins-x7vrg" Mar 20 08:35:20.426253 master-0 kubenswrapper[7476]: E0320 08:35:20.426146 7476 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret 
"node-tuning-operator-tls" not found Mar 20 08:35:20.426253 master-0 kubenswrapper[7476]: I0320 08:35:20.426163 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-host-cni-bin\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:35:20.426253 master-0 kubenswrapper[7476]: I0320 08:35:20.426155 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-multus-conf-dir\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj" Mar 20 08:35:20.426253 master-0 kubenswrapper[7476]: I0320 08:35:20.426043 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-system-cni-dir\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj" Mar 20 08:35:20.426253 master-0 kubenswrapper[7476]: I0320 08:35:20.426192 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-host-run-netns\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj" Mar 20 08:35:20.426253 master-0 kubenswrapper[7476]: I0320 08:35:20.425982 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-host-run-ovn-kubernetes\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:35:20.426253 master-0 kubenswrapper[7476]: E0320 
08:35:20.426203 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d26f719-43b9-4c1c-9a54-ff800177db68-node-tuning-operator-tls podName:6d26f719-43b9-4c1c-9a54-ff800177db68 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:20.926183682 +0000 UTC m=+1.894952238 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/6d26f719-43b9-4c1c-9a54-ff800177db68-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-zxgdk" (UID: "6d26f719-43b9-4c1c-9a54-ff800177db68") : secret "node-tuning-operator-tls" not found Mar 20 08:35:20.426253 master-0 kubenswrapper[7476]: I0320 08:35:20.426219 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nf5kc\" (UniqueName: \"kubernetes.io/projected/ca6e644f-c53b-41dd-a16f-9fb9997533dd-kube-api-access-nf5kc\") pod \"network-check-target-j9jjm\" (UID: \"ca6e644f-c53b-41dd-a16f-9fb9997533dd\") " pod="openshift-network-diagnostics/network-check-target-j9jjm" Mar 20 08:35:20.426253 master-0 kubenswrapper[7476]: I0320 08:35:20.426240 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-host-run-k8s-cni-cncf-io\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj" Mar 20 08:35:20.426253 master-0 kubenswrapper[7476]: I0320 08:35:20.426280 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-host-slash\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:35:20.426620 master-0 kubenswrapper[7476]: I0320 08:35:20.426244 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-host-slash\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:35:20.426620 master-0 kubenswrapper[7476]: I0320 08:35:20.426348 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ff2dfe9d-2834-43cb-b093-0831b2b87131-metrics-tls\") pod \"dns-operator-9c5679d8f-xfns6\" (UID: \"ff2dfe9d-2834-43cb-b093-0831b2b87131\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-xfns6" Mar 20 08:35:20.426620 master-0 kubenswrapper[7476]: I0320 08:35:20.426363 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-host-run-ovn-kubernetes\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:35:20.426620 master-0 kubenswrapper[7476]: I0320 08:35:20.425963 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-cnibin\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj" Mar 20 08:35:20.426620 master-0 kubenswrapper[7476]: I0320 08:35:20.426395 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/22ff82cf-0d7d-4955-9b7c-97757acbc021-cnibin\") pod \"multus-additional-cni-plugins-x7vrg\" (UID: \"22ff82cf-0d7d-4955-9b7c-97757acbc021\") " pod="openshift-multus/multus-additional-cni-plugins-x7vrg" Mar 20 08:35:20.426620 master-0 kubenswrapper[7476]: E0320 08:35:20.426411 7476 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret 
"cluster-baremetal-webhook-server-cert" not found Mar 20 08:35:20.426620 master-0 kubenswrapper[7476]: E0320 08:35:20.426432 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f202273a-b111-46ce-b404-7e481d2c7ff9-cert podName:f202273a-b111-46ce-b404-7e481d2c7ff9 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:20.926424668 +0000 UTC m=+1.895193194 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f202273a-b111-46ce-b404-7e481d2c7ff9-cert") pod "cluster-baremetal-operator-6f69995874-b25f2" (UID: "f202273a-b111-46ce-b404-7e481d2c7ff9") : secret "cluster-baremetal-webhook-server-cert" not found Mar 20 08:35:20.426620 master-0 kubenswrapper[7476]: E0320 08:35:20.426448 7476 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 20 08:35:20.426620 master-0 kubenswrapper[7476]: I0320 08:35:20.426465 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/22ff82cf-0d7d-4955-9b7c-97757acbc021-cnibin\") pod \"multus-additional-cni-plugins-x7vrg\" (UID: \"22ff82cf-0d7d-4955-9b7c-97757acbc021\") " pod="openshift-multus/multus-additional-cni-plugins-x7vrg" Mar 20 08:35:20.426620 master-0 kubenswrapper[7476]: E0320 08:35:20.426473 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff2dfe9d-2834-43cb-b093-0831b2b87131-metrics-tls podName:ff2dfe9d-2834-43cb-b093-0831b2b87131 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:20.926464459 +0000 UTC m=+1.895232985 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/ff2dfe9d-2834-43cb-b093-0831b2b87131-metrics-tls") pod "dns-operator-9c5679d8f-xfns6" (UID: "ff2dfe9d-2834-43cb-b093-0831b2b87131") : secret "metrics-tls" not found Mar 20 08:35:20.426620 master-0 kubenswrapper[7476]: I0320 08:35:20.426522 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-log-socket\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:35:20.426620 master-0 kubenswrapper[7476]: I0320 08:35:20.426556 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-log-socket\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:35:20.426620 master-0 kubenswrapper[7476]: I0320 08:35:20.426597 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-etc-openvswitch\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:35:20.426937 master-0 kubenswrapper[7476]: I0320 08:35:20.426648 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/23003a2f-2053-47cc-8133-23eb886d4da0-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-j84r8\" (UID: \"23003a2f-2053-47cc-8133-23eb886d4da0\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-j84r8" Mar 20 08:35:20.426937 master-0 kubenswrapper[7476]: I0320 08:35:20.426658 7476 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-etc-openvswitch\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:35:20.426937 master-0 kubenswrapper[7476]: I0320 08:35:20.426823 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/0e79950f-50a5-46ec-b836-7a35dcce2851-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-cgc9q\" (UID: \"0e79950f-50a5-46ec-b836-7a35dcce2851\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-cgc9q" Mar 20 08:35:20.426937 master-0 kubenswrapper[7476]: I0320 08:35:20.426849 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3776fdb6-25a1-4e3d-bdd1-437c69af3a55-serving-cert\") pod \"cluster-version-operator-56d8475767-jtqd4\" (UID: \"3776fdb6-25a1-4e3d-bdd1-437c69af3a55\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-jtqd4" Mar 20 08:35:20.426937 master-0 kubenswrapper[7476]: I0320 08:35:20.426872 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/00350ac7-b40a-4459-b94c-a37d7b613645-metrics-certs\") pod \"network-metrics-daemon-nfrth\" (UID: \"00350ac7-b40a-4459-b94c-a37d7b613645\") " pod="openshift-multus/network-metrics-daemon-nfrth" Mar 20 08:35:20.426937 master-0 kubenswrapper[7476]: E0320 08:35:20.426866 7476 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 20 08:35:20.426937 master-0 kubenswrapper[7476]: E0320 08:35:20.426929 7476 secret.go:189] Couldn't get secret 
openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 20 08:35:20.427103 master-0 kubenswrapper[7476]: E0320 08:35:20.426953 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3776fdb6-25a1-4e3d-bdd1-437c69af3a55-serving-cert podName:3776fdb6-25a1-4e3d-bdd1-437c69af3a55 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:20.926946112 +0000 UTC m=+1.895714638 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/3776fdb6-25a1-4e3d-bdd1-437c69af3a55-serving-cert") pod "cluster-version-operator-56d8475767-jtqd4" (UID: "3776fdb6-25a1-4e3d-bdd1-437c69af3a55") : secret "cluster-version-operator-serving-cert" not found Mar 20 08:35:20.427103 master-0 kubenswrapper[7476]: E0320 08:35:20.426991 7476 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Mar 20 08:35:20.427103 master-0 kubenswrapper[7476]: E0320 08:35:20.427001 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23003a2f-2053-47cc-8133-23eb886d4da0-marketplace-operator-metrics podName:23003a2f-2053-47cc-8133-23eb886d4da0 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:20.926973132 +0000 UTC m=+1.895741708 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/23003a2f-2053-47cc-8133-23eb886d4da0-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-j84r8" (UID: "23003a2f-2053-47cc-8133-23eb886d4da0") : secret "marketplace-operator-metrics" not found
Mar 20 08:35:20.427103 master-0 kubenswrapper[7476]: I0320 08:35:20.426887 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-run-ovn\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl"
Mar 20 08:35:20.427103 master-0 kubenswrapper[7476]: E0320 08:35:20.426985 7476 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Mar 20 08:35:20.427103 master-0 kubenswrapper[7476]: E0320 08:35:20.427031 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/00350ac7-b40a-4459-b94c-a37d7b613645-metrics-certs podName:00350ac7-b40a-4459-b94c-a37d7b613645 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:20.927018754 +0000 UTC m=+1.895787380 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/00350ac7-b40a-4459-b94c-a37d7b613645-metrics-certs") pod "network-metrics-daemon-nfrth" (UID: "00350ac7-b40a-4459-b94c-a37d7b613645") : secret "metrics-daemon-secret" not found
Mar 20 08:35:20.427103 master-0 kubenswrapper[7476]: I0320 08:35:20.427025 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-run-ovn\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl"
Mar 20 08:35:20.427103 master-0 kubenswrapper[7476]: I0320 08:35:20.427076 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-multus-socket-dir-parent\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj"
Mar 20 08:35:20.427323 master-0 kubenswrapper[7476]: I0320 08:35:20.427130 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-host-kubelet\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl"
Mar 20 08:35:20.427323 master-0 kubenswrapper[7476]: I0320 08:35:20.427140 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-multus-socket-dir-parent\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj"
Mar 20 08:35:20.427323 master-0 kubenswrapper[7476]: I0320 08:35:20.427164 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl"
Mar 20 08:35:20.427323 master-0 kubenswrapper[7476]: E0320 08:35:20.427190 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0e79950f-50a5-46ec-b836-7a35dcce2851-package-server-manager-serving-cert podName:0e79950f-50a5-46ec-b836-7a35dcce2851 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:20.927181578 +0000 UTC m=+1.895950114 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/0e79950f-50a5-46ec-b836-7a35dcce2851-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-cgc9q" (UID: "0e79950f-50a5-46ec-b836-7a35dcce2851") : secret "package-server-manager-serving-cert" not found
Mar 20 08:35:20.427323 master-0 kubenswrapper[7476]: I0320 08:35:20.427235 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl"
Mar 20 08:35:20.427323 master-0 kubenswrapper[7476]: I0320 08:35:20.427238 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-host-kubelet\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl"
Mar 20 08:35:20.427323 master-0 kubenswrapper[7476]: I0320 08:35:20.427256 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7ab32efc-7cc5-4e36-9c1c-05efb19914e2-srv-cert\") pod \"olm-operator-5c9796789-t926t\" (UID: \"7ab32efc-7cc5-4e36-9c1c-05efb19914e2\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-t926t"
Mar 20 08:35:20.427323 master-0 kubenswrapper[7476]: I0320 08:35:20.427323 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-host-run-multus-certs\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj"
Mar 20 08:35:20.427527 master-0 kubenswrapper[7476]: E0320 08:35:20.427346 7476 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Mar 20 08:35:20.427527 master-0 kubenswrapper[7476]: I0320 08:35:20.427365 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-host-var-lib-cni-multus\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj"
Mar 20 08:35:20.427527 master-0 kubenswrapper[7476]: E0320 08:35:20.427380 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7ab32efc-7cc5-4e36-9c1c-05efb19914e2-srv-cert podName:7ab32efc-7cc5-4e36-9c1c-05efb19914e2 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:20.927369633 +0000 UTC m=+1.896138259 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/7ab32efc-7cc5-4e36-9c1c-05efb19914e2-srv-cert") pod "olm-operator-5c9796789-t926t" (UID: "7ab32efc-7cc5-4e36-9c1c-05efb19914e2") : secret "olm-operator-serving-cert" not found
Mar 20 08:35:20.427527 master-0 kubenswrapper[7476]: I0320 08:35:20.427346 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-host-var-lib-cni-multus\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj"
Mar 20 08:35:20.427527 master-0 kubenswrapper[7476]: I0320 08:35:20.427433 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/22ff82cf-0d7d-4955-9b7c-97757acbc021-tuning-conf-dir\") pod \"multus-additional-cni-plugins-x7vrg\" (UID: \"22ff82cf-0d7d-4955-9b7c-97757acbc021\") " pod="openshift-multus/multus-additional-cni-plugins-x7vrg"
Mar 20 08:35:20.427527 master-0 kubenswrapper[7476]: I0320 08:35:20.427389 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-host-run-multus-certs\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj"
Mar 20 08:35:20.427527 master-0 kubenswrapper[7476]: I0320 08:35:20.427467 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-host-cni-netd\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl"
Mar 20 08:35:20.427527 master-0 kubenswrapper[7476]: I0320 08:35:20.427508 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-node-log\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl"
Mar 20 08:35:20.427725 master-0 kubenswrapper[7476]: I0320 08:35:20.427532 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-host-cni-netd\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl"
Mar 20 08:35:20.427725 master-0 kubenswrapper[7476]: I0320 08:35:20.427558 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-node-log\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl"
Mar 20 08:35:20.427725 master-0 kubenswrapper[7476]: I0320 08:35:20.427510 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/22ff82cf-0d7d-4955-9b7c-97757acbc021-tuning-conf-dir\") pod \"multus-additional-cni-plugins-x7vrg\" (UID: \"22ff82cf-0d7d-4955-9b7c-97757acbc021\") " pod="openshift-multus/multus-additional-cni-plugins-x7vrg"
Mar 20 08:35:20.427725 master-0 kubenswrapper[7476]: I0320 08:35:20.427577 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/f202273a-b111-46ce-b404-7e481d2c7ff9-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-b25f2\" (UID: \"f202273a-b111-46ce-b404-7e481d2c7ff9\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-b25f2"
Mar 20 08:35:20.427725 master-0 kubenswrapper[7476]: E0320 08:35:20.427673 7476 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found
Mar 20 08:35:20.427725 master-0 kubenswrapper[7476]: E0320 08:35:20.427698 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f202273a-b111-46ce-b404-7e481d2c7ff9-cluster-baremetal-operator-tls podName:f202273a-b111-46ce-b404-7e481d2c7ff9 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:20.927690111 +0000 UTC m=+1.896458637 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/f202273a-b111-46ce-b404-7e481d2c7ff9-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-6f69995874-b25f2" (UID: "f202273a-b111-46ce-b404-7e481d2c7ff9") : secret "cluster-baremetal-operator-tls" not found
Mar 20 08:35:20.427725 master-0 kubenswrapper[7476]: I0320 08:35:20.427682 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/22f85e98-eb36-46b2-ab5d-7c21e060cba5-metrics-tls\") pod \"ingress-operator-66b84d69b-dknxr\" (UID: \"22f85e98-eb36-46b2-ab5d-7c21e060cba5\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-dknxr"
Mar 20 08:35:20.427906 master-0 kubenswrapper[7476]: E0320 08:35:20.427732 7476 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Mar 20 08:35:20.427906 master-0 kubenswrapper[7476]: E0320 08:35:20.427750 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/22f85e98-eb36-46b2-ab5d-7c21e060cba5-metrics-tls podName:22f85e98-eb36-46b2-ab5d-7c21e060cba5 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:20.927744682 +0000 UTC m=+1.896513198 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/22f85e98-eb36-46b2-ab5d-7c21e060cba5-metrics-tls") pod "ingress-operator-66b84d69b-dknxr" (UID: "22f85e98-eb36-46b2-ab5d-7c21e060cba5") : secret "metrics-tls" not found
Mar 20 08:35:20.427906 master-0 kubenswrapper[7476]: I0320 08:35:20.427790 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/22ff82cf-0d7d-4955-9b7c-97757acbc021-os-release\") pod \"multus-additional-cni-plugins-x7vrg\" (UID: \"22ff82cf-0d7d-4955-9b7c-97757acbc021\") " pod="openshift-multus/multus-additional-cni-plugins-x7vrg"
Mar 20 08:35:20.427906 master-0 kubenswrapper[7476]: I0320 08:35:20.427844 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/74bebf0b-6727-4959-8239-a9389e630524-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-vhrdf\" (UID: \"74bebf0b-6727-4959-8239-a9389e630524\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-vhrdf"
Mar 20 08:35:20.427906 master-0 kubenswrapper[7476]: I0320 08:35:20.427880 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3776fdb6-25a1-4e3d-bdd1-437c69af3a55-etc-cvo-updatepayloads\") pod \"cluster-version-operator-56d8475767-jtqd4\" (UID: \"3776fdb6-25a1-4e3d-bdd1-437c69af3a55\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-jtqd4"
Mar 20 08:35:20.428133 master-0 kubenswrapper[7476]: E0320 08:35:20.427919 7476 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Mar 20 08:35:20.428133 master-0 kubenswrapper[7476]: I0320 08:35:20.427919 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-run-openvswitch\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl"
Mar 20 08:35:20.428133 master-0 kubenswrapper[7476]: E0320 08:35:20.427942 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74bebf0b-6727-4959-8239-a9389e630524-webhook-certs podName:74bebf0b-6727-4959-8239-a9389e630524 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:20.927935147 +0000 UTC m=+1.896703663 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/74bebf0b-6727-4959-8239-a9389e630524-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-vhrdf" (UID: "74bebf0b-6727-4959-8239-a9389e630524") : secret "multus-admission-controller-secret" not found
Mar 20 08:35:20.428133 master-0 kubenswrapper[7476]: I0320 08:35:20.427957 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b097596e-79e1-44d1-be8a-96340042a041-host-slash\") pod \"iptables-alerter-9xlf2\" (UID: \"b097596e-79e1-44d1-be8a-96340042a041\") " pod="openshift-network-operator/iptables-alerter-9xlf2"
Mar 20 08:35:20.428133 master-0 kubenswrapper[7476]: I0320 08:35:20.427975 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-run-openvswitch\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl"
Mar 20 08:35:20.428133 master-0 kubenswrapper[7476]: I0320 08:35:20.427884 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/22ff82cf-0d7d-4955-9b7c-97757acbc021-os-release\") pod \"multus-additional-cni-plugins-x7vrg\" (UID: \"22ff82cf-0d7d-4955-9b7c-97757acbc021\") " pod="openshift-multus/multus-additional-cni-plugins-x7vrg"
Mar 20 08:35:20.428133 master-0 kubenswrapper[7476]: I0320 08:35:20.428017 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b097596e-79e1-44d1-be8a-96340042a041-host-slash\") pod \"iptables-alerter-9xlf2\" (UID: \"b097596e-79e1-44d1-be8a-96340042a041\") " pod="openshift-network-operator/iptables-alerter-9xlf2"
Mar 20 08:35:20.428133 master-0 kubenswrapper[7476]: I0320 08:35:20.427980 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-systemd-units\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl"
Mar 20 08:35:20.428133 master-0 kubenswrapper[7476]: I0320 08:35:20.428030 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3776fdb6-25a1-4e3d-bdd1-437c69af3a55-etc-cvo-updatepayloads\") pod \"cluster-version-operator-56d8475767-jtqd4\" (UID: \"3776fdb6-25a1-4e3d-bdd1-437c69af3a55\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-jtqd4"
Mar 20 08:35:20.428133 master-0 kubenswrapper[7476]: I0320 08:35:20.428041 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-run-systemd\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl"
Mar 20 08:35:20.428133 master-0 kubenswrapper[7476]: I0320 08:35:20.428059 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-etc-kubernetes\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj"
Mar 20 08:35:20.428133 master-0 kubenswrapper[7476]: I0320 08:35:20.428068 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-systemd-units\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl"
Mar 20 08:35:20.428133 master-0 kubenswrapper[7476]: I0320 08:35:20.428081 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-host-var-lib-cni-bin\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj"
Mar 20 08:35:20.428133 master-0 kubenswrapper[7476]: I0320 08:35:20.428101 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-host-var-lib-cni-bin\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj"
Mar 20 08:35:20.428133 master-0 kubenswrapper[7476]: I0320 08:35:20.428123 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-run-systemd\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl"
Mar 20 08:35:20.428133 master-0 kubenswrapper[7476]: I0320 08:35:20.428144 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-etc-kubernetes\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj"
Mar 20 08:35:20.428666 master-0 kubenswrapper[7476]: I0320 08:35:20.428173 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-hostroot\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj"
Mar 20 08:35:20.428666 master-0 kubenswrapper[7476]: I0320 08:35:20.428198 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3776fdb6-25a1-4e3d-bdd1-437c69af3a55-etc-ssl-certs\") pod \"cluster-version-operator-56d8475767-jtqd4\" (UID: \"3776fdb6-25a1-4e3d-bdd1-437c69af3a55\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-jtqd4"
Mar 20 08:35:20.428666 master-0 kubenswrapper[7476]: I0320 08:35:20.428213 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-os-release\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj"
Mar 20 08:35:20.428666 master-0 kubenswrapper[7476]: I0320 08:35:20.428228 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6d26f719-43b9-4c1c-9a54-ff800177db68-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-zxgdk\" (UID: \"6d26f719-43b9-4c1c-9a54-ff800177db68\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zxgdk"
Mar 20 08:35:20.428666 master-0 kubenswrapper[7476]: I0320 08:35:20.428298 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-os-release\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj"
Mar 20 08:35:20.428666 master-0 kubenswrapper[7476]: I0320 08:35:20.428301 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3776fdb6-25a1-4e3d-bdd1-437c69af3a55-etc-ssl-certs\") pod \"cluster-version-operator-56d8475767-jtqd4\" (UID: \"3776fdb6-25a1-4e3d-bdd1-437c69af3a55\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-jtqd4"
Mar 20 08:35:20.428666 master-0 kubenswrapper[7476]: I0320 08:35:20.428318 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-hostroot\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj"
Mar 20 08:35:20.428666 master-0 kubenswrapper[7476]: E0320 08:35:20.428353 7476 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Mar 20 08:35:20.428666 master-0 kubenswrapper[7476]: E0320 08:35:20.428372 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d26f719-43b9-4c1c-9a54-ff800177db68-apiservice-cert podName:6d26f719-43b9-4c1c-9a54-ff800177db68 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:20.928365109 +0000 UTC m=+1.897133635 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/6d26f719-43b9-4c1c-9a54-ff800177db68-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-zxgdk" (UID: "6d26f719-43b9-4c1c-9a54-ff800177db68") : secret "performance-addon-operator-webhook-cert" not found
Mar 20 08:35:20.428666 master-0 kubenswrapper[7476]: I0320 08:35:20.428387 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/57189f7c-5987-457d-a299-0a6b9bcb3e24-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-cg8qr\" (UID: \"57189f7c-5987-457d-a299-0a6b9bcb3e24\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-cg8qr"
Mar 20 08:35:20.428666 master-0 kubenswrapper[7476]: I0320 08:35:20.428405 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-host-var-lib-kubelet\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj"
Mar 20 08:35:20.428666 master-0 kubenswrapper[7476]: I0320 08:35:20.428442 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-host-var-lib-kubelet\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj"
Mar 20 08:35:20.428666 master-0 kubenswrapper[7476]: E0320 08:35:20.428510 7476 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Mar 20 08:35:20.428666 master-0 kubenswrapper[7476]: E0320 08:35:20.428610 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/57189f7c-5987-457d-a299-0a6b9bcb3e24-image-registry-operator-tls podName:57189f7c-5987-457d-a299-0a6b9bcb3e24 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:20.928592075 +0000 UTC m=+1.897360621 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/57189f7c-5987-457d-a299-0a6b9bcb3e24-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-cg8qr" (UID: "57189f7c-5987-457d-a299-0a6b9bcb3e24") : secret "image-registry-operator-tls" not found
Mar 20 08:35:20.451516 master-0 kubenswrapper[7476]: E0320 08:35:20.451429 7476 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-apiserver-master-0\" already exists" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 20 08:35:20.479534 master-0 kubenswrapper[7476]: I0320 08:35:20.479488 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56bt6\" (UniqueName: \"kubernetes.io/projected/f202273a-b111-46ce-b404-7e481d2c7ff9-kube-api-access-56bt6\") pod \"cluster-baremetal-operator-6f69995874-b25f2\" (UID: \"f202273a-b111-46ce-b404-7e481d2c7ff9\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-b25f2"
Mar 20 08:35:20.496619 master-0 kubenswrapper[7476]: I0320 08:35:20.496575 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdsv9\" (UniqueName: \"kubernetes.io/projected/0e79950f-50a5-46ec-b836-7a35dcce2851-kube-api-access-rdsv9\") pod \"package-server-manager-7b95f86987-cgc9q\" (UID: \"0e79950f-50a5-46ec-b836-7a35dcce2851\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-cgc9q"
Mar 20 08:35:20.515235 master-0 kubenswrapper[7476]: I0320 08:35:20.515200 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzprw\" (UniqueName: \"kubernetes.io/projected/61ab4d32-c732-4be5-aa85-a2e1dd21cb60-kube-api-access-lzprw\") pod \"openshift-controller-manager-operator-8c94f4649-w8c24\" (UID: \"61ab4d32-c732-4be5-aa85-a2e1dd21cb60\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-w8c24"
Mar 20 08:35:20.535942 master-0 kubenswrapper[7476]: I0320 08:35:20.535908 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2j6m\" (UniqueName: \"kubernetes.io/projected/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-kube-api-access-s2j6m\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl"
Mar 20 08:35:20.556511 master-0 kubenswrapper[7476]: I0320 08:35:20.556471 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7gdm\" (UniqueName: \"kubernetes.io/projected/23003a2f-2053-47cc-8133-23eb886d4da0-kube-api-access-q7gdm\") pod \"marketplace-operator-89ccd998f-j84r8\" (UID: \"23003a2f-2053-47cc-8133-23eb886d4da0\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-j84r8"
Mar 20 08:35:20.576464 master-0 kubenswrapper[7476]: I0320 08:35:20.576424 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jmlf\" (UniqueName: \"kubernetes.io/projected/9d653bfa-7168-49fa-a838-aedb33c7e60f-kube-api-access-8jmlf\") pod \"network-node-identity-dq29v\" (UID: \"9d653bfa-7168-49fa-a838-aedb33c7e60f\") " pod="openshift-network-node-identity/network-node-identity-dq29v"
Mar 20 08:35:20.596353 master-0 kubenswrapper[7476]: I0320 08:35:20.595844 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3776fdb6-25a1-4e3d-bdd1-437c69af3a55-kube-api-access\") pod \"cluster-version-operator-56d8475767-jtqd4\" (UID: \"3776fdb6-25a1-4e3d-bdd1-437c69af3a55\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-jtqd4"
Mar 20 08:35:20.618608 master-0 kubenswrapper[7476]: I0320 08:35:20.618548 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55l9j\" (UniqueName: \"kubernetes.io/projected/7ab32efc-7cc5-4e36-9c1c-05efb19914e2-kube-api-access-55l9j\") pod \"olm-operator-5c9796789-t926t\" (UID: \"7ab32efc-7cc5-4e36-9c1c-05efb19914e2\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-t926t"
Mar 20 08:35:20.639897 master-0 kubenswrapper[7476]: I0320 08:35:20.639825 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2faf85a2-29bb-4275-a12b-0ef1663a4f0d-kube-api-access\") pod \"kube-apiserver-operator-8b68b9d9b-xwkzx\" (UID: \"2faf85a2-29bb-4275-a12b-0ef1663a4f0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-xwkzx"
Mar 20 08:35:20.656548 master-0 kubenswrapper[7476]: I0320 08:35:20.656505 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v86j8\" (UniqueName: \"kubernetes.io/projected/6d26f719-43b9-4c1c-9a54-ff800177db68-kube-api-access-v86j8\") pod \"cluster-node-tuning-operator-598fbc5f8f-zxgdk\" (UID: \"6d26f719-43b9-4c1c-9a54-ff800177db68\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zxgdk"
Mar 20 08:35:20.679951 master-0 kubenswrapper[7476]: I0320 08:35:20.679804 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sglvd\" (UniqueName: \"kubernetes.io/projected/22ff82cf-0d7d-4955-9b7c-97757acbc021-kube-api-access-sglvd\") pod \"multus-additional-cni-plugins-x7vrg\" (UID: \"22ff82cf-0d7d-4955-9b7c-97757acbc021\") " pod="openshift-multus/multus-additional-cni-plugins-x7vrg"
Mar 20 08:35:20.695063 master-0 kubenswrapper[7476]: I0320 08:35:20.695018 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/22f85e98-eb36-46b2-ab5d-7c21e060cba5-bound-sa-token\") pod \"ingress-operator-66b84d69b-dknxr\" (UID: \"22f85e98-eb36-46b2-ab5d-7c21e060cba5\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-dknxr"
Mar 20 08:35:20.714471 master-0 kubenswrapper[7476]: I0320 08:35:20.714398 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5hsj\" (UniqueName: \"kubernetes.io/projected/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-kube-api-access-j5hsj\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj"
Mar 20 08:35:20.736074 master-0 kubenswrapper[7476]: I0320 08:35:20.736015 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgl8m\" (UniqueName: \"kubernetes.io/projected/2ef691ec-d1f0-4c97-97e4-4aa7a6c0a86c-kube-api-access-rgl8m\") pod \"openshift-apiserver-operator-d65958b8-ntdqc\" (UID: \"2ef691ec-d1f0-4c97-97e4-4aa7a6c0a86c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-ntdqc"
Mar 20 08:35:20.759764 master-0 kubenswrapper[7476]: I0320 08:35:20.759719 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swxwt\" (UniqueName: \"kubernetes.io/projected/9817d1ec-3d7c-49fb-8e41-26f5727ef9e8-kube-api-access-swxwt\") pod \"network-operator-7bd846bfc4-x4w25\" (UID: \"9817d1ec-3d7c-49fb-8e41-26f5727ef9e8\") " pod="openshift-network-operator/network-operator-7bd846bfc4-x4w25"
Mar 20 08:35:20.788205 master-0 kubenswrapper[7476]: I0320 08:35:20.788170 7476 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 20 08:35:20.789899 master-0 kubenswrapper[7476]: I0320 08:35:20.789857 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpr8b\" (UniqueName: \"kubernetes.io/projected/3065e4b4-4493-41ce-b9d2-89315475f74f-kube-api-access-wpr8b\") pod \"openshift-config-operator-95bf4f4d-25jrp\" (UID: \"3065e4b4-4493-41ce-b9d2-89315475f74f\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-25jrp"
Mar 20 08:35:20.798055 master-0 kubenswrapper[7476]: I0320 08:35:20.797740 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5v7l\" (UniqueName: \"kubernetes.io/projected/20ff930f-ec0d-40ed-a879-1546691f685d-kube-api-access-d5v7l\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-kxxrs\" (UID: \"20ff930f-ec0d-40ed-a879-1546691f685d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-kxxrs"
Mar 20 08:35:20.816629 master-0 kubenswrapper[7476]: I0320 08:35:20.816563 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xv94\" (UniqueName: \"kubernetes.io/projected/09a5682c-4f13-4b8c-8179-3e6dfa8f98db-kube-api-access-8xv94\") pod \"service-ca-operator-b865698dc-zxwrd\" (UID: \"09a5682c-4f13-4b8c-8179-3e6dfa8f98db\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-zxwrd"
Mar 20 08:35:20.838250 master-0 kubenswrapper[7476]: I0320 08:35:20.838195 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqmzh\" (UniqueName: \"kubernetes.io/projected/fec3170d-3f3e-42f5-b20a-da53721c0dac-kube-api-access-tqmzh\") pod \"etcd-operator-8544cbcf9c-7x9vq\" (UID: \"fec3170d-3f3e-42f5-b20a-da53721c0dac\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-7x9vq"
Mar 20 08:35:20.856098 master-0 kubenswrapper[7476]: I0320 08:35:20.856056 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9j527\" (UniqueName: \"kubernetes.io/projected/9ce482dc-d0ac-40bc-9058-a1cfdc81575e-kube-api-access-9j527\") pod \"catalog-operator-68f85b4d6c-hdw98\" (UID: \"9ce482dc-d0ac-40bc-9058-a1cfdc81575e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hdw98"
Mar 20 08:35:20.879707 master-0 kubenswrapper[7476]: I0320 08:35:20.879653 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcgqr\" (UniqueName: \"kubernetes.io/projected/acbaba45-12d9-40b9-818c-4b091d7929b1-kube-api-access-kcgqr\") pod \"csi-snapshot-controller-operator-5f5d689c6b-b5lg6\" (UID: \"acbaba45-12d9-40b9-818c-4b091d7929b1\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-b5lg6"
Mar 20 08:35:20.899063 master-0 kubenswrapper[7476]: I0320 08:35:20.899024 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5kbh\" (UniqueName: \"kubernetes.io/projected/e9425526-9f51-4302-a19d-a8107f56c582-kube-api-access-z5kbh\") pod \"cluster-olm-operator-67dcd4998-gwtrl\" (UID: \"e9425526-9f51-4302-a19d-a8107f56c582\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-gwtrl"
Mar 20 08:35:20.918844 master-0 kubenswrapper[7476]: I0320 08:35:20.918776 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4sfm\" (UniqueName: \"kubernetes.io/projected/210dd7f0-d1c0-407a-b89b-f11ef605e5df-kube-api-access-w4sfm\") pod \"ovnkube-control-plane-57f769d897-crrdk\" (UID: \"210dd7f0-d1c0-407a-b89b-f11ef605e5df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-crrdk"
Mar 20 08:35:20.935704 master-0 kubenswrapper[7476]: I0320 08:35:20.935593 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/74bebf0b-6727-4959-8239-a9389e630524-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-vhrdf\" (UID: \"74bebf0b-6727-4959-8239-a9389e630524\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-vhrdf"
Mar 20 08:35:20.935704 master-0 kubenswrapper[7476]: I0320 08:35:20.935668 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6d26f719-43b9-4c1c-9a54-ff800177db68-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-zxgdk\" (UID: \"6d26f719-43b9-4c1c-9a54-ff800177db68\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zxgdk"
Mar 20 08:35:20.935704 master-0 kubenswrapper[7476]: I0320 08:35:20.935700 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/57189f7c-5987-457d-a299-0a6b9bcb3e24-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-cg8qr\" (UID: \"57189f7c-5987-457d-a299-0a6b9bcb3e24\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-cg8qr"
Mar 20 08:35:20.935985 master-0 kubenswrapper[7476]: E0320 08:35:20.935886 7476 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Mar 20 08:35:20.936032 master-0 kubenswrapper[7476]: E0320 08:35:20.935984 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d26f719-43b9-4c1c-9a54-ff800177db68-apiservice-cert podName:6d26f719-43b9-4c1c-9a54-ff800177db68 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:21.935961397 +0000 UTC m=+2.904729943 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/6d26f719-43b9-4c1c-9a54-ff800177db68-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-zxgdk" (UID: "6d26f719-43b9-4c1c-9a54-ff800177db68") : secret "performance-addon-operator-webhook-cert" not found Mar 20 08:35:20.936376 master-0 kubenswrapper[7476]: I0320 08:35:20.936148 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f202273a-b111-46ce-b404-7e481d2c7ff9-cert\") pod \"cluster-baremetal-operator-6f69995874-b25f2\" (UID: \"f202273a-b111-46ce-b404-7e481d2c7ff9\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-b25f2" Mar 20 08:35:20.936376 master-0 kubenswrapper[7476]: I0320 08:35:20.936211 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/5707066a-bd66-41bc-8cea-cff1630ab5ee-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-6vgt6\" (UID: \"5707066a-bd66-41bc-8cea-cff1630ab5ee\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-6vgt6" Mar 20 08:35:20.936376 master-0 kubenswrapper[7476]: I0320 08:35:20.936295 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/6d26f719-43b9-4c1c-9a54-ff800177db68-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-zxgdk\" (UID: \"6d26f719-43b9-4c1c-9a54-ff800177db68\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zxgdk" Mar 20 08:35:20.936376 master-0 kubenswrapper[7476]: E0320 08:35:20.936295 7476 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found Mar 20 08:35:20.936376 master-0 kubenswrapper[7476]: E0320 08:35:20.936350 7476 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f202273a-b111-46ce-b404-7e481d2c7ff9-cert podName:f202273a-b111-46ce-b404-7e481d2c7ff9 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:21.936339307 +0000 UTC m=+2.905107833 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f202273a-b111-46ce-b404-7e481d2c7ff9-cert") pod "cluster-baremetal-operator-6f69995874-b25f2" (UID: "f202273a-b111-46ce-b404-7e481d2c7ff9") : secret "cluster-baremetal-webhook-server-cert" not found Mar 20 08:35:20.936692 master-0 kubenswrapper[7476]: E0320 08:35:20.936436 7476 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 20 08:35:20.936692 master-0 kubenswrapper[7476]: E0320 08:35:20.936548 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5707066a-bd66-41bc-8cea-cff1630ab5ee-cluster-monitoring-operator-tls podName:5707066a-bd66-41bc-8cea-cff1630ab5ee nodeName:}" failed. No retries permitted until 2026-03-20 08:35:21.936521612 +0000 UTC m=+2.905290168 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/5707066a-bd66-41bc-8cea-cff1630ab5ee-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-6vgt6" (UID: "5707066a-bd66-41bc-8cea-cff1630ab5ee") : secret "cluster-monitoring-operator-tls" not found Mar 20 08:35:20.936692 master-0 kubenswrapper[7476]: E0320 08:35:20.936616 7476 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 20 08:35:20.936692 master-0 kubenswrapper[7476]: E0320 08:35:20.936661 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/57189f7c-5987-457d-a299-0a6b9bcb3e24-image-registry-operator-tls podName:57189f7c-5987-457d-a299-0a6b9bcb3e24 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:21.936644405 +0000 UTC m=+2.905412971 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/57189f7c-5987-457d-a299-0a6b9bcb3e24-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-cg8qr" (UID: "57189f7c-5987-457d-a299-0a6b9bcb3e24") : secret "image-registry-operator-tls" not found Mar 20 08:35:20.936842 master-0 kubenswrapper[7476]: E0320 08:35:20.936715 7476 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 20 08:35:20.936842 master-0 kubenswrapper[7476]: E0320 08:35:20.936749 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74bebf0b-6727-4959-8239-a9389e630524-webhook-certs podName:74bebf0b-6727-4959-8239-a9389e630524 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:21.936737307 +0000 UTC m=+2.905505863 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/74bebf0b-6727-4959-8239-a9389e630524-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-vhrdf" (UID: "74bebf0b-6727-4959-8239-a9389e630524") : secret "multus-admission-controller-secret" not found Mar 20 08:35:20.936842 master-0 kubenswrapper[7476]: I0320 08:35:20.936795 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9ce482dc-d0ac-40bc-9058-a1cfdc81575e-srv-cert\") pod \"catalog-operator-68f85b4d6c-hdw98\" (UID: \"9ce482dc-d0ac-40bc-9058-a1cfdc81575e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hdw98" Mar 20 08:35:20.936940 master-0 kubenswrapper[7476]: I0320 08:35:20.936869 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ff2dfe9d-2834-43cb-b093-0831b2b87131-metrics-tls\") pod \"dns-operator-9c5679d8f-xfns6\" (UID: \"ff2dfe9d-2834-43cb-b093-0831b2b87131\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-xfns6" Mar 20 08:35:20.936940 master-0 kubenswrapper[7476]: I0320 08:35:20.936919 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/23003a2f-2053-47cc-8133-23eb886d4da0-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-j84r8\" (UID: \"23003a2f-2053-47cc-8133-23eb886d4da0\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-j84r8" Mar 20 08:35:20.937011 master-0 kubenswrapper[7476]: I0320 08:35:20.936957 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/0e79950f-50a5-46ec-b836-7a35dcce2851-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-cgc9q\" (UID: \"0e79950f-50a5-46ec-b836-7a35dcce2851\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-cgc9q" Mar 20 08:35:20.937011 master-0 kubenswrapper[7476]: I0320 08:35:20.936996 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3776fdb6-25a1-4e3d-bdd1-437c69af3a55-serving-cert\") pod \"cluster-version-operator-56d8475767-jtqd4\" (UID: \"3776fdb6-25a1-4e3d-bdd1-437c69af3a55\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-jtqd4" Mar 20 08:35:20.937085 master-0 kubenswrapper[7476]: I0320 08:35:20.937035 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/00350ac7-b40a-4459-b94c-a37d7b613645-metrics-certs\") pod \"network-metrics-daemon-nfrth\" (UID: \"00350ac7-b40a-4459-b94c-a37d7b613645\") " pod="openshift-multus/network-metrics-daemon-nfrth" Mar 20 08:35:20.937085 master-0 kubenswrapper[7476]: I0320 08:35:20.937078 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7ab32efc-7cc5-4e36-9c1c-05efb19914e2-srv-cert\") pod \"olm-operator-5c9796789-t926t\" (UID: \"7ab32efc-7cc5-4e36-9c1c-05efb19914e2\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-t926t" Mar 20 08:35:20.937663 master-0 kubenswrapper[7476]: E0320 08:35:20.937203 7476 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 20 08:35:20.937663 master-0 kubenswrapper[7476]: E0320 08:35:20.937288 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23003a2f-2053-47cc-8133-23eb886d4da0-marketplace-operator-metrics podName:23003a2f-2053-47cc-8133-23eb886d4da0 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:21.937279031 +0000 UTC m=+2.906047557 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/23003a2f-2053-47cc-8133-23eb886d4da0-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-j84r8" (UID: "23003a2f-2053-47cc-8133-23eb886d4da0") : secret "marketplace-operator-metrics" not found Mar 20 08:35:20.937663 master-0 kubenswrapper[7476]: E0320 08:35:20.937290 7476 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Mar 20 08:35:20.937663 master-0 kubenswrapper[7476]: E0320 08:35:20.937335 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/00350ac7-b40a-4459-b94c-a37d7b613645-metrics-certs podName:00350ac7-b40a-4459-b94c-a37d7b613645 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:21.937321742 +0000 UTC m=+2.906090298 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/00350ac7-b40a-4459-b94c-a37d7b613645-metrics-certs") pod "network-metrics-daemon-nfrth" (UID: "00350ac7-b40a-4459-b94c-a37d7b613645") : secret "metrics-daemon-secret" not found Mar 20 08:35:20.937663 master-0 kubenswrapper[7476]: E0320 08:35:20.937341 7476 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 20 08:35:20.937663 master-0 kubenswrapper[7476]: E0320 08:35:20.937410 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d26f719-43b9-4c1c-9a54-ff800177db68-node-tuning-operator-tls podName:6d26f719-43b9-4c1c-9a54-ff800177db68 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:21.937402674 +0000 UTC m=+2.906171200 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/6d26f719-43b9-4c1c-9a54-ff800177db68-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-zxgdk" (UID: "6d26f719-43b9-4c1c-9a54-ff800177db68") : secret "node-tuning-operator-tls" not found Mar 20 08:35:20.937663 master-0 kubenswrapper[7476]: E0320 08:35:20.937414 7476 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 20 08:35:20.937663 master-0 kubenswrapper[7476]: I0320 08:35:20.937433 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/f202273a-b111-46ce-b404-7e481d2c7ff9-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-b25f2\" (UID: \"f202273a-b111-46ce-b404-7e481d2c7ff9\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-b25f2" Mar 20 08:35:20.937663 master-0 kubenswrapper[7476]: E0320 08:35:20.937452 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff2dfe9d-2834-43cb-b093-0831b2b87131-metrics-tls podName:ff2dfe9d-2834-43cb-b093-0831b2b87131 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:21.937439795 +0000 UTC m=+2.906208351 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/ff2dfe9d-2834-43cb-b093-0831b2b87131-metrics-tls") pod "dns-operator-9c5679d8f-xfns6" (UID: "ff2dfe9d-2834-43cb-b093-0831b2b87131") : secret "metrics-tls" not found Mar 20 08:35:20.937663 master-0 kubenswrapper[7476]: E0320 08:35:20.937481 7476 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found Mar 20 08:35:20.937663 master-0 kubenswrapper[7476]: E0320 08:35:20.937502 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f202273a-b111-46ce-b404-7e481d2c7ff9-cluster-baremetal-operator-tls podName:f202273a-b111-46ce-b404-7e481d2c7ff9 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:21.937494926 +0000 UTC m=+2.906263452 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/f202273a-b111-46ce-b404-7e481d2c7ff9-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-6f69995874-b25f2" (UID: "f202273a-b111-46ce-b404-7e481d2c7ff9") : secret "cluster-baremetal-operator-tls" not found Mar 20 08:35:20.937663 master-0 kubenswrapper[7476]: I0320 08:35:20.937516 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/22f85e98-eb36-46b2-ab5d-7c21e060cba5-metrics-tls\") pod \"ingress-operator-66b84d69b-dknxr\" (UID: \"22f85e98-eb36-46b2-ab5d-7c21e060cba5\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-dknxr" Mar 20 08:35:20.937663 master-0 kubenswrapper[7476]: E0320 08:35:20.937550 7476 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 20 08:35:20.937663 master-0 kubenswrapper[7476]: E0320 08:35:20.937587 7476 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/9ce482dc-d0ac-40bc-9058-a1cfdc81575e-srv-cert podName:9ce482dc-d0ac-40bc-9058-a1cfdc81575e nodeName:}" failed. No retries permitted until 2026-03-20 08:35:21.937572648 +0000 UTC m=+2.906341204 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/9ce482dc-d0ac-40bc-9058-a1cfdc81575e-srv-cert") pod "catalog-operator-68f85b4d6c-hdw98" (UID: "9ce482dc-d0ac-40bc-9058-a1cfdc81575e") : secret "catalog-operator-serving-cert" not found Mar 20 08:35:20.937663 master-0 kubenswrapper[7476]: E0320 08:35:20.937609 7476 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 20 08:35:20.937663 master-0 kubenswrapper[7476]: E0320 08:35:20.937634 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/22f85e98-eb36-46b2-ab5d-7c21e060cba5-metrics-tls podName:22f85e98-eb36-46b2-ab5d-7c21e060cba5 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:21.93762784 +0000 UTC m=+2.906396356 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/22f85e98-eb36-46b2-ab5d-7c21e060cba5-metrics-tls") pod "ingress-operator-66b84d69b-dknxr" (UID: "22f85e98-eb36-46b2-ab5d-7c21e060cba5") : secret "metrics-tls" not found Mar 20 08:35:20.938227 master-0 kubenswrapper[7476]: E0320 08:35:20.937659 7476 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 20 08:35:20.938227 master-0 kubenswrapper[7476]: I0320 08:35:20.938089 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71ca96e8-5108-455c-bb3c-17977d38e912-kube-api-access\") pod \"kube-controller-manager-operator-ff989d6cc-wbfrm\" (UID: \"71ca96e8-5108-455c-bb3c-17977d38e912\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-wbfrm" Mar 20 08:35:20.938227 master-0 kubenswrapper[7476]: E0320 08:35:20.937720 7476 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 20 08:35:20.938351 master-0 kubenswrapper[7476]: E0320 08:35:20.937773 7476 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 20 08:35:20.938351 master-0 kubenswrapper[7476]: E0320 08:35:20.938211 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0e79950f-50a5-46ec-b836-7a35dcce2851-package-server-manager-serving-cert podName:0e79950f-50a5-46ec-b836-7a35dcce2851 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:21.938199795 +0000 UTC m=+2.906968431 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/0e79950f-50a5-46ec-b836-7a35dcce2851-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-cgc9q" (UID: "0e79950f-50a5-46ec-b836-7a35dcce2851") : secret "package-server-manager-serving-cert" not found Mar 20 08:35:20.938351 master-0 kubenswrapper[7476]: E0320 08:35:20.938331 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7ab32efc-7cc5-4e36-9c1c-05efb19914e2-srv-cert podName:7ab32efc-7cc5-4e36-9c1c-05efb19914e2 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:21.938314268 +0000 UTC m=+2.907082834 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/7ab32efc-7cc5-4e36-9c1c-05efb19914e2-srv-cert") pod "olm-operator-5c9796789-t926t" (UID: "7ab32efc-7cc5-4e36-9c1c-05efb19914e2") : secret "olm-operator-serving-cert" not found Mar 20 08:35:20.938446 master-0 kubenswrapper[7476]: E0320 08:35:20.938360 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3776fdb6-25a1-4e3d-bdd1-437c69af3a55-serving-cert podName:3776fdb6-25a1-4e3d-bdd1-437c69af3a55 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:21.938349809 +0000 UTC m=+2.907118375 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/3776fdb6-25a1-4e3d-bdd1-437c69af3a55-serving-cert") pod "cluster-version-operator-56d8475767-jtqd4" (UID: "3776fdb6-25a1-4e3d-bdd1-437c69af3a55") : secret "cluster-version-operator-serving-cert" not found Mar 20 08:35:20.967014 master-0 kubenswrapper[7476]: I0320 08:35:20.966939 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dx99f\" (UniqueName: \"kubernetes.io/projected/b097596e-79e1-44d1-be8a-96340042a041-kube-api-access-dx99f\") pod \"iptables-alerter-9xlf2\" (UID: \"b097596e-79e1-44d1-be8a-96340042a041\") " pod="openshift-network-operator/iptables-alerter-9xlf2" Mar 20 08:35:20.980490 master-0 kubenswrapper[7476]: I0320 08:35:20.980439 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/65157a9b-3df7-4cc1-a85a-a5dfa59921ad-kube-api-access\") pod \"openshift-kube-scheduler-operator-dddff6458-vmwqt\" (UID: \"65157a9b-3df7-4cc1-a85a-a5dfa59921ad\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-vmwqt" Mar 20 08:35:20.997036 master-0 kubenswrapper[7476]: I0320 08:35:20.996992 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5r8zt\" (UniqueName: \"kubernetes.io/projected/57189f7c-5987-457d-a299-0a6b9bcb3e24-kube-api-access-5r8zt\") pod \"cluster-image-registry-operator-5549dc66cb-cg8qr\" (UID: \"57189f7c-5987-457d-a299-0a6b9bcb3e24\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-cg8qr" Mar 20 08:35:21.017074 master-0 kubenswrapper[7476]: I0320 08:35:21.017032 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/57189f7c-5987-457d-a299-0a6b9bcb3e24-bound-sa-token\") pod \"cluster-image-registry-operator-5549dc66cb-cg8qr\" (UID: \"57189f7c-5987-457d-a299-0a6b9bcb3e24\") 
" pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-cg8qr" Mar 20 08:35:21.037138 master-0 kubenswrapper[7476]: I0320 08:35:21.037103 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dkgv\" (UniqueName: \"kubernetes.io/projected/5707066a-bd66-41bc-8cea-cff1630ab5ee-kube-api-access-2dkgv\") pod \"cluster-monitoring-operator-58845fbb57-6vgt6\" (UID: \"5707066a-bd66-41bc-8cea-cff1630ab5ee\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-6vgt6" Mar 20 08:35:21.056069 master-0 kubenswrapper[7476]: I0320 08:35:21.056020 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f92mb\" (UniqueName: \"kubernetes.io/projected/74bebf0b-6727-4959-8239-a9389e630524-kube-api-access-f92mb\") pod \"multus-admission-controller-5dbbb8b86f-vhrdf\" (UID: \"74bebf0b-6727-4959-8239-a9389e630524\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-vhrdf" Mar 20 08:35:21.065632 master-0 kubenswrapper[7476]: I0320 08:35:21.065579 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:35:21.079770 master-0 kubenswrapper[7476]: I0320 08:35:21.079447 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b67hn\" (UniqueName: \"kubernetes.io/projected/00350ac7-b40a-4459-b94c-a37d7b613645-kube-api-access-b67hn\") pod \"network-metrics-daemon-nfrth\" (UID: \"00350ac7-b40a-4459-b94c-a37d7b613645\") " pod="openshift-multus/network-metrics-daemon-nfrth" Mar 20 08:35:21.093617 master-0 kubenswrapper[7476]: I0320 08:35:21.093555 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:35:21.097824 master-0 kubenswrapper[7476]: I0320 08:35:21.097772 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsj2w\" (UniqueName: 
\"kubernetes.io/projected/ff2dfe9d-2834-43cb-b093-0831b2b87131-kube-api-access-zsj2w\") pod \"dns-operator-9c5679d8f-xfns6\" (UID: \"ff2dfe9d-2834-43cb-b093-0831b2b87131\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-xfns6" Mar 20 08:35:21.116215 master-0 kubenswrapper[7476]: I0320 08:35:21.116152 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qqcw\" (UniqueName: \"kubernetes.io/projected/22f85e98-eb36-46b2-ab5d-7c21e060cba5-kube-api-access-8qqcw\") pod \"ingress-operator-66b84d69b-dknxr\" (UID: \"22f85e98-eb36-46b2-ab5d-7c21e060cba5\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-dknxr" Mar 20 08:35:21.136419 master-0 kubenswrapper[7476]: I0320 08:35:21.136365 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnk9k\" (UniqueName: \"kubernetes.io/projected/8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072-kube-api-access-hnk9k\") pod \"authentication-operator-5885bfd7f4-tdpfq\" (UID: \"8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-tdpfq" Mar 20 08:35:21.160252 master-0 kubenswrapper[7476]: I0320 08:35:21.160185 7476 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Mar 20 08:35:21.165559 master-0 kubenswrapper[7476]: I0320 08:35:21.165519 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nf5kc\" (UniqueName: \"kubernetes.io/projected/ca6e644f-c53b-41dd-a16f-9fb9997533dd-kube-api-access-nf5kc\") pod \"network-check-target-j9jjm\" (UID: \"ca6e644f-c53b-41dd-a16f-9fb9997533dd\") " pod="openshift-network-diagnostics/network-check-target-j9jjm" Mar 20 08:35:21.296683 master-0 kubenswrapper[7476]: E0320 08:35:21.296432 7476 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c6a4383333a1fd6d05c3f60ec793913f7937ee3d77f002d85e6c61e20507bf55" Mar 20 08:35:21.297054 master-0 kubenswrapper[7476]: E0320 08:35:21.296802 7476 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c6a4383333a1fd6d05c3f60ec793913f7937ee3d77f002d85e6c61e20507bf55,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dx99f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-9xlf2_openshift-network-operator(b097596e-79e1-44d1-be8a-96340042a041): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 20 08:35:21.298070 master-0 kubenswrapper[7476]: E0320 08:35:21.298024 7476 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-network-operator/iptables-alerter-9xlf2" podUID="b097596e-79e1-44d1-be8a-96340042a041" Mar 20 08:35:21.399548 master-0 kubenswrapper[7476]: I0320 08:35:21.399488 7476 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-j9jjm" Mar 20 08:35:21.427722 master-0 kubenswrapper[7476]: I0320 08:35:21.427657 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:35:21.466445 master-0 kubenswrapper[7476]: I0320 08:35:21.466380 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:35:21.941942 master-0 kubenswrapper[7476]: E0320 08:35:21.941835 7476 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ef199844317b7b012879ed8d29f9b6bc37fad8a6fdb336103cbd5cabc74c4302" Mar 20 08:35:21.942220 master-0 kubenswrapper[7476]: E0320 08:35:21.942112 7476 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-scheduler-operator-container,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ef199844317b7b012879ed8d29f9b6bc37fad8a6fdb336103cbd5cabc74c4302,Command:[cluster-kube-scheduler-operator 
operator],Args:[--config=/var/run/configmaps/config/config.yaml],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ef199844317b7b012879ed8d29f9b6bc37fad8a6fdb336103cbd5cabc74c4302,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.35,ValueFrom:nil,},EnvVar{Name:OPERAND_IMAGE_VERSION,Value:1.31.14,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openshift-kube-scheduler-operator-dddff6458-vmwqt_openshift-kube-scheduler-operator(65157a9b-3df7-4cc1-a85a-a5dfa59921ad): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Mar 20 08:35:21.943406 master-0 kubenswrapper[7476]: E0320 08:35:21.943355 7476 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler-operator-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-vmwqt" podUID="65157a9b-3df7-4cc1-a85a-a5dfa59921ad"
Mar 20 08:35:21.948719 master-0 kubenswrapper[7476]: I0320 08:35:21.948641 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f202273a-b111-46ce-b404-7e481d2c7ff9-cert\") pod \"cluster-baremetal-operator-6f69995874-b25f2\" (UID: \"f202273a-b111-46ce-b404-7e481d2c7ff9\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-b25f2"
Mar 20 08:35:21.948859 master-0 kubenswrapper[7476]: I0320 08:35:21.948743 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/5707066a-bd66-41bc-8cea-cff1630ab5ee-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-6vgt6\" (UID: \"5707066a-bd66-41bc-8cea-cff1630ab5ee\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-6vgt6"
Mar 20 08:35:21.948859 master-0 kubenswrapper[7476]: I0320 08:35:21.948792 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/6d26f719-43b9-4c1c-9a54-ff800177db68-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-zxgdk\" (UID: \"6d26f719-43b9-4c1c-9a54-ff800177db68\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zxgdk"
Mar 20 08:35:21.948859 master-0 kubenswrapper[7476]: I0320 08:35:21.948844 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9ce482dc-d0ac-40bc-9058-a1cfdc81575e-srv-cert\") pod \"catalog-operator-68f85b4d6c-hdw98\" (UID: \"9ce482dc-d0ac-40bc-9058-a1cfdc81575e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hdw98"
Mar 20 08:35:21.949321 master-0 kubenswrapper[7476]: I0320 08:35:21.948884 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ff2dfe9d-2834-43cb-b093-0831b2b87131-metrics-tls\") pod \"dns-operator-9c5679d8f-xfns6\" (UID: \"ff2dfe9d-2834-43cb-b093-0831b2b87131\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-xfns6"
Mar 20 08:35:21.949321 master-0 kubenswrapper[7476]: I0320 08:35:21.949092 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/23003a2f-2053-47cc-8133-23eb886d4da0-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-j84r8\" (UID: \"23003a2f-2053-47cc-8133-23eb886d4da0\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-j84r8"
Mar 20 08:35:21.949321 master-0 kubenswrapper[7476]: I0320 08:35:21.949201 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/0e79950f-50a5-46ec-b836-7a35dcce2851-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-cgc9q\" (UID: \"0e79950f-50a5-46ec-b836-7a35dcce2851\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-cgc9q"
Mar 20 08:35:21.949635 master-0 kubenswrapper[7476]: I0320 08:35:21.949329 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3776fdb6-25a1-4e3d-bdd1-437c69af3a55-serving-cert\") pod \"cluster-version-operator-56d8475767-jtqd4\" (UID: \"3776fdb6-25a1-4e3d-bdd1-437c69af3a55\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-jtqd4"
Mar 20 08:35:21.949635 master-0 kubenswrapper[7476]: I0320 08:35:21.949364 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/00350ac7-b40a-4459-b94c-a37d7b613645-metrics-certs\") pod \"network-metrics-daemon-nfrth\" (UID: \"00350ac7-b40a-4459-b94c-a37d7b613645\") " pod="openshift-multus/network-metrics-daemon-nfrth"
Mar 20 08:35:21.949635 master-0 kubenswrapper[7476]: I0320 08:35:21.949399 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7ab32efc-7cc5-4e36-9c1c-05efb19914e2-srv-cert\") pod \"olm-operator-5c9796789-t926t\" (UID: \"7ab32efc-7cc5-4e36-9c1c-05efb19914e2\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-t926t"
Mar 20 08:35:21.949635 master-0 kubenswrapper[7476]: I0320 08:35:21.949439 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/f202273a-b111-46ce-b404-7e481d2c7ff9-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-b25f2\" (UID: \"f202273a-b111-46ce-b404-7e481d2c7ff9\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-b25f2"
Mar 20 08:35:21.949635 master-0 kubenswrapper[7476]: I0320 08:35:21.949474 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/22f85e98-eb36-46b2-ab5d-7c21e060cba5-metrics-tls\") pod \"ingress-operator-66b84d69b-dknxr\" (UID: \"22f85e98-eb36-46b2-ab5d-7c21e060cba5\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-dknxr"
Mar 20 08:35:21.949635 master-0 kubenswrapper[7476]: I0320 08:35:21.949511 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/74bebf0b-6727-4959-8239-a9389e630524-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-vhrdf\" (UID: \"74bebf0b-6727-4959-8239-a9389e630524\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-vhrdf"
Mar 20 08:35:21.949635 master-0 kubenswrapper[7476]: I0320 08:35:21.949557 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6d26f719-43b9-4c1c-9a54-ff800177db68-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-zxgdk\" (UID: \"6d26f719-43b9-4c1c-9a54-ff800177db68\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zxgdk"
Mar 20 08:35:21.949635 master-0 kubenswrapper[7476]: I0320 08:35:21.949592 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/57189f7c-5987-457d-a299-0a6b9bcb3e24-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-cg8qr\" (UID: \"57189f7c-5987-457d-a299-0a6b9bcb3e24\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-cg8qr"
Mar 20 08:35:21.950535 master-0 kubenswrapper[7476]: E0320 08:35:21.949761 7476 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Mar 20 08:35:21.950535 master-0 kubenswrapper[7476]: E0320 08:35:21.949829 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/57189f7c-5987-457d-a299-0a6b9bcb3e24-image-registry-operator-tls podName:57189f7c-5987-457d-a299-0a6b9bcb3e24 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:23.94980751 +0000 UTC m=+4.918576076 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/57189f7c-5987-457d-a299-0a6b9bcb3e24-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-cg8qr" (UID: "57189f7c-5987-457d-a299-0a6b9bcb3e24") : secret "image-registry-operator-tls" not found
Mar 20 08:35:21.950535 master-0 kubenswrapper[7476]: E0320 08:35:21.949904 7476 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found
Mar 20 08:35:21.950535 master-0 kubenswrapper[7476]: E0320 08:35:21.949941 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f202273a-b111-46ce-b404-7e481d2c7ff9-cert podName:f202273a-b111-46ce-b404-7e481d2c7ff9 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:23.949927423 +0000 UTC m=+4.918695989 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f202273a-b111-46ce-b404-7e481d2c7ff9-cert") pod "cluster-baremetal-operator-6f69995874-b25f2" (UID: "f202273a-b111-46ce-b404-7e481d2c7ff9") : secret "cluster-baremetal-webhook-server-cert" not found
Mar 20 08:35:21.950535 master-0 kubenswrapper[7476]: E0320 08:35:21.950000 7476 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Mar 20 08:35:21.950535 master-0 kubenswrapper[7476]: I0320 08:35:21.950003 7476 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 20 08:35:21.950535 master-0 kubenswrapper[7476]: E0320 08:35:21.950113 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5707066a-bd66-41bc-8cea-cff1630ab5ee-cluster-monitoring-operator-tls podName:5707066a-bd66-41bc-8cea-cff1630ab5ee nodeName:}" failed. No retries permitted until 2026-03-20 08:35:23.950091367 +0000 UTC m=+4.918859933 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/5707066a-bd66-41bc-8cea-cff1630ab5ee-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-6vgt6" (UID: "5707066a-bd66-41bc-8cea-cff1630ab5ee") : secret "cluster-monitoring-operator-tls" not found
Mar 20 08:35:21.950535 master-0 kubenswrapper[7476]: E0320 08:35:21.950186 7476 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 20 08:35:21.950535 master-0 kubenswrapper[7476]: E0320 08:35:21.950256 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3776fdb6-25a1-4e3d-bdd1-437c69af3a55-serving-cert podName:3776fdb6-25a1-4e3d-bdd1-437c69af3a55 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:23.950233061 +0000 UTC m=+4.919001657 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/3776fdb6-25a1-4e3d-bdd1-437c69af3a55-serving-cert") pod "cluster-version-operator-56d8475767-jtqd4" (UID: "3776fdb6-25a1-4e3d-bdd1-437c69af3a55") : secret "cluster-version-operator-serving-cert" not found
Mar 20 08:35:21.950535 master-0 kubenswrapper[7476]: E0320 08:35:21.950300 7476 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Mar 20 08:35:21.950535 master-0 kubenswrapper[7476]: E0320 08:35:21.950343 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/22f85e98-eb36-46b2-ab5d-7c21e060cba5-metrics-tls podName:22f85e98-eb36-46b2-ab5d-7c21e060cba5 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:23.950330943 +0000 UTC m=+4.919099469 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/22f85e98-eb36-46b2-ab5d-7c21e060cba5-metrics-tls") pod "ingress-operator-66b84d69b-dknxr" (UID: "22f85e98-eb36-46b2-ab5d-7c21e060cba5") : secret "metrics-tls" not found
Mar 20 08:35:21.950535 master-0 kubenswrapper[7476]: E0320 08:35:21.950350 7476 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found
Mar 20 08:35:21.950535 master-0 kubenswrapper[7476]: E0320 08:35:21.950356 7476 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Mar 20 08:35:21.950535 master-0 kubenswrapper[7476]: E0320 08:35:21.950383 7476 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Mar 20 08:35:21.950535 master-0 kubenswrapper[7476]: E0320 08:35:21.950400 7476 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Mar 20 08:35:21.950535 master-0 kubenswrapper[7476]: E0320 08:35:21.950400 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0e79950f-50a5-46ec-b836-7a35dcce2851-package-server-manager-serving-cert podName:0e79950f-50a5-46ec-b836-7a35dcce2851 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:23.950390335 +0000 UTC m=+4.919158931 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/0e79950f-50a5-46ec-b836-7a35dcce2851-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-cgc9q" (UID: "0e79950f-50a5-46ec-b836-7a35dcce2851") : secret "package-server-manager-serving-cert" not found
Mar 20 08:35:21.950535 master-0 kubenswrapper[7476]: E0320 08:35:21.950431 7476 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found
Mar 20 08:35:21.950535 master-0 kubenswrapper[7476]: E0320 08:35:21.950362 7476 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found
Mar 20 08:35:21.950535 master-0 kubenswrapper[7476]: E0320 08:35:21.950451 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7ab32efc-7cc5-4e36-9c1c-05efb19914e2-srv-cert podName:7ab32efc-7cc5-4e36-9c1c-05efb19914e2 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:23.950438616 +0000 UTC m=+4.919207232 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/7ab32efc-7cc5-4e36-9c1c-05efb19914e2-srv-cert") pod "olm-operator-5c9796789-t926t" (UID: "7ab32efc-7cc5-4e36-9c1c-05efb19914e2") : secret "olm-operator-serving-cert" not found
Mar 20 08:35:21.950535 master-0 kubenswrapper[7476]: E0320 08:35:21.950471 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23003a2f-2053-47cc-8133-23eb886d4da0-marketplace-operator-metrics podName:23003a2f-2053-47cc-8133-23eb886d4da0 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:23.950463727 +0000 UTC m=+4.919232353 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/23003a2f-2053-47cc-8133-23eb886d4da0-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-j84r8" (UID: "23003a2f-2053-47cc-8133-23eb886d4da0") : secret "marketplace-operator-metrics" not found
Mar 20 08:35:21.950535 master-0 kubenswrapper[7476]: E0320 08:35:21.950483 7476 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Mar 20 08:35:21.950535 master-0 kubenswrapper[7476]: E0320 08:35:21.950496 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/00350ac7-b40a-4459-b94c-a37d7b613645-metrics-certs podName:00350ac7-b40a-4459-b94c-a37d7b613645 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:23.950485127 +0000 UTC m=+4.919253753 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/00350ac7-b40a-4459-b94c-a37d7b613645-metrics-certs") pod "network-metrics-daemon-nfrth" (UID: "00350ac7-b40a-4459-b94c-a37d7b613645") : secret "metrics-daemon-secret" not found
Mar 20 08:35:21.950535 master-0 kubenswrapper[7476]: E0320 08:35:21.950510 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f202273a-b111-46ce-b404-7e481d2c7ff9-cluster-baremetal-operator-tls podName:f202273a-b111-46ce-b404-7e481d2c7ff9 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:23.950503758 +0000 UTC m=+4.919272384 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/f202273a-b111-46ce-b404-7e481d2c7ff9-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-6f69995874-b25f2" (UID: "f202273a-b111-46ce-b404-7e481d2c7ff9") : secret "cluster-baremetal-operator-tls" not found
Mar 20 08:35:21.950535 master-0 kubenswrapper[7476]: E0320 08:35:21.950516 7476 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Mar 20 08:35:21.950535 master-0 kubenswrapper[7476]: E0320 08:35:21.950525 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ce482dc-d0ac-40bc-9058-a1cfdc81575e-srv-cert podName:9ce482dc-d0ac-40bc-9058-a1cfdc81575e nodeName:}" failed. No retries permitted until 2026-03-20 08:35:23.950517068 +0000 UTC m=+4.919285694 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/9ce482dc-d0ac-40bc-9058-a1cfdc81575e-srv-cert") pod "catalog-operator-68f85b4d6c-hdw98" (UID: "9ce482dc-d0ac-40bc-9058-a1cfdc81575e") : secret "catalog-operator-serving-cert" not found
Mar 20 08:35:21.950535 master-0 kubenswrapper[7476]: E0320 08:35:21.950529 7476 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Mar 20 08:35:21.950535 master-0 kubenswrapper[7476]: E0320 08:35:21.950540 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74bebf0b-6727-4959-8239-a9389e630524-webhook-certs podName:74bebf0b-6727-4959-8239-a9389e630524 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:23.950533278 +0000 UTC m=+4.919301904 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/74bebf0b-6727-4959-8239-a9389e630524-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-vhrdf" (UID: "74bebf0b-6727-4959-8239-a9389e630524") : secret "multus-admission-controller-secret" not found
Mar 20 08:35:21.950535 master-0 kubenswrapper[7476]: E0320 08:35:21.950554 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d26f719-43b9-4c1c-9a54-ff800177db68-apiservice-cert podName:6d26f719-43b9-4c1c-9a54-ff800177db68 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:23.950546819 +0000 UTC m=+4.919315345 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/6d26f719-43b9-4c1c-9a54-ff800177db68-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-zxgdk" (UID: "6d26f719-43b9-4c1c-9a54-ff800177db68") : secret "performance-addon-operator-webhook-cert" not found
Mar 20 08:35:21.950535 master-0 kubenswrapper[7476]: E0320 08:35:21.950567 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d26f719-43b9-4c1c-9a54-ff800177db68-node-tuning-operator-tls podName:6d26f719-43b9-4c1c-9a54-ff800177db68 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:23.950560879 +0000 UTC m=+4.919329415 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/6d26f719-43b9-4c1c-9a54-ff800177db68-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-zxgdk" (UID: "6d26f719-43b9-4c1c-9a54-ff800177db68") : secret "node-tuning-operator-tls" not found
Mar 20 08:35:21.950535 master-0 kubenswrapper[7476]: E0320 08:35:21.950584 7476 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Mar 20 08:35:21.952643 master-0 kubenswrapper[7476]: E0320 08:35:21.950610 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff2dfe9d-2834-43cb-b093-0831b2b87131-metrics-tls podName:ff2dfe9d-2834-43cb-b093-0831b2b87131 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:23.95060163 +0000 UTC m=+4.919370156 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/ff2dfe9d-2834-43cb-b093-0831b2b87131-metrics-tls") pod "dns-operator-9c5679d8f-xfns6" (UID: "ff2dfe9d-2834-43cb-b093-0831b2b87131") : secret "metrics-tls" not found
Mar 20 08:35:21.962715 master-0 kubenswrapper[7476]: I0320 08:35:21.955732 7476 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 20 08:35:22.030463 master-0 kubenswrapper[7476]: I0320 08:35:22.030041 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 20 08:35:22.196331 master-0 kubenswrapper[7476]: I0320 08:35:22.195296 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-j9jjm"]
Mar 20 08:35:22.313070 master-0 kubenswrapper[7476]: I0320 08:35:22.313012 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-j9jjm" event={"ID":"ca6e644f-c53b-41dd-a16f-9fb9997533dd","Type":"ContainerStarted","Data":"8278eeebf68b018edbef1798293f552dd9859c6fa057a3f48528a25426e7abf3"}
Mar 20 08:35:22.315824 master-0 kubenswrapper[7476]: I0320 08:35:22.315796 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-b5lg6" event={"ID":"acbaba45-12d9-40b9-818c-4b091d7929b1","Type":"ContainerStarted","Data":"6a9d899a8eb10974cc6c4342f48d72d6fc952b94defbc645eeeae9b0a3d84f6a"}
Mar 20 08:35:22.344001 master-0 kubenswrapper[7476]: I0320 08:35:22.340653 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-tdpfq" event={"ID":"8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072","Type":"ContainerStarted","Data":"d29e51560cf0f82adb12fe4ccbcc9c856b09e06e9a9c7dd6333b272f62625fb3"}
Mar 20 08:35:22.344001 master-0 kubenswrapper[7476]: I0320 08:35:22.340952 7476 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 20 08:35:23.328828 master-0 kubenswrapper[7476]: I0320 08:35:23.328511 7476 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-gng67"]
Mar 20 08:35:23.329513 master-0 kubenswrapper[7476]: E0320 08:35:23.329250 7476 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a25b643-c08d-462f-80f4-8a4feb1e26e8" containerName="assisted-installer-controller"
Mar 20 08:35:23.329513 master-0 kubenswrapper[7476]: I0320 08:35:23.329292 7476 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a25b643-c08d-462f-80f4-8a4feb1e26e8" containerName="assisted-installer-controller"
Mar 20 08:35:23.329513 master-0 kubenswrapper[7476]: E0320 08:35:23.329321 7476 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31e4700c-9389-427e-95ef-187f80c9e607" containerName="prober"
Mar 20 08:35:23.333003 master-0 kubenswrapper[7476]: I0320 08:35:23.332928 7476 state_mem.go:107] "Deleted CPUSet assignment" podUID="31e4700c-9389-427e-95ef-187f80c9e607" containerName="prober"
Mar 20 08:35:23.333229 master-0 kubenswrapper[7476]: I0320 08:35:23.333209 7476 memory_manager.go:354] "RemoveStaleState removing state" podUID="31e4700c-9389-427e-95ef-187f80c9e607" containerName="prober"
Mar 20 08:35:23.333293 master-0 kubenswrapper[7476]: I0320 08:35:23.333232 7476 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a25b643-c08d-462f-80f4-8a4feb1e26e8" containerName="assisted-installer-controller"
Mar 20 08:35:23.333705 master-0 kubenswrapper[7476]: I0320 08:35:23.333681 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-gng67"
Mar 20 08:35:23.341946 master-0 kubenswrapper[7476]: I0320 08:35:23.341653 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-gng67"]
Mar 20 08:35:23.344084 master-0 kubenswrapper[7476]: I0320 08:35:23.344040 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-b865698dc-zxwrd" event={"ID":"09a5682c-4f13-4b8c-8179-3e6dfa8f98db","Type":"ContainerStarted","Data":"38ba09231d63afd93a0205a5845a80e4d47fa8290768d886cf1c7ea448f682d8"}
Mar 20 08:35:23.346395 master-0 kubenswrapper[7476]: I0320 08:35:23.346326 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-kxxrs" event={"ID":"20ff930f-ec0d-40ed-a879-1546691f685d","Type":"ContainerStarted","Data":"d20a1459d97b6e06a1f2acdb938648d68b1fc12871ed4ca115c971b404c404f0"}
Mar 20 08:35:23.348667 master-0 kubenswrapper[7476]: I0320 08:35:23.348629 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-wbfrm" event={"ID":"71ca96e8-5108-455c-bb3c-17977d38e912","Type":"ContainerStarted","Data":"546f50582d27b9704d91a180b620a54d25d194d6d958c834e126f15276d2a186"}
Mar 20 08:35:23.354024 master-0 kubenswrapper[7476]: I0320 08:35:23.353967 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-j9jjm" event={"ID":"ca6e644f-c53b-41dd-a16f-9fb9997533dd","Type":"ContainerStarted","Data":"5c9558f1b9a116ee4941ce1e0ca288d98a890cf0f944820cb48b49066ed51f6e"}
Mar 20 08:35:23.358134 master-0 kubenswrapper[7476]: I0320 08:35:23.358067 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-w8c24" event={"ID":"61ab4d32-c732-4be5-aa85-a2e1dd21cb60","Type":"ContainerStarted","Data":"254f8acc157dece685517f93e40a5d981d3cb093e1a077345ec886e180445eaa"}
Mar 20 08:35:23.359973 master-0 kubenswrapper[7476]: I0320 08:35:23.359926 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-7x9vq" event={"ID":"fec3170d-3f3e-42f5-b20a-da53721c0dac","Type":"ContainerStarted","Data":"606e62ca34e3d9e1001d8f531baa40a69abd238341d65870685ec9240a1791b0"}
Mar 20 08:35:23.364974 master-0 kubenswrapper[7476]: I0320 08:35:23.364924 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-ntdqc" event={"ID":"2ef691ec-d1f0-4c97-97e4-4aa7a6c0a86c","Type":"ContainerStarted","Data":"3fbcbabe96d1d538208df7fe6740297e7b936fd21409b810c6def759b3cb8301"}
Mar 20 08:35:23.380367 master-0 kubenswrapper[7476]: I0320 08:35:23.379760 7476 generic.go:334] "Generic (PLEG): container finished" podID="3065e4b4-4493-41ce-b9d2-89315475f74f" containerID="79d1a04f16780a30204d3fb5aa6261f513e7c954544e8ecbd91d389cc77dbe03" exitCode=0
Mar 20 08:35:23.380367 master-0 kubenswrapper[7476]: I0320 08:35:23.379888 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-25jrp" event={"ID":"3065e4b4-4493-41ce-b9d2-89315475f74f","Type":"ContainerDied","Data":"79d1a04f16780a30204d3fb5aa6261f513e7c954544e8ecbd91d389cc77dbe03"}
Mar 20 08:35:23.391525 master-0 kubenswrapper[7476]: I0320 08:35:23.391471 7476 generic.go:334] "Generic (PLEG): container finished" podID="e9425526-9f51-4302-a19d-a8107f56c582" containerID="903bd12c687f6625987bd7d1e46b200fb44d1a9e193c70ad2441cab58febeed2" exitCode=0
Mar 20 08:35:23.391712 master-0 kubenswrapper[7476]: I0320 08:35:23.391645 7476 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 20 08:35:23.391744 master-0 kubenswrapper[7476]: I0320 08:35:23.391705 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-gwtrl" event={"ID":"e9425526-9f51-4302-a19d-a8107f56c582","Type":"ContainerDied","Data":"903bd12c687f6625987bd7d1e46b200fb44d1a9e193c70ad2441cab58febeed2"}
Mar 20 08:35:23.470863 master-0 kubenswrapper[7476]: I0320 08:35:23.468832 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqgkl\" (UniqueName: \"kubernetes.io/projected/a2a3df6e-e327-4e97-b8f0-f2d6cdd1e5f9-kube-api-access-rqgkl\") pod \"csi-snapshot-controller-64854d9cff-gng67\" (UID: \"a2a3df6e-e327-4e97-b8f0-f2d6cdd1e5f9\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-gng67"
Mar 20 08:35:23.571076 master-0 kubenswrapper[7476]: I0320 08:35:23.570618 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqgkl\" (UniqueName: \"kubernetes.io/projected/a2a3df6e-e327-4e97-b8f0-f2d6cdd1e5f9-kube-api-access-rqgkl\") pod \"csi-snapshot-controller-64854d9cff-gng67\" (UID: \"a2a3df6e-e327-4e97-b8f0-f2d6cdd1e5f9\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-gng67"
Mar 20 08:35:23.616839 master-0 kubenswrapper[7476]: I0320 08:35:23.616785 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqgkl\" (UniqueName: \"kubernetes.io/projected/a2a3df6e-e327-4e97-b8f0-f2d6cdd1e5f9-kube-api-access-rqgkl\") pod \"csi-snapshot-controller-64854d9cff-gng67\" (UID: \"a2a3df6e-e327-4e97-b8f0-f2d6cdd1e5f9\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-gng67"
Mar 20 08:35:23.653291 master-0 kubenswrapper[7476]: I0320 08:35:23.653213 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-gng67"
Mar 20 08:35:23.976587 master-0 kubenswrapper[7476]: I0320 08:35:23.975767 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/f202273a-b111-46ce-b404-7e481d2c7ff9-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-b25f2\" (UID: \"f202273a-b111-46ce-b404-7e481d2c7ff9\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-b25f2"
Mar 20 08:35:23.976587 master-0 kubenswrapper[7476]: I0320 08:35:23.975822 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/22f85e98-eb36-46b2-ab5d-7c21e060cba5-metrics-tls\") pod \"ingress-operator-66b84d69b-dknxr\" (UID: \"22f85e98-eb36-46b2-ab5d-7c21e060cba5\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-dknxr"
Mar 20 08:35:23.976587 master-0 kubenswrapper[7476]: I0320 08:35:23.975843 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/74bebf0b-6727-4959-8239-a9389e630524-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-vhrdf\" (UID: \"74bebf0b-6727-4959-8239-a9389e630524\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-vhrdf"
Mar 20 08:35:23.976587 master-0 kubenswrapper[7476]: I0320 08:35:23.975865 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6d26f719-43b9-4c1c-9a54-ff800177db68-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-zxgdk\" (UID: \"6d26f719-43b9-4c1c-9a54-ff800177db68\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zxgdk"
Mar 20 08:35:23.976587 master-0 kubenswrapper[7476]: I0320 08:35:23.975905 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/57189f7c-5987-457d-a299-0a6b9bcb3e24-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-cg8qr\" (UID: \"57189f7c-5987-457d-a299-0a6b9bcb3e24\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-cg8qr"
Mar 20 08:35:23.976587 master-0 kubenswrapper[7476]: I0320 08:35:23.975924 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f202273a-b111-46ce-b404-7e481d2c7ff9-cert\") pod \"cluster-baremetal-operator-6f69995874-b25f2\" (UID: \"f202273a-b111-46ce-b404-7e481d2c7ff9\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-b25f2"
Mar 20 08:35:23.976587 master-0 kubenswrapper[7476]: I0320 08:35:23.975943 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/5707066a-bd66-41bc-8cea-cff1630ab5ee-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-6vgt6\" (UID: \"5707066a-bd66-41bc-8cea-cff1630ab5ee\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-6vgt6"
Mar 20 08:35:23.976587 master-0 kubenswrapper[7476]: I0320 08:35:23.975963 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/6d26f719-43b9-4c1c-9a54-ff800177db68-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-zxgdk\" (UID: \"6d26f719-43b9-4c1c-9a54-ff800177db68\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zxgdk"
Mar 20 08:35:23.976587 master-0 kubenswrapper[7476]: I0320 08:35:23.975980 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9ce482dc-d0ac-40bc-9058-a1cfdc81575e-srv-cert\") pod \"catalog-operator-68f85b4d6c-hdw98\" (UID: \"9ce482dc-d0ac-40bc-9058-a1cfdc81575e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hdw98"
Mar 20 08:35:23.976587 master-0 kubenswrapper[7476]: I0320 08:35:23.975998 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ff2dfe9d-2834-43cb-b093-0831b2b87131-metrics-tls\") pod \"dns-operator-9c5679d8f-xfns6\" (UID: \"ff2dfe9d-2834-43cb-b093-0831b2b87131\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-xfns6"
Mar 20 08:35:23.976587 master-0 kubenswrapper[7476]: I0320 08:35:23.976017 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/23003a2f-2053-47cc-8133-23eb886d4da0-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-j84r8\" (UID: \"23003a2f-2053-47cc-8133-23eb886d4da0\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-j84r8"
Mar 20 08:35:23.976587 master-0 kubenswrapper[7476]: I0320 08:35:23.976035 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/0e79950f-50a5-46ec-b836-7a35dcce2851-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-cgc9q\" (UID: \"0e79950f-50a5-46ec-b836-7a35dcce2851\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-cgc9q"
Mar 20 08:35:23.976587 master-0 kubenswrapper[7476]: I0320 08:35:23.976061 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3776fdb6-25a1-4e3d-bdd1-437c69af3a55-serving-cert\") pod \"cluster-version-operator-56d8475767-jtqd4\" (UID: \"3776fdb6-25a1-4e3d-bdd1-437c69af3a55\") " 
pod="openshift-cluster-version/cluster-version-operator-56d8475767-jtqd4" Mar 20 08:35:23.976587 master-0 kubenswrapper[7476]: I0320 08:35:23.976084 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/00350ac7-b40a-4459-b94c-a37d7b613645-metrics-certs\") pod \"network-metrics-daemon-nfrth\" (UID: \"00350ac7-b40a-4459-b94c-a37d7b613645\") " pod="openshift-multus/network-metrics-daemon-nfrth" Mar 20 08:35:23.976587 master-0 kubenswrapper[7476]: I0320 08:35:23.976101 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7ab32efc-7cc5-4e36-9c1c-05efb19914e2-srv-cert\") pod \"olm-operator-5c9796789-t926t\" (UID: \"7ab32efc-7cc5-4e36-9c1c-05efb19914e2\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-t926t" Mar 20 08:35:23.976587 master-0 kubenswrapper[7476]: E0320 08:35:23.976198 7476 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 20 08:35:23.976587 master-0 kubenswrapper[7476]: E0320 08:35:23.976245 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7ab32efc-7cc5-4e36-9c1c-05efb19914e2-srv-cert podName:7ab32efc-7cc5-4e36-9c1c-05efb19914e2 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:27.976231201 +0000 UTC m=+8.944999727 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/7ab32efc-7cc5-4e36-9c1c-05efb19914e2-srv-cert") pod "olm-operator-5c9796789-t926t" (UID: "7ab32efc-7cc5-4e36-9c1c-05efb19914e2") : secret "olm-operator-serving-cert" not found Mar 20 08:35:23.977756 master-0 kubenswrapper[7476]: E0320 08:35:23.976773 7476 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 20 08:35:23.977756 master-0 kubenswrapper[7476]: E0320 08:35:23.976899 7476 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 20 08:35:23.977756 master-0 kubenswrapper[7476]: E0320 08:35:23.976910 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d26f719-43b9-4c1c-9a54-ff800177db68-node-tuning-operator-tls podName:6d26f719-43b9-4c1c-9a54-ff800177db68 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:27.976872958 +0000 UTC m=+8.945641524 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/6d26f719-43b9-4c1c-9a54-ff800177db68-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-zxgdk" (UID: "6d26f719-43b9-4c1c-9a54-ff800177db68") : secret "node-tuning-operator-tls" not found Mar 20 08:35:23.977756 master-0 kubenswrapper[7476]: E0320 08:35:23.976952 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d26f719-43b9-4c1c-9a54-ff800177db68-apiservice-cert podName:6d26f719-43b9-4c1c-9a54-ff800177db68 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:27.976932149 +0000 UTC m=+8.945700805 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/6d26f719-43b9-4c1c-9a54-ff800177db68-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-zxgdk" (UID: "6d26f719-43b9-4c1c-9a54-ff800177db68") : secret "performance-addon-operator-webhook-cert" not found Mar 20 08:35:23.977756 master-0 kubenswrapper[7476]: E0320 08:35:23.976985 7476 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 20 08:35:23.977756 master-0 kubenswrapper[7476]: E0320 08:35:23.976991 7476 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 20 08:35:23.977756 master-0 kubenswrapper[7476]: E0320 08:35:23.977046 7476 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Mar 20 08:35:23.977756 master-0 kubenswrapper[7476]: E0320 08:35:23.977049 7476 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 20 08:35:23.977756 master-0 kubenswrapper[7476]: E0320 08:35:23.977007 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74bebf0b-6727-4959-8239-a9389e630524-webhook-certs podName:74bebf0b-6727-4959-8239-a9389e630524 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:27.976998791 +0000 UTC m=+8.945767317 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/74bebf0b-6727-4959-8239-a9389e630524-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-vhrdf" (UID: "74bebf0b-6727-4959-8239-a9389e630524") : secret "multus-admission-controller-secret" not found Mar 20 08:35:23.977756 master-0 kubenswrapper[7476]: E0320 08:35:23.977115 7476 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found Mar 20 08:35:23.977756 master-0 kubenswrapper[7476]: E0320 08:35:23.977121 7476 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 20 08:35:23.977756 master-0 kubenswrapper[7476]: E0320 08:35:23.977128 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23003a2f-2053-47cc-8133-23eb886d4da0-marketplace-operator-metrics podName:23003a2f-2053-47cc-8133-23eb886d4da0 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:27.977107044 +0000 UTC m=+8.945875700 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/23003a2f-2053-47cc-8133-23eb886d4da0-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-j84r8" (UID: "23003a2f-2053-47cc-8133-23eb886d4da0") : secret "marketplace-operator-metrics" not found Mar 20 08:35:23.977756 master-0 kubenswrapper[7476]: E0320 08:35:23.977154 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/00350ac7-b40a-4459-b94c-a37d7b613645-metrics-certs podName:00350ac7-b40a-4459-b94c-a37d7b613645 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:27.977148155 +0000 UTC m=+8.945916681 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/00350ac7-b40a-4459-b94c-a37d7b613645-metrics-certs") pod "network-metrics-daemon-nfrth" (UID: "00350ac7-b40a-4459-b94c-a37d7b613645") : secret "metrics-daemon-secret" not found Mar 20 08:35:23.977756 master-0 kubenswrapper[7476]: E0320 08:35:23.977166 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/57189f7c-5987-457d-a299-0a6b9bcb3e24-image-registry-operator-tls podName:57189f7c-5987-457d-a299-0a6b9bcb3e24 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:27.977160075 +0000 UTC m=+8.945928601 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/57189f7c-5987-457d-a299-0a6b9bcb3e24-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-cg8qr" (UID: "57189f7c-5987-457d-a299-0a6b9bcb3e24") : secret "image-registry-operator-tls" not found Mar 20 08:35:23.977756 master-0 kubenswrapper[7476]: E0320 08:35:23.977177 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3776fdb6-25a1-4e3d-bdd1-437c69af3a55-serving-cert podName:3776fdb6-25a1-4e3d-bdd1-437c69af3a55 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:27.977171695 +0000 UTC m=+8.945940221 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/3776fdb6-25a1-4e3d-bdd1-437c69af3a55-serving-cert") pod "cluster-version-operator-56d8475767-jtqd4" (UID: "3776fdb6-25a1-4e3d-bdd1-437c69af3a55") : secret "cluster-version-operator-serving-cert" not found Mar 20 08:35:23.977756 master-0 kubenswrapper[7476]: E0320 08:35:23.977188 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f202273a-b111-46ce-b404-7e481d2c7ff9-cluster-baremetal-operator-tls podName:f202273a-b111-46ce-b404-7e481d2c7ff9 nodeName:}" failed. 
No retries permitted until 2026-03-20 08:35:27.977182366 +0000 UTC m=+8.945950892 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/f202273a-b111-46ce-b404-7e481d2c7ff9-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-6f69995874-b25f2" (UID: "f202273a-b111-46ce-b404-7e481d2c7ff9") : secret "cluster-baremetal-operator-tls" not found Mar 20 08:35:23.977756 master-0 kubenswrapper[7476]: E0320 08:35:23.976805 7476 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 20 08:35:23.977756 master-0 kubenswrapper[7476]: E0320 08:35:23.977256 7476 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 20 08:35:23.977756 master-0 kubenswrapper[7476]: E0320 08:35:23.977322 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ce482dc-d0ac-40bc-9058-a1cfdc81575e-srv-cert podName:9ce482dc-d0ac-40bc-9058-a1cfdc81575e nodeName:}" failed. No retries permitted until 2026-03-20 08:35:27.977293958 +0000 UTC m=+8.946062524 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/9ce482dc-d0ac-40bc-9058-a1cfdc81575e-srv-cert") pod "catalog-operator-68f85b4d6c-hdw98" (UID: "9ce482dc-d0ac-40bc-9058-a1cfdc81575e") : secret "catalog-operator-serving-cert" not found Mar 20 08:35:23.977756 master-0 kubenswrapper[7476]: E0320 08:35:23.976908 7476 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 20 08:35:23.977756 master-0 kubenswrapper[7476]: E0320 08:35:23.977348 7476 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 20 08:35:23.977756 master-0 kubenswrapper[7476]: E0320 08:35:23.977359 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5707066a-bd66-41bc-8cea-cff1630ab5ee-cluster-monitoring-operator-tls podName:5707066a-bd66-41bc-8cea-cff1630ab5ee nodeName:}" failed. No retries permitted until 2026-03-20 08:35:27.97734191 +0000 UTC m=+8.946110586 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/5707066a-bd66-41bc-8cea-cff1630ab5ee-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-6vgt6" (UID: "5707066a-bd66-41bc-8cea-cff1630ab5ee") : secret "cluster-monitoring-operator-tls" not found Mar 20 08:35:23.977756 master-0 kubenswrapper[7476]: E0320 08:35:23.977095 7476 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 20 08:35:23.977756 master-0 kubenswrapper[7476]: E0320 08:35:23.977389 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff2dfe9d-2834-43cb-b093-0831b2b87131-metrics-tls podName:ff2dfe9d-2834-43cb-b093-0831b2b87131 nodeName:}" failed. 
No retries permitted until 2026-03-20 08:35:27.977375771 +0000 UTC m=+8.946144457 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/ff2dfe9d-2834-43cb-b093-0831b2b87131-metrics-tls") pod "dns-operator-9c5679d8f-xfns6" (UID: "ff2dfe9d-2834-43cb-b093-0831b2b87131") : secret "metrics-tls" not found Mar 20 08:35:23.977756 master-0 kubenswrapper[7476]: E0320 08:35:23.977392 7476 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found Mar 20 08:35:23.977756 master-0 kubenswrapper[7476]: E0320 08:35:23.977418 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/22f85e98-eb36-46b2-ab5d-7c21e060cba5-metrics-tls podName:22f85e98-eb36-46b2-ab5d-7c21e060cba5 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:27.977404291 +0000 UTC m=+8.946172967 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/22f85e98-eb36-46b2-ab5d-7c21e060cba5-metrics-tls") pod "ingress-operator-66b84d69b-dknxr" (UID: "22f85e98-eb36-46b2-ab5d-7c21e060cba5") : secret "metrics-tls" not found Mar 20 08:35:23.977756 master-0 kubenswrapper[7476]: E0320 08:35:23.977453 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0e79950f-50a5-46ec-b836-7a35dcce2851-package-server-manager-serving-cert podName:0e79950f-50a5-46ec-b836-7a35dcce2851 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:27.977435402 +0000 UTC m=+8.946204058 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/0e79950f-50a5-46ec-b836-7a35dcce2851-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-cgc9q" (UID: "0e79950f-50a5-46ec-b836-7a35dcce2851") : secret "package-server-manager-serving-cert" not found Mar 20 08:35:23.977756 master-0 kubenswrapper[7476]: E0320 08:35:23.977481 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f202273a-b111-46ce-b404-7e481d2c7ff9-cert podName:f202273a-b111-46ce-b404-7e481d2c7ff9 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:27.977468083 +0000 UTC m=+8.946236759 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f202273a-b111-46ce-b404-7e481d2c7ff9-cert") pod "cluster-baremetal-operator-6f69995874-b25f2" (UID: "f202273a-b111-46ce-b404-7e481d2c7ff9") : secret "cluster-baremetal-webhook-server-cert" not found Mar 20 08:35:24.010984 master-0 kubenswrapper[7476]: I0320 08:35:24.010893 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-gng67"] Mar 20 08:35:24.022359 master-0 kubenswrapper[7476]: W0320 08:35:24.022291 7476 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda2a3df6e_e327_4e97_b8f0_f2d6cdd1e5f9.slice/crio-ebb4000c1fd7b5e5958a1f721b8b2c7b7ad72ac397418d062d7c94f2eacacc8d WatchSource:0}: Error finding container ebb4000c1fd7b5e5958a1f721b8b2c7b7ad72ac397418d062d7c94f2eacacc8d: Status 404 returned error can't find the container with id ebb4000c1fd7b5e5958a1f721b8b2c7b7ad72ac397418d062d7c94f2eacacc8d Mar 20 08:35:24.285082 master-0 kubenswrapper[7476]: I0320 08:35:24.284603 7476 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-8487694857-ltk2p"] Mar 20 08:35:24.288310 master-0 
kubenswrapper[7476]: I0320 08:35:24.287204 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-8487694857-ltk2p" Mar 20 08:35:24.290305 master-0 kubenswrapper[7476]: I0320 08:35:24.290201 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-8487694857-ltk2p"] Mar 20 08:35:24.291404 master-0 kubenswrapper[7476]: I0320 08:35:24.290832 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 20 08:35:24.291671 master-0 kubenswrapper[7476]: I0320 08:35:24.291618 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Mar 20 08:35:24.384004 master-0 kubenswrapper[7476]: I0320 08:35:24.383929 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bxn6\" (UniqueName: \"kubernetes.io/projected/890a6c24-1dbb-4331-952b-5712ac00788e-kube-api-access-7bxn6\") pod \"migrator-8487694857-ltk2p\" (UID: \"890a6c24-1dbb-4331-952b-5712ac00788e\") " pod="openshift-kube-storage-version-migrator/migrator-8487694857-ltk2p" Mar 20 08:35:24.395672 master-0 kubenswrapper[7476]: I0320 08:35:24.395639 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-gng67" event={"ID":"a2a3df6e-e327-4e97-b8f0-f2d6cdd1e5f9","Type":"ContainerStarted","Data":"ebb4000c1fd7b5e5958a1f721b8b2c7b7ad72ac397418d062d7c94f2eacacc8d"} Mar 20 08:35:24.395788 master-0 kubenswrapper[7476]: I0320 08:35:24.395684 7476 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 20 08:35:24.443437 master-0 kubenswrapper[7476]: I0320 08:35:24.443296 7476 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-f5df8899c-z9lwr"] Mar 20 
08:35:24.444107 master-0 kubenswrapper[7476]: I0320 08:35:24.444051 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f5df8899c-z9lwr" Mar 20 08:35:24.447840 master-0 kubenswrapper[7476]: I0320 08:35:24.446069 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 20 08:35:24.447840 master-0 kubenswrapper[7476]: I0320 08:35:24.446218 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 20 08:35:24.447840 master-0 kubenswrapper[7476]: I0320 08:35:24.446252 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 20 08:35:24.447840 master-0 kubenswrapper[7476]: I0320 08:35:24.446372 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 20 08:35:24.447840 master-0 kubenswrapper[7476]: I0320 08:35:24.446516 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 20 08:35:24.447840 master-0 kubenswrapper[7476]: I0320 08:35:24.446782 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 20 08:35:24.456436 master-0 kubenswrapper[7476]: I0320 08:35:24.455515 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-f5df8899c-z9lwr"] Mar 20 08:35:24.485208 master-0 kubenswrapper[7476]: I0320 08:35:24.485137 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bxn6\" (UniqueName: \"kubernetes.io/projected/890a6c24-1dbb-4331-952b-5712ac00788e-kube-api-access-7bxn6\") pod \"migrator-8487694857-ltk2p\" (UID: \"890a6c24-1dbb-4331-952b-5712ac00788e\") " 
pod="openshift-kube-storage-version-migrator/migrator-8487694857-ltk2p" Mar 20 08:35:24.504180 master-0 kubenswrapper[7476]: I0320 08:35:24.504118 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bxn6\" (UniqueName: \"kubernetes.io/projected/890a6c24-1dbb-4331-952b-5712ac00788e-kube-api-access-7bxn6\") pod \"migrator-8487694857-ltk2p\" (UID: \"890a6c24-1dbb-4331-952b-5712ac00788e\") " pod="openshift-kube-storage-version-migrator/migrator-8487694857-ltk2p" Mar 20 08:35:24.587092 master-0 kubenswrapper[7476]: I0320 08:35:24.586724 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6539f0d-bbfe-47ea-9de4-6608fa7451fe-config\") pod \"controller-manager-f5df8899c-z9lwr\" (UID: \"f6539f0d-bbfe-47ea-9de4-6608fa7451fe\") " pod="openshift-controller-manager/controller-manager-f5df8899c-z9lwr" Mar 20 08:35:24.587092 master-0 kubenswrapper[7476]: I0320 08:35:24.587058 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f6539f0d-bbfe-47ea-9de4-6608fa7451fe-client-ca\") pod \"controller-manager-f5df8899c-z9lwr\" (UID: \"f6539f0d-bbfe-47ea-9de4-6608fa7451fe\") " pod="openshift-controller-manager/controller-manager-f5df8899c-z9lwr" Mar 20 08:35:24.587419 master-0 kubenswrapper[7476]: I0320 08:35:24.587159 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpfsh\" (UniqueName: \"kubernetes.io/projected/f6539f0d-bbfe-47ea-9de4-6608fa7451fe-kube-api-access-mpfsh\") pod \"controller-manager-f5df8899c-z9lwr\" (UID: \"f6539f0d-bbfe-47ea-9de4-6608fa7451fe\") " pod="openshift-controller-manager/controller-manager-f5df8899c-z9lwr" Mar 20 08:35:24.587419 master-0 kubenswrapper[7476]: I0320 08:35:24.587251 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f6539f0d-bbfe-47ea-9de4-6608fa7451fe-proxy-ca-bundles\") pod \"controller-manager-f5df8899c-z9lwr\" (UID: \"f6539f0d-bbfe-47ea-9de4-6608fa7451fe\") " pod="openshift-controller-manager/controller-manager-f5df8899c-z9lwr" Mar 20 08:35:24.587419 master-0 kubenswrapper[7476]: I0320 08:35:24.587351 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6539f0d-bbfe-47ea-9de4-6608fa7451fe-serving-cert\") pod \"controller-manager-f5df8899c-z9lwr\" (UID: \"f6539f0d-bbfe-47ea-9de4-6608fa7451fe\") " pod="openshift-controller-manager/controller-manager-f5df8899c-z9lwr" Mar 20 08:35:24.618044 master-0 kubenswrapper[7476]: I0320 08:35:24.617965 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-8487694857-ltk2p" Mar 20 08:35:24.688426 master-0 kubenswrapper[7476]: I0320 08:35:24.688228 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6539f0d-bbfe-47ea-9de4-6608fa7451fe-config\") pod \"controller-manager-f5df8899c-z9lwr\" (UID: \"f6539f0d-bbfe-47ea-9de4-6608fa7451fe\") " pod="openshift-controller-manager/controller-manager-f5df8899c-z9lwr" Mar 20 08:35:24.688426 master-0 kubenswrapper[7476]: I0320 08:35:24.688338 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f6539f0d-bbfe-47ea-9de4-6608fa7451fe-client-ca\") pod \"controller-manager-f5df8899c-z9lwr\" (UID: \"f6539f0d-bbfe-47ea-9de4-6608fa7451fe\") " pod="openshift-controller-manager/controller-manager-f5df8899c-z9lwr" Mar 20 08:35:24.688426 master-0 kubenswrapper[7476]: E0320 08:35:24.688367 7476 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: configmap "config" not found Mar 20 
08:35:24.688426 master-0 kubenswrapper[7476]: I0320 08:35:24.688419 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpfsh\" (UniqueName: \"kubernetes.io/projected/f6539f0d-bbfe-47ea-9de4-6608fa7451fe-kube-api-access-mpfsh\") pod \"controller-manager-f5df8899c-z9lwr\" (UID: \"f6539f0d-bbfe-47ea-9de4-6608fa7451fe\") " pod="openshift-controller-manager/controller-manager-f5df8899c-z9lwr" Mar 20 08:35:24.688679 master-0 kubenswrapper[7476]: E0320 08:35:24.688448 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f6539f0d-bbfe-47ea-9de4-6608fa7451fe-config podName:f6539f0d-bbfe-47ea-9de4-6608fa7451fe nodeName:}" failed. No retries permitted until 2026-03-20 08:35:25.188430956 +0000 UTC m=+6.157199482 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/f6539f0d-bbfe-47ea-9de4-6608fa7451fe-config") pod "controller-manager-f5df8899c-z9lwr" (UID: "f6539f0d-bbfe-47ea-9de4-6608fa7451fe") : configmap "config" not found Mar 20 08:35:24.688679 master-0 kubenswrapper[7476]: E0320 08:35:24.688486 7476 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 20 08:35:24.688679 master-0 kubenswrapper[7476]: I0320 08:35:24.688520 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f6539f0d-bbfe-47ea-9de4-6608fa7451fe-proxy-ca-bundles\") pod \"controller-manager-f5df8899c-z9lwr\" (UID: \"f6539f0d-bbfe-47ea-9de4-6608fa7451fe\") " pod="openshift-controller-manager/controller-manager-f5df8899c-z9lwr" Mar 20 08:35:24.688679 master-0 kubenswrapper[7476]: E0320 08:35:24.688551 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f6539f0d-bbfe-47ea-9de4-6608fa7451fe-client-ca podName:f6539f0d-bbfe-47ea-9de4-6608fa7451fe nodeName:}" failed. 
No retries permitted until 2026-03-20 08:35:25.188531418 +0000 UTC m=+6.157300064 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/f6539f0d-bbfe-47ea-9de4-6608fa7451fe-client-ca") pod "controller-manager-f5df8899c-z9lwr" (UID: "f6539f0d-bbfe-47ea-9de4-6608fa7451fe") : configmap "client-ca" not found Mar 20 08:35:24.688679 master-0 kubenswrapper[7476]: E0320 08:35:24.688563 7476 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: configmap "openshift-global-ca" not found Mar 20 08:35:24.688679 master-0 kubenswrapper[7476]: E0320 08:35:24.688584 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f6539f0d-bbfe-47ea-9de4-6608fa7451fe-proxy-ca-bundles podName:f6539f0d-bbfe-47ea-9de4-6608fa7451fe nodeName:}" failed. No retries permitted until 2026-03-20 08:35:25.188578249 +0000 UTC m=+6.157346775 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/f6539f0d-bbfe-47ea-9de4-6608fa7451fe-proxy-ca-bundles") pod "controller-manager-f5df8899c-z9lwr" (UID: "f6539f0d-bbfe-47ea-9de4-6608fa7451fe") : configmap "openshift-global-ca" not found Mar 20 08:35:24.688679 master-0 kubenswrapper[7476]: I0320 08:35:24.688595 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6539f0d-bbfe-47ea-9de4-6608fa7451fe-serving-cert\") pod \"controller-manager-f5df8899c-z9lwr\" (UID: \"f6539f0d-bbfe-47ea-9de4-6608fa7451fe\") " pod="openshift-controller-manager/controller-manager-f5df8899c-z9lwr" Mar 20 08:35:24.689057 master-0 kubenswrapper[7476]: E0320 08:35:24.688984 7476 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 20 08:35:24.689130 master-0 kubenswrapper[7476]: E0320 08:35:24.689104 7476 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/secret/f6539f0d-bbfe-47ea-9de4-6608fa7451fe-serving-cert podName:f6539f0d-bbfe-47ea-9de4-6608fa7451fe nodeName:}" failed. No retries permitted until 2026-03-20 08:35:25.189069103 +0000 UTC m=+6.157837669 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/f6539f0d-bbfe-47ea-9de4-6608fa7451fe-serving-cert") pod "controller-manager-f5df8899c-z9lwr" (UID: "f6539f0d-bbfe-47ea-9de4-6608fa7451fe") : secret "serving-cert" not found Mar 20 08:35:24.724433 master-0 kubenswrapper[7476]: I0320 08:35:24.724397 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpfsh\" (UniqueName: \"kubernetes.io/projected/f6539f0d-bbfe-47ea-9de4-6608fa7451fe-kube-api-access-mpfsh\") pod \"controller-manager-f5df8899c-z9lwr\" (UID: \"f6539f0d-bbfe-47ea-9de4-6608fa7451fe\") " pod="openshift-controller-manager/controller-manager-f5df8899c-z9lwr" Mar 20 08:35:24.760315 master-0 kubenswrapper[7476]: I0320 08:35:24.760252 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-8487694857-ltk2p"] Mar 20 08:35:24.767192 master-0 kubenswrapper[7476]: W0320 08:35:24.767145 7476 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod890a6c24_1dbb_4331_952b_5712ac00788e.slice/crio-3e688aec660d80e985fc8687f7a00a0c0c268a922d791a77e1fea2fefa9b1c28 WatchSource:0}: Error finding container 3e688aec660d80e985fc8687f7a00a0c0c268a922d791a77e1fea2fefa9b1c28: Status 404 returned error can't find the container with id 3e688aec660d80e985fc8687f7a00a0c0c268a922d791a77e1fea2fefa9b1c28 Mar 20 08:35:24.934058 master-0 kubenswrapper[7476]: I0320 08:35:24.934014 7476 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 20 08:35:24.940256 master-0 kubenswrapper[7476]: 
I0320 08:35:24.940217 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 20 08:35:25.066004 master-0 kubenswrapper[7476]: I0320 08:35:25.065952 7476 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cd6978d68-qxk9v"] Mar 20 08:35:25.066911 master-0 kubenswrapper[7476]: I0320 08:35:25.066876 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cd6978d68-qxk9v" Mar 20 08:35:25.068403 master-0 kubenswrapper[7476]: I0320 08:35:25.068335 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 20 08:35:25.071420 master-0 kubenswrapper[7476]: I0320 08:35:25.071384 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 20 08:35:25.071872 master-0 kubenswrapper[7476]: I0320 08:35:25.071838 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 20 08:35:25.073241 master-0 kubenswrapper[7476]: I0320 08:35:25.073212 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 20 08:35:25.074239 master-0 kubenswrapper[7476]: I0320 08:35:25.073432 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 20 08:35:25.085145 master-0 kubenswrapper[7476]: I0320 08:35:25.083880 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cd6978d68-qxk9v"] Mar 20 08:35:25.178126 master-0 kubenswrapper[7476]: I0320 08:35:25.178070 7476 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 20 08:35:25.178379 master-0 kubenswrapper[7476]: I0320 08:35:25.178304 7476 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 20 08:35:25.182648 master-0 kubenswrapper[7476]: I0320 08:35:25.182614 7476 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 20 08:35:25.226547 master-0 kubenswrapper[7476]: I0320 08:35:25.226428 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f6539f0d-bbfe-47ea-9de4-6608fa7451fe-client-ca\") pod \"controller-manager-f5df8899c-z9lwr\" (UID: \"f6539f0d-bbfe-47ea-9de4-6608fa7451fe\") " pod="openshift-controller-manager/controller-manager-f5df8899c-z9lwr" Mar 20 08:35:25.226707 master-0 kubenswrapper[7476]: E0320 08:35:25.226555 7476 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 20 08:35:25.226707 master-0 kubenswrapper[7476]: I0320 08:35:25.226565 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9dd14fb1-f122-42c0-a253-606691936519-serving-cert\") pod \"route-controller-manager-6cd6978d68-qxk9v\" (UID: \"9dd14fb1-f122-42c0-a253-606691936519\") " pod="openshift-route-controller-manager/route-controller-manager-6cd6978d68-qxk9v" Mar 20 08:35:25.226707 master-0 kubenswrapper[7476]: I0320 08:35:25.226606 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9dd14fb1-f122-42c0-a253-606691936519-client-ca\") pod \"route-controller-manager-6cd6978d68-qxk9v\" (UID: \"9dd14fb1-f122-42c0-a253-606691936519\") " pod="openshift-route-controller-manager/route-controller-manager-6cd6978d68-qxk9v" Mar 20 08:35:25.226707 master-0 
kubenswrapper[7476]: E0320 08:35:25.226624 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f6539f0d-bbfe-47ea-9de4-6608fa7451fe-client-ca podName:f6539f0d-bbfe-47ea-9de4-6608fa7451fe nodeName:}" failed. No retries permitted until 2026-03-20 08:35:26.226605876 +0000 UTC m=+7.195374402 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/f6539f0d-bbfe-47ea-9de4-6608fa7451fe-client-ca") pod "controller-manager-f5df8899c-z9lwr" (UID: "f6539f0d-bbfe-47ea-9de4-6608fa7451fe") : configmap "client-ca" not found Mar 20 08:35:25.226890 master-0 kubenswrapper[7476]: I0320 08:35:25.226716 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjrvg\" (UniqueName: \"kubernetes.io/projected/9dd14fb1-f122-42c0-a253-606691936519-kube-api-access-hjrvg\") pod \"route-controller-manager-6cd6978d68-qxk9v\" (UID: \"9dd14fb1-f122-42c0-a253-606691936519\") " pod="openshift-route-controller-manager/route-controller-manager-6cd6978d68-qxk9v" Mar 20 08:35:25.226890 master-0 kubenswrapper[7476]: I0320 08:35:25.226824 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f6539f0d-bbfe-47ea-9de4-6608fa7451fe-proxy-ca-bundles\") pod \"controller-manager-f5df8899c-z9lwr\" (UID: \"f6539f0d-bbfe-47ea-9de4-6608fa7451fe\") " pod="openshift-controller-manager/controller-manager-f5df8899c-z9lwr" Mar 20 08:35:25.226890 master-0 kubenswrapper[7476]: I0320 08:35:25.226857 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9dd14fb1-f122-42c0-a253-606691936519-config\") pod \"route-controller-manager-6cd6978d68-qxk9v\" (UID: \"9dd14fb1-f122-42c0-a253-606691936519\") " pod="openshift-route-controller-manager/route-controller-manager-6cd6978d68-qxk9v" Mar 20 
08:35:25.227170 master-0 kubenswrapper[7476]: E0320 08:35:25.226894 7476 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: configmap "openshift-global-ca" not found Mar 20 08:35:25.227170 master-0 kubenswrapper[7476]: I0320 08:35:25.226905 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6539f0d-bbfe-47ea-9de4-6608fa7451fe-serving-cert\") pod \"controller-manager-f5df8899c-z9lwr\" (UID: \"f6539f0d-bbfe-47ea-9de4-6608fa7451fe\") " pod="openshift-controller-manager/controller-manager-f5df8899c-z9lwr" Mar 20 08:35:25.227170 master-0 kubenswrapper[7476]: E0320 08:35:25.226928 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f6539f0d-bbfe-47ea-9de4-6608fa7451fe-proxy-ca-bundles podName:f6539f0d-bbfe-47ea-9de4-6608fa7451fe nodeName:}" failed. No retries permitted until 2026-03-20 08:35:26.226919294 +0000 UTC m=+7.195687820 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/f6539f0d-bbfe-47ea-9de4-6608fa7451fe-proxy-ca-bundles") pod "controller-manager-f5df8899c-z9lwr" (UID: "f6539f0d-bbfe-47ea-9de4-6608fa7451fe") : configmap "openshift-global-ca" not found Mar 20 08:35:25.227170 master-0 kubenswrapper[7476]: E0320 08:35:25.227055 7476 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 20 08:35:25.227170 master-0 kubenswrapper[7476]: E0320 08:35:25.227103 7476 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: configmap "config" not found Mar 20 08:35:25.227170 master-0 kubenswrapper[7476]: I0320 08:35:25.227074 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6539f0d-bbfe-47ea-9de4-6608fa7451fe-config\") pod \"controller-manager-f5df8899c-z9lwr\" (UID: \"f6539f0d-bbfe-47ea-9de4-6608fa7451fe\") " pod="openshift-controller-manager/controller-manager-f5df8899c-z9lwr" Mar 20 08:35:25.227170 master-0 kubenswrapper[7476]: E0320 08:35:25.227136 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f6539f0d-bbfe-47ea-9de4-6608fa7451fe-serving-cert podName:f6539f0d-bbfe-47ea-9de4-6608fa7451fe nodeName:}" failed. No retries permitted until 2026-03-20 08:35:26.22711907 +0000 UTC m=+7.195887606 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/f6539f0d-bbfe-47ea-9de4-6608fa7451fe-serving-cert") pod "controller-manager-f5df8899c-z9lwr" (UID: "f6539f0d-bbfe-47ea-9de4-6608fa7451fe") : secret "serving-cert" not found Mar 20 08:35:25.227454 master-0 kubenswrapper[7476]: E0320 08:35:25.227193 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f6539f0d-bbfe-47ea-9de4-6608fa7451fe-config podName:f6539f0d-bbfe-47ea-9de4-6608fa7451fe nodeName:}" failed. 
No retries permitted until 2026-03-20 08:35:26.227147801 +0000 UTC m=+7.195916337 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/f6539f0d-bbfe-47ea-9de4-6608fa7451fe-config") pod "controller-manager-f5df8899c-z9lwr" (UID: "f6539f0d-bbfe-47ea-9de4-6608fa7451fe") : configmap "config" not found Mar 20 08:35:25.329219 master-0 kubenswrapper[7476]: I0320 08:35:25.328434 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9dd14fb1-f122-42c0-a253-606691936519-config\") pod \"route-controller-manager-6cd6978d68-qxk9v\" (UID: \"9dd14fb1-f122-42c0-a253-606691936519\") " pod="openshift-route-controller-manager/route-controller-manager-6cd6978d68-qxk9v" Mar 20 08:35:25.329219 master-0 kubenswrapper[7476]: E0320 08:35:25.328667 7476 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: configmap "config" not found Mar 20 08:35:25.329219 master-0 kubenswrapper[7476]: E0320 08:35:25.328758 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9dd14fb1-f122-42c0-a253-606691936519-config podName:9dd14fb1-f122-42c0-a253-606691936519 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:25.82873965 +0000 UTC m=+6.797508296 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9dd14fb1-f122-42c0-a253-606691936519-config") pod "route-controller-manager-6cd6978d68-qxk9v" (UID: "9dd14fb1-f122-42c0-a253-606691936519") : configmap "config" not found Mar 20 08:35:25.329219 master-0 kubenswrapper[7476]: I0320 08:35:25.328845 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9dd14fb1-f122-42c0-a253-606691936519-serving-cert\") pod \"route-controller-manager-6cd6978d68-qxk9v\" (UID: \"9dd14fb1-f122-42c0-a253-606691936519\") " pod="openshift-route-controller-manager/route-controller-manager-6cd6978d68-qxk9v" Mar 20 08:35:25.329219 master-0 kubenswrapper[7476]: I0320 08:35:25.328880 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9dd14fb1-f122-42c0-a253-606691936519-client-ca\") pod \"route-controller-manager-6cd6978d68-qxk9v\" (UID: \"9dd14fb1-f122-42c0-a253-606691936519\") " pod="openshift-route-controller-manager/route-controller-manager-6cd6978d68-qxk9v" Mar 20 08:35:25.329219 master-0 kubenswrapper[7476]: I0320 08:35:25.328913 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjrvg\" (UniqueName: \"kubernetes.io/projected/9dd14fb1-f122-42c0-a253-606691936519-kube-api-access-hjrvg\") pod \"route-controller-manager-6cd6978d68-qxk9v\" (UID: \"9dd14fb1-f122-42c0-a253-606691936519\") " pod="openshift-route-controller-manager/route-controller-manager-6cd6978d68-qxk9v" Mar 20 08:35:25.329219 master-0 kubenswrapper[7476]: E0320 08:35:25.328996 7476 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 20 08:35:25.329219 master-0 kubenswrapper[7476]: E0320 08:35:25.329044 7476 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/9dd14fb1-f122-42c0-a253-606691936519-serving-cert podName:9dd14fb1-f122-42c0-a253-606691936519 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:25.829030827 +0000 UTC m=+6.797799353 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9dd14fb1-f122-42c0-a253-606691936519-serving-cert") pod "route-controller-manager-6cd6978d68-qxk9v" (UID: "9dd14fb1-f122-42c0-a253-606691936519") : secret "serving-cert" not found Mar 20 08:35:25.329219 master-0 kubenswrapper[7476]: E0320 08:35:25.329103 7476 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 20 08:35:25.329219 master-0 kubenswrapper[7476]: E0320 08:35:25.329153 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9dd14fb1-f122-42c0-a253-606691936519-client-ca podName:9dd14fb1-f122-42c0-a253-606691936519 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:25.82913801 +0000 UTC m=+6.797906536 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/9dd14fb1-f122-42c0-a253-606691936519-client-ca") pod "route-controller-manager-6cd6978d68-qxk9v" (UID: "9dd14fb1-f122-42c0-a253-606691936519") : configmap "client-ca" not found Mar 20 08:35:25.348147 master-0 kubenswrapper[7476]: I0320 08:35:25.348108 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjrvg\" (UniqueName: \"kubernetes.io/projected/9dd14fb1-f122-42c0-a253-606691936519-kube-api-access-hjrvg\") pod \"route-controller-manager-6cd6978d68-qxk9v\" (UID: \"9dd14fb1-f122-42c0-a253-606691936519\") " pod="openshift-route-controller-manager/route-controller-manager-6cd6978d68-qxk9v" Mar 20 08:35:25.400165 master-0 kubenswrapper[7476]: I0320 08:35:25.400123 7476 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 20 08:35:25.401006 master-0 kubenswrapper[7476]: I0320 08:35:25.400783 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-8487694857-ltk2p" event={"ID":"890a6c24-1dbb-4331-952b-5712ac00788e","Type":"ContainerStarted","Data":"3e688aec660d80e985fc8687f7a00a0c0c268a922d791a77e1fea2fefa9b1c28"} Mar 20 08:35:25.825140 master-0 kubenswrapper[7476]: I0320 08:35:25.825050 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 20 08:35:25.828869 master-0 kubenswrapper[7476]: I0320 08:35:25.828836 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 20 08:35:25.839008 master-0 kubenswrapper[7476]: I0320 08:35:25.838948 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9dd14fb1-f122-42c0-a253-606691936519-serving-cert\") pod \"route-controller-manager-6cd6978d68-qxk9v\" (UID: 
\"9dd14fb1-f122-42c0-a253-606691936519\") " pod="openshift-route-controller-manager/route-controller-manager-6cd6978d68-qxk9v" Mar 20 08:35:25.839199 master-0 kubenswrapper[7476]: I0320 08:35:25.839162 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9dd14fb1-f122-42c0-a253-606691936519-client-ca\") pod \"route-controller-manager-6cd6978d68-qxk9v\" (UID: \"9dd14fb1-f122-42c0-a253-606691936519\") " pod="openshift-route-controller-manager/route-controller-manager-6cd6978d68-qxk9v" Mar 20 08:35:25.839301 master-0 kubenswrapper[7476]: E0320 08:35:25.839191 7476 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 20 08:35:25.839301 master-0 kubenswrapper[7476]: I0320 08:35:25.839247 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9dd14fb1-f122-42c0-a253-606691936519-config\") pod \"route-controller-manager-6cd6978d68-qxk9v\" (UID: \"9dd14fb1-f122-42c0-a253-606691936519\") " pod="openshift-route-controller-manager/route-controller-manager-6cd6978d68-qxk9v" Mar 20 08:35:25.839470 master-0 kubenswrapper[7476]: E0320 08:35:25.839305 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9dd14fb1-f122-42c0-a253-606691936519-serving-cert podName:9dd14fb1-f122-42c0-a253-606691936519 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:26.839256384 +0000 UTC m=+7.808024950 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9dd14fb1-f122-42c0-a253-606691936519-serving-cert") pod "route-controller-manager-6cd6978d68-qxk9v" (UID: "9dd14fb1-f122-42c0-a253-606691936519") : secret "serving-cert" not found Mar 20 08:35:25.839470 master-0 kubenswrapper[7476]: E0320 08:35:25.839350 7476 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 20 08:35:25.839470 master-0 kubenswrapper[7476]: E0320 08:35:25.839390 7476 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: configmap "config" not found Mar 20 08:35:25.839470 master-0 kubenswrapper[7476]: E0320 08:35:25.839424 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9dd14fb1-f122-42c0-a253-606691936519-client-ca podName:9dd14fb1-f122-42c0-a253-606691936519 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:26.839401688 +0000 UTC m=+7.808170294 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/9dd14fb1-f122-42c0-a253-606691936519-client-ca") pod "route-controller-manager-6cd6978d68-qxk9v" (UID: "9dd14fb1-f122-42c0-a253-606691936519") : configmap "client-ca" not found Mar 20 08:35:25.839470 master-0 kubenswrapper[7476]: E0320 08:35:25.839448 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9dd14fb1-f122-42c0-a253-606691936519-config podName:9dd14fb1-f122-42c0-a253-606691936519 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:26.839437859 +0000 UTC m=+7.808206515 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9dd14fb1-f122-42c0-a253-606691936519-config") pod "route-controller-manager-6cd6978d68-qxk9v" (UID: "9dd14fb1-f122-42c0-a253-606691936519") : configmap "config" not found Mar 20 08:35:26.252296 master-0 kubenswrapper[7476]: I0320 08:35:26.248468 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f6539f0d-bbfe-47ea-9de4-6608fa7451fe-proxy-ca-bundles\") pod \"controller-manager-f5df8899c-z9lwr\" (UID: \"f6539f0d-bbfe-47ea-9de4-6608fa7451fe\") " pod="openshift-controller-manager/controller-manager-f5df8899c-z9lwr" Mar 20 08:35:26.252296 master-0 kubenswrapper[7476]: I0320 08:35:26.248535 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6539f0d-bbfe-47ea-9de4-6608fa7451fe-serving-cert\") pod \"controller-manager-f5df8899c-z9lwr\" (UID: \"f6539f0d-bbfe-47ea-9de4-6608fa7451fe\") " pod="openshift-controller-manager/controller-manager-f5df8899c-z9lwr" Mar 20 08:35:26.252296 master-0 kubenswrapper[7476]: I0320 08:35:26.248594 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6539f0d-bbfe-47ea-9de4-6608fa7451fe-config\") pod \"controller-manager-f5df8899c-z9lwr\" (UID: \"f6539f0d-bbfe-47ea-9de4-6608fa7451fe\") " pod="openshift-controller-manager/controller-manager-f5df8899c-z9lwr" Mar 20 08:35:26.252296 master-0 kubenswrapper[7476]: I0320 08:35:26.248620 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f6539f0d-bbfe-47ea-9de4-6608fa7451fe-client-ca\") pod \"controller-manager-f5df8899c-z9lwr\" (UID: \"f6539f0d-bbfe-47ea-9de4-6608fa7451fe\") " pod="openshift-controller-manager/controller-manager-f5df8899c-z9lwr" Mar 20 08:35:26.252296 master-0 
kubenswrapper[7476]: E0320 08:35:26.248771 7476 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 20 08:35:26.252296 master-0 kubenswrapper[7476]: E0320 08:35:26.248816 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f6539f0d-bbfe-47ea-9de4-6608fa7451fe-client-ca podName:f6539f0d-bbfe-47ea-9de4-6608fa7451fe nodeName:}" failed. No retries permitted until 2026-03-20 08:35:28.248802864 +0000 UTC m=+9.217571380 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/f6539f0d-bbfe-47ea-9de4-6608fa7451fe-client-ca") pod "controller-manager-f5df8899c-z9lwr" (UID: "f6539f0d-bbfe-47ea-9de4-6608fa7451fe") : configmap "client-ca" not found Mar 20 08:35:26.252296 master-0 kubenswrapper[7476]: E0320 08:35:26.248866 7476 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: configmap "openshift-global-ca" not found Mar 20 08:35:26.252296 master-0 kubenswrapper[7476]: E0320 08:35:26.248884 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f6539f0d-bbfe-47ea-9de4-6608fa7451fe-proxy-ca-bundles podName:f6539f0d-bbfe-47ea-9de4-6608fa7451fe nodeName:}" failed. No retries permitted until 2026-03-20 08:35:28.248879016 +0000 UTC m=+9.217647542 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/f6539f0d-bbfe-47ea-9de4-6608fa7451fe-proxy-ca-bundles") pod "controller-manager-f5df8899c-z9lwr" (UID: "f6539f0d-bbfe-47ea-9de4-6608fa7451fe") : configmap "openshift-global-ca" not found Mar 20 08:35:26.252296 master-0 kubenswrapper[7476]: E0320 08:35:26.248927 7476 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 20 08:35:26.252296 master-0 kubenswrapper[7476]: E0320 08:35:26.248947 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f6539f0d-bbfe-47ea-9de4-6608fa7451fe-serving-cert podName:f6539f0d-bbfe-47ea-9de4-6608fa7451fe nodeName:}" failed. No retries permitted until 2026-03-20 08:35:28.248940808 +0000 UTC m=+9.217709324 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/f6539f0d-bbfe-47ea-9de4-6608fa7451fe-serving-cert") pod "controller-manager-f5df8899c-z9lwr" (UID: "f6539f0d-bbfe-47ea-9de4-6608fa7451fe") : secret "serving-cert" not found Mar 20 08:35:26.252296 master-0 kubenswrapper[7476]: E0320 08:35:26.248968 7476 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: configmap "config" not found Mar 20 08:35:26.252296 master-0 kubenswrapper[7476]: E0320 08:35:26.248984 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f6539f0d-bbfe-47ea-9de4-6608fa7451fe-config podName:f6539f0d-bbfe-47ea-9de4-6608fa7451fe nodeName:}" failed. No retries permitted until 2026-03-20 08:35:28.248980259 +0000 UTC m=+9.217748785 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/f6539f0d-bbfe-47ea-9de4-6608fa7451fe-config") pod "controller-manager-f5df8899c-z9lwr" (UID: "f6539f0d-bbfe-47ea-9de4-6608fa7451fe") : configmap "config" not found Mar 20 08:35:26.403253 master-0 kubenswrapper[7476]: I0320 08:35:26.403213 7476 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 20 08:35:26.886128 master-0 kubenswrapper[7476]: I0320 08:35:26.886056 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9dd14fb1-f122-42c0-a253-606691936519-serving-cert\") pod \"route-controller-manager-6cd6978d68-qxk9v\" (UID: \"9dd14fb1-f122-42c0-a253-606691936519\") " pod="openshift-route-controller-manager/route-controller-manager-6cd6978d68-qxk9v" Mar 20 08:35:26.886128 master-0 kubenswrapper[7476]: I0320 08:35:26.886112 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9dd14fb1-f122-42c0-a253-606691936519-client-ca\") pod \"route-controller-manager-6cd6978d68-qxk9v\" (UID: \"9dd14fb1-f122-42c0-a253-606691936519\") " pod="openshift-route-controller-manager/route-controller-manager-6cd6978d68-qxk9v" Mar 20 08:35:26.886470 master-0 kubenswrapper[7476]: I0320 08:35:26.886159 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9dd14fb1-f122-42c0-a253-606691936519-config\") pod \"route-controller-manager-6cd6978d68-qxk9v\" (UID: \"9dd14fb1-f122-42c0-a253-606691936519\") " pod="openshift-route-controller-manager/route-controller-manager-6cd6978d68-qxk9v" Mar 20 08:35:26.886470 master-0 kubenswrapper[7476]: E0320 08:35:26.886278 7476 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: configmap "config" not found Mar 20 08:35:26.886470 master-0 kubenswrapper[7476]: E0320 08:35:26.886253 
7476 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 20 08:35:26.886470 master-0 kubenswrapper[7476]: E0320 08:35:26.886326 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9dd14fb1-f122-42c0-a253-606691936519-config podName:9dd14fb1-f122-42c0-a253-606691936519 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:28.886313366 +0000 UTC m=+9.855081882 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/9dd14fb1-f122-42c0-a253-606691936519-config") pod "route-controller-manager-6cd6978d68-qxk9v" (UID: "9dd14fb1-f122-42c0-a253-606691936519") : configmap "config" not found Mar 20 08:35:26.886470 master-0 kubenswrapper[7476]: E0320 08:35:26.886313 7476 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 20 08:35:26.886470 master-0 kubenswrapper[7476]: E0320 08:35:26.886406 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9dd14fb1-f122-42c0-a253-606691936519-serving-cert podName:9dd14fb1-f122-42c0-a253-606691936519 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:28.886371028 +0000 UTC m=+9.855139594 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9dd14fb1-f122-42c0-a253-606691936519-serving-cert") pod "route-controller-manager-6cd6978d68-qxk9v" (UID: "9dd14fb1-f122-42c0-a253-606691936519") : secret "serving-cert" not found Mar 20 08:35:26.886843 master-0 kubenswrapper[7476]: E0320 08:35:26.886530 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9dd14fb1-f122-42c0-a253-606691936519-client-ca podName:9dd14fb1-f122-42c0-a253-606691936519 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:28.886493121 +0000 UTC m=+9.855261697 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/9dd14fb1-f122-42c0-a253-606691936519-client-ca") pod "route-controller-manager-6cd6978d68-qxk9v" (UID: "9dd14fb1-f122-42c0-a253-606691936519") : configmap "client-ca" not found Mar 20 08:35:27.058637 master-0 kubenswrapper[7476]: I0320 08:35:27.058510 7476 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-79bc6b8d76-trbxh"] Mar 20 08:35:27.059053 master-0 kubenswrapper[7476]: I0320 08:35:27.059021 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-79bc6b8d76-trbxh" Mar 20 08:35:27.061589 master-0 kubenswrapper[7476]: I0320 08:35:27.061027 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Mar 20 08:35:27.061589 master-0 kubenswrapper[7476]: I0320 08:35:27.061295 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 20 08:35:27.061589 master-0 kubenswrapper[7476]: I0320 08:35:27.061355 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Mar 20 08:35:27.062232 master-0 kubenswrapper[7476]: I0320 08:35:27.062189 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Mar 20 08:35:27.075747 master-0 kubenswrapper[7476]: I0320 08:35:27.075475 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-79bc6b8d76-trbxh"] Mar 20 08:35:27.089409 master-0 kubenswrapper[7476]: I0320 08:35:27.088546 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/1746482a-d1a3-4eac-8bc9-643b6af75163-signing-key\") pod \"service-ca-79bc6b8d76-trbxh\" (UID: \"1746482a-d1a3-4eac-8bc9-643b6af75163\") " 
pod="openshift-service-ca/service-ca-79bc6b8d76-trbxh"
Mar 20 08:35:27.089409 master-0 kubenswrapper[7476]: I0320 08:35:27.088703 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dkqm\" (UniqueName: \"kubernetes.io/projected/1746482a-d1a3-4eac-8bc9-643b6af75163-kube-api-access-2dkqm\") pod \"service-ca-79bc6b8d76-trbxh\" (UID: \"1746482a-d1a3-4eac-8bc9-643b6af75163\") " pod="openshift-service-ca/service-ca-79bc6b8d76-trbxh"
Mar 20 08:35:27.089409 master-0 kubenswrapper[7476]: I0320 08:35:27.088781 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/1746482a-d1a3-4eac-8bc9-643b6af75163-signing-cabundle\") pod \"service-ca-79bc6b8d76-trbxh\" (UID: \"1746482a-d1a3-4eac-8bc9-643b6af75163\") " pod="openshift-service-ca/service-ca-79bc6b8d76-trbxh"
Mar 20 08:35:27.189160 master-0 kubenswrapper[7476]: I0320 08:35:27.189050 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dkqm\" (UniqueName: \"kubernetes.io/projected/1746482a-d1a3-4eac-8bc9-643b6af75163-kube-api-access-2dkqm\") pod \"service-ca-79bc6b8d76-trbxh\" (UID: \"1746482a-d1a3-4eac-8bc9-643b6af75163\") " pod="openshift-service-ca/service-ca-79bc6b8d76-trbxh"
Mar 20 08:35:27.189380 master-0 kubenswrapper[7476]: I0320 08:35:27.189323 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/1746482a-d1a3-4eac-8bc9-643b6af75163-signing-cabundle\") pod \"service-ca-79bc6b8d76-trbxh\" (UID: \"1746482a-d1a3-4eac-8bc9-643b6af75163\") " pod="openshift-service-ca/service-ca-79bc6b8d76-trbxh"
Mar 20 08:35:27.189517 master-0 kubenswrapper[7476]: I0320 08:35:27.189490 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/1746482a-d1a3-4eac-8bc9-643b6af75163-signing-key\") pod \"service-ca-79bc6b8d76-trbxh\" (UID: \"1746482a-d1a3-4eac-8bc9-643b6af75163\") " pod="openshift-service-ca/service-ca-79bc6b8d76-trbxh"
Mar 20 08:35:27.190342 master-0 kubenswrapper[7476]: I0320 08:35:27.190316 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/1746482a-d1a3-4eac-8bc9-643b6af75163-signing-cabundle\") pod \"service-ca-79bc6b8d76-trbxh\" (UID: \"1746482a-d1a3-4eac-8bc9-643b6af75163\") " pod="openshift-service-ca/service-ca-79bc6b8d76-trbxh"
Mar 20 08:35:27.192749 master-0 kubenswrapper[7476]: I0320 08:35:27.192712 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/1746482a-d1a3-4eac-8bc9-643b6af75163-signing-key\") pod \"service-ca-79bc6b8d76-trbxh\" (UID: \"1746482a-d1a3-4eac-8bc9-643b6af75163\") " pod="openshift-service-ca/service-ca-79bc6b8d76-trbxh"
Mar 20 08:35:27.309342 master-0 kubenswrapper[7476]: I0320 08:35:27.309288 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dkqm\" (UniqueName: \"kubernetes.io/projected/1746482a-d1a3-4eac-8bc9-643b6af75163-kube-api-access-2dkqm\") pod \"service-ca-79bc6b8d76-trbxh\" (UID: \"1746482a-d1a3-4eac-8bc9-643b6af75163\") " pod="openshift-service-ca/service-ca-79bc6b8d76-trbxh"
Mar 20 08:35:27.391293 master-0 kubenswrapper[7476]: I0320 08:35:27.391204 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-79bc6b8d76-trbxh"
Mar 20 08:35:27.411576 master-0 kubenswrapper[7476]: I0320 08:35:27.411525 7476 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 20 08:35:27.416894 master-0 kubenswrapper[7476]: I0320 08:35:27.416836 7476 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-f5df8899c-z9lwr"]
Mar 20 08:35:27.417197 master-0 kubenswrapper[7476]: E0320 08:35:27.417163 7476 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config proxy-ca-bundles serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-f5df8899c-z9lwr" podUID="f6539f0d-bbfe-47ea-9de4-6608fa7451fe"
Mar 20 08:35:27.429007 master-0 kubenswrapper[7476]: I0320 08:35:27.428958 7476 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cd6978d68-qxk9v"]
Mar 20 08:35:27.429346 master-0 kubenswrapper[7476]: E0320 08:35:27.429199 7476 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-route-controller-manager/route-controller-manager-6cd6978d68-qxk9v" podUID="9dd14fb1-f122-42c0-a253-606691936519"
Mar 20 08:35:28.005254 master-0 kubenswrapper[7476]: I0320 08:35:28.003782 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ff2dfe9d-2834-43cb-b093-0831b2b87131-metrics-tls\") pod \"dns-operator-9c5679d8f-xfns6\" (UID: \"ff2dfe9d-2834-43cb-b093-0831b2b87131\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-xfns6"
Mar 20 08:35:28.005254 master-0 kubenswrapper[7476]: I0320 08:35:28.003866 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/23003a2f-2053-47cc-8133-23eb886d4da0-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-j84r8\" (UID: \"23003a2f-2053-47cc-8133-23eb886d4da0\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-j84r8"
Mar 20 08:35:28.005254 master-0 kubenswrapper[7476]: I0320 08:35:28.003907 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/0e79950f-50a5-46ec-b836-7a35dcce2851-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-cgc9q\" (UID: \"0e79950f-50a5-46ec-b836-7a35dcce2851\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-cgc9q"
Mar 20 08:35:28.005254 master-0 kubenswrapper[7476]: I0320 08:35:28.003946 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3776fdb6-25a1-4e3d-bdd1-437c69af3a55-serving-cert\") pod \"cluster-version-operator-56d8475767-jtqd4\" (UID: \"3776fdb6-25a1-4e3d-bdd1-437c69af3a55\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-jtqd4"
Mar 20 08:35:28.005254 master-0 kubenswrapper[7476]: I0320 08:35:28.003980 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/00350ac7-b40a-4459-b94c-a37d7b613645-metrics-certs\") pod \"network-metrics-daemon-nfrth\" (UID: \"00350ac7-b40a-4459-b94c-a37d7b613645\") " pod="openshift-multus/network-metrics-daemon-nfrth"
Mar 20 08:35:28.005254 master-0 kubenswrapper[7476]: E0320 08:35:28.003994 7476 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Mar 20 08:35:28.005254 master-0 kubenswrapper[7476]: I0320 08:35:28.004028 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7ab32efc-7cc5-4e36-9c1c-05efb19914e2-srv-cert\") pod \"olm-operator-5c9796789-t926t\" (UID: \"7ab32efc-7cc5-4e36-9c1c-05efb19914e2\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-t926t"
Mar 20 08:35:28.005254 master-0 kubenswrapper[7476]: I0320 08:35:28.004083 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/f202273a-b111-46ce-b404-7e481d2c7ff9-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-b25f2\" (UID: \"f202273a-b111-46ce-b404-7e481d2c7ff9\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-b25f2"
Mar 20 08:35:28.005254 master-0 kubenswrapper[7476]: I0320 08:35:28.004120 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/22f85e98-eb36-46b2-ab5d-7c21e060cba5-metrics-tls\") pod \"ingress-operator-66b84d69b-dknxr\" (UID: \"22f85e98-eb36-46b2-ab5d-7c21e060cba5\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-dknxr"
Mar 20 08:35:28.005254 master-0 kubenswrapper[7476]: I0320 08:35:28.004179 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/74bebf0b-6727-4959-8239-a9389e630524-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-vhrdf\" (UID: \"74bebf0b-6727-4959-8239-a9389e630524\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-vhrdf"
Mar 20 08:35:28.005254 master-0 kubenswrapper[7476]: I0320 08:35:28.004235 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6d26f719-43b9-4c1c-9a54-ff800177db68-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-zxgdk\" (UID: \"6d26f719-43b9-4c1c-9a54-ff800177db68\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zxgdk"
Mar 20 08:35:28.005254 master-0 kubenswrapper[7476]: I0320 08:35:28.004294 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/57189f7c-5987-457d-a299-0a6b9bcb3e24-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-cg8qr\" (UID: \"57189f7c-5987-457d-a299-0a6b9bcb3e24\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-cg8qr"
Mar 20 08:35:28.005254 master-0 kubenswrapper[7476]: I0320 08:35:28.004338 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f202273a-b111-46ce-b404-7e481d2c7ff9-cert\") pod \"cluster-baremetal-operator-6f69995874-b25f2\" (UID: \"f202273a-b111-46ce-b404-7e481d2c7ff9\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-b25f2"
Mar 20 08:35:28.005254 master-0 kubenswrapper[7476]: I0320 08:35:28.004375 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/5707066a-bd66-41bc-8cea-cff1630ab5ee-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-6vgt6\" (UID: \"5707066a-bd66-41bc-8cea-cff1630ab5ee\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-6vgt6"
Mar 20 08:35:28.005254 master-0 kubenswrapper[7476]: I0320 08:35:28.004413 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/6d26f719-43b9-4c1c-9a54-ff800177db68-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-zxgdk\" (UID: \"6d26f719-43b9-4c1c-9a54-ff800177db68\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zxgdk"
Mar 20 08:35:28.005254 master-0 kubenswrapper[7476]: I0320 08:35:28.004456 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9ce482dc-d0ac-40bc-9058-a1cfdc81575e-srv-cert\") pod \"catalog-operator-68f85b4d6c-hdw98\" (UID: \"9ce482dc-d0ac-40bc-9058-a1cfdc81575e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hdw98"
Mar 20 08:35:28.005254 master-0 kubenswrapper[7476]: E0320 08:35:28.004489 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ff2dfe9d-2834-43cb-b093-0831b2b87131-metrics-tls podName:ff2dfe9d-2834-43cb-b093-0831b2b87131 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:36.004464448 +0000 UTC m=+16.973233064 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/ff2dfe9d-2834-43cb-b093-0831b2b87131-metrics-tls") pod "dns-operator-9c5679d8f-xfns6" (UID: "ff2dfe9d-2834-43cb-b093-0831b2b87131") : secret "metrics-tls" not found
Mar 20 08:35:28.005254 master-0 kubenswrapper[7476]: E0320 08:35:28.004548 7476 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Mar 20 08:35:28.005254 master-0 kubenswrapper[7476]: E0320 08:35:28.004593 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23003a2f-2053-47cc-8133-23eb886d4da0-marketplace-operator-metrics podName:23003a2f-2053-47cc-8133-23eb886d4da0 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:36.004582931 +0000 UTC m=+16.973351567 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/23003a2f-2053-47cc-8133-23eb886d4da0-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-j84r8" (UID: "23003a2f-2053-47cc-8133-23eb886d4da0") : secret "marketplace-operator-metrics" not found
Mar 20 08:35:28.005254 master-0 kubenswrapper[7476]: E0320 08:35:28.004645 7476 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Mar 20 08:35:28.005254 master-0 kubenswrapper[7476]: E0320 08:35:28.004675 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0e79950f-50a5-46ec-b836-7a35dcce2851-package-server-manager-serving-cert podName:0e79950f-50a5-46ec-b836-7a35dcce2851 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:36.004665744 +0000 UTC m=+16.973434400 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/0e79950f-50a5-46ec-b836-7a35dcce2851-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-cgc9q" (UID: "0e79950f-50a5-46ec-b836-7a35dcce2851") : secret "package-server-manager-serving-cert" not found
Mar 20 08:35:28.005254 master-0 kubenswrapper[7476]: E0320 08:35:28.004674 7476 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found
Mar 20 08:35:28.005254 master-0 kubenswrapper[7476]: E0320 08:35:28.004709 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ce482dc-d0ac-40bc-9058-a1cfdc81575e-srv-cert podName:9ce482dc-d0ac-40bc-9058-a1cfdc81575e nodeName:}" failed. No retries permitted until 2026-03-20 08:35:36.004702185 +0000 UTC m=+16.973470711 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/9ce482dc-d0ac-40bc-9058-a1cfdc81575e-srv-cert") pod "catalog-operator-68f85b4d6c-hdw98" (UID: "9ce482dc-d0ac-40bc-9058-a1cfdc81575e") : secret "catalog-operator-serving-cert" not found
Mar 20 08:35:28.005254 master-0 kubenswrapper[7476]: E0320 08:35:28.004742 7476 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Mar 20 08:35:28.005254 master-0 kubenswrapper[7476]: E0320 08:35:28.004755 7476 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 20 08:35:28.005254 master-0 kubenswrapper[7476]: E0320 08:35:28.004779 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3776fdb6-25a1-4e3d-bdd1-437c69af3a55-serving-cert podName:3776fdb6-25a1-4e3d-bdd1-437c69af3a55 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:36.004771346 +0000 UTC m=+16.973540022 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/3776fdb6-25a1-4e3d-bdd1-437c69af3a55-serving-cert") pod "cluster-version-operator-56d8475767-jtqd4" (UID: "3776fdb6-25a1-4e3d-bdd1-437c69af3a55") : secret "cluster-version-operator-serving-cert" not found
Mar 20 08:35:28.005254 master-0 kubenswrapper[7476]: E0320 08:35:28.004815 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/22f85e98-eb36-46b2-ab5d-7c21e060cba5-metrics-tls podName:22f85e98-eb36-46b2-ab5d-7c21e060cba5 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:36.004790947 +0000 UTC m=+16.973559503 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/22f85e98-eb36-46b2-ab5d-7c21e060cba5-metrics-tls") pod "ingress-operator-66b84d69b-dknxr" (UID: "22f85e98-eb36-46b2-ab5d-7c21e060cba5") : secret "metrics-tls" not found
Mar 20 08:35:28.005254 master-0 kubenswrapper[7476]: E0320 08:35:28.004823 7476 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found
Mar 20 08:35:28.005254 master-0 kubenswrapper[7476]: E0320 08:35:28.004849 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/00350ac7-b40a-4459-b94c-a37d7b613645-metrics-certs podName:00350ac7-b40a-4459-b94c-a37d7b613645 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:36.004839528 +0000 UTC m=+16.973608194 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/00350ac7-b40a-4459-b94c-a37d7b613645-metrics-certs") pod "network-metrics-daemon-nfrth" (UID: "00350ac7-b40a-4459-b94c-a37d7b613645") : secret "metrics-daemon-secret" not found
Mar 20 08:35:28.005254 master-0 kubenswrapper[7476]: E0320 08:35:28.004883 7476 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Mar 20 08:35:28.005254 master-0 kubenswrapper[7476]: E0320 08:35:28.004895 7476 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Mar 20 08:35:28.005254 master-0 kubenswrapper[7476]: E0320 08:35:28.004923 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74bebf0b-6727-4959-8239-a9389e630524-webhook-certs podName:74bebf0b-6727-4959-8239-a9389e630524 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:36.00490901 +0000 UTC m=+16.973677566 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/74bebf0b-6727-4959-8239-a9389e630524-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-vhrdf" (UID: "74bebf0b-6727-4959-8239-a9389e630524") : secret "multus-admission-controller-secret" not found
Mar 20 08:35:28.005254 master-0 kubenswrapper[7476]: E0320 08:35:28.004946 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7ab32efc-7cc5-4e36-9c1c-05efb19914e2-srv-cert podName:7ab32efc-7cc5-4e36-9c1c-05efb19914e2 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:36.00493541 +0000 UTC m=+16.973703976 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/7ab32efc-7cc5-4e36-9c1c-05efb19914e2-srv-cert") pod "olm-operator-5c9796789-t926t" (UID: "7ab32efc-7cc5-4e36-9c1c-05efb19914e2") : secret "olm-operator-serving-cert" not found
Mar 20 08:35:28.005254 master-0 kubenswrapper[7476]: E0320 08:35:28.004949 7476 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: secret "cluster-baremetal-operator-tls" not found
Mar 20 08:35:28.005254 master-0 kubenswrapper[7476]: E0320 08:35:28.004992 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f202273a-b111-46ce-b404-7e481d2c7ff9-cluster-baremetal-operator-tls podName:f202273a-b111-46ce-b404-7e481d2c7ff9 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:36.004980642 +0000 UTC m=+16.973749198 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/f202273a-b111-46ce-b404-7e481d2c7ff9-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-6f69995874-b25f2" (UID: "f202273a-b111-46ce-b404-7e481d2c7ff9") : secret "cluster-baremetal-operator-tls" not found
Mar 20 08:35:28.005254 master-0 kubenswrapper[7476]: E0320 08:35:28.004994 7476 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found
Mar 20 08:35:28.005254 master-0 kubenswrapper[7476]: E0320 08:35:28.005031 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f202273a-b111-46ce-b404-7e481d2c7ff9-cert podName:f202273a-b111-46ce-b404-7e481d2c7ff9 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:36.005022153 +0000 UTC m=+16.973790719 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f202273a-b111-46ce-b404-7e481d2c7ff9-cert") pod "cluster-baremetal-operator-6f69995874-b25f2" (UID: "f202273a-b111-46ce-b404-7e481d2c7ff9") : secret "cluster-baremetal-webhook-server-cert" not found
Mar 20 08:35:28.005254 master-0 kubenswrapper[7476]: E0320 08:35:28.005032 7476 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Mar 20 08:35:28.005254 master-0 kubenswrapper[7476]: E0320 08:35:28.005069 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d26f719-43b9-4c1c-9a54-ff800177db68-apiservice-cert podName:6d26f719-43b9-4c1c-9a54-ff800177db68 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:36.005060454 +0000 UTC m=+16.973829010 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/6d26f719-43b9-4c1c-9a54-ff800177db68-apiservice-cert") pod "cluster-node-tuning-operator-598fbc5f8f-zxgdk" (UID: "6d26f719-43b9-4c1c-9a54-ff800177db68") : secret "performance-addon-operator-webhook-cert" not found
Mar 20 08:35:28.005254 master-0 kubenswrapper[7476]: E0320 08:35:28.005070 7476 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Mar 20 08:35:28.005254 master-0 kubenswrapper[7476]: E0320 08:35:28.005104 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/57189f7c-5987-457d-a299-0a6b9bcb3e24-image-registry-operator-tls podName:57189f7c-5987-457d-a299-0a6b9bcb3e24 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:36.005095865 +0000 UTC m=+16.973864521 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/57189f7c-5987-457d-a299-0a6b9bcb3e24-image-registry-operator-tls") pod "cluster-image-registry-operator-5549dc66cb-cg8qr" (UID: "57189f7c-5987-457d-a299-0a6b9bcb3e24") : secret "image-registry-operator-tls" not found
Mar 20 08:35:28.005254 master-0 kubenswrapper[7476]: E0320 08:35:28.005144 7476 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Mar 20 08:35:28.005254 master-0 kubenswrapper[7476]: E0320 08:35:28.005153 7476 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Mar 20 08:35:28.005254 master-0 kubenswrapper[7476]: E0320 08:35:28.005165 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d26f719-43b9-4c1c-9a54-ff800177db68-node-tuning-operator-tls podName:6d26f719-43b9-4c1c-9a54-ff800177db68 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:36.005157866 +0000 UTC m=+16.973926532 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/6d26f719-43b9-4c1c-9a54-ff800177db68-node-tuning-operator-tls") pod "cluster-node-tuning-operator-598fbc5f8f-zxgdk" (UID: "6d26f719-43b9-4c1c-9a54-ff800177db68") : secret "node-tuning-operator-tls" not found
Mar 20 08:35:28.005254 master-0 kubenswrapper[7476]: E0320 08:35:28.005211 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5707066a-bd66-41bc-8cea-cff1630ab5ee-cluster-monitoring-operator-tls podName:5707066a-bd66-41bc-8cea-cff1630ab5ee nodeName:}" failed. No retries permitted until 2026-03-20 08:35:36.005192937 +0000 UTC m=+16.973961573 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/5707066a-bd66-41bc-8cea-cff1630ab5ee-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-6vgt6" (UID: "5707066a-bd66-41bc-8cea-cff1630ab5ee") : secret "cluster-monitoring-operator-tls" not found
Mar 20 08:35:28.314199 master-0 kubenswrapper[7476]: I0320 08:35:28.313473 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6539f0d-bbfe-47ea-9de4-6608fa7451fe-serving-cert\") pod \"controller-manager-f5df8899c-z9lwr\" (UID: \"f6539f0d-bbfe-47ea-9de4-6608fa7451fe\") " pod="openshift-controller-manager/controller-manager-f5df8899c-z9lwr"
Mar 20 08:35:28.314199 master-0 kubenswrapper[7476]: I0320 08:35:28.313984 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6539f0d-bbfe-47ea-9de4-6608fa7451fe-config\") pod \"controller-manager-f5df8899c-z9lwr\" (UID: \"f6539f0d-bbfe-47ea-9de4-6608fa7451fe\") " pod="openshift-controller-manager/controller-manager-f5df8899c-z9lwr"
Mar 20 08:35:28.314199 master-0 kubenswrapper[7476]: E0320 08:35:28.313767 7476 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found
Mar 20 08:35:28.314199 master-0 kubenswrapper[7476]: I0320 08:35:28.314044 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f6539f0d-bbfe-47ea-9de4-6608fa7451fe-client-ca\") pod \"controller-manager-f5df8899c-z9lwr\" (UID: \"f6539f0d-bbfe-47ea-9de4-6608fa7451fe\") " pod="openshift-controller-manager/controller-manager-f5df8899c-z9lwr"
Mar 20 08:35:28.314199 master-0 kubenswrapper[7476]: E0320 08:35:28.314120 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f6539f0d-bbfe-47ea-9de4-6608fa7451fe-serving-cert podName:f6539f0d-bbfe-47ea-9de4-6608fa7451fe nodeName:}" failed. No retries permitted until 2026-03-20 08:35:32.314092853 +0000 UTC m=+13.282861419 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/f6539f0d-bbfe-47ea-9de4-6608fa7451fe-serving-cert") pod "controller-manager-f5df8899c-z9lwr" (UID: "f6539f0d-bbfe-47ea-9de4-6608fa7451fe") : secret "serving-cert" not found
Mar 20 08:35:28.316258 master-0 kubenswrapper[7476]: I0320 08:35:28.314685 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f6539f0d-bbfe-47ea-9de4-6608fa7451fe-proxy-ca-bundles\") pod \"controller-manager-f5df8899c-z9lwr\" (UID: \"f6539f0d-bbfe-47ea-9de4-6608fa7451fe\") " pod="openshift-controller-manager/controller-manager-f5df8899c-z9lwr"
Mar 20 08:35:28.316258 master-0 kubenswrapper[7476]: I0320 08:35:28.314858 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6539f0d-bbfe-47ea-9de4-6608fa7451fe-config\") pod \"controller-manager-f5df8899c-z9lwr\" (UID: \"f6539f0d-bbfe-47ea-9de4-6608fa7451fe\") " pod="openshift-controller-manager/controller-manager-f5df8899c-z9lwr"
Mar 20 08:35:28.316472 master-0 kubenswrapper[7476]: E0320 08:35:28.316251 7476 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Mar 20 08:35:28.316472 master-0 kubenswrapper[7476]: E0320 08:35:28.316406 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f6539f0d-bbfe-47ea-9de4-6608fa7451fe-client-ca podName:f6539f0d-bbfe-47ea-9de4-6608fa7451fe nodeName:}" failed. No retries permitted until 2026-03-20 08:35:32.316371822 +0000 UTC m=+13.285140348 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/f6539f0d-bbfe-47ea-9de4-6608fa7451fe-client-ca") pod "controller-manager-f5df8899c-z9lwr" (UID: "f6539f0d-bbfe-47ea-9de4-6608fa7451fe") : configmap "client-ca" not found
Mar 20 08:35:28.317776 master-0 kubenswrapper[7476]: I0320 08:35:28.317720 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f6539f0d-bbfe-47ea-9de4-6608fa7451fe-proxy-ca-bundles\") pod \"controller-manager-f5df8899c-z9lwr\" (UID: \"f6539f0d-bbfe-47ea-9de4-6608fa7451fe\") " pod="openshift-controller-manager/controller-manager-f5df8899c-z9lwr"
Mar 20 08:35:28.378035 master-0 kubenswrapper[7476]: I0320 08:35:28.377892 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-79bc6b8d76-trbxh"]
Mar 20 08:35:28.422617 master-0 kubenswrapper[7476]: I0320 08:35:28.422506 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-25jrp" event={"ID":"3065e4b4-4493-41ce-b9d2-89315475f74f","Type":"ContainerStarted","Data":"11101104c4ad4a824dc013fe0f577cb2a24b3336015a3fb27c1b6da8054e07d4"}
Mar 20 08:35:28.426381 master-0 kubenswrapper[7476]: I0320 08:35:28.424089 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-25jrp"
Mar 20 08:35:28.427913 master-0 kubenswrapper[7476]: I0320 08:35:28.427873 7476 generic.go:334] "Generic (PLEG): container finished" podID="e9425526-9f51-4302-a19d-a8107f56c582" containerID="7eace203ad1dd45a1a683d8c3e7772a2d39b397eee68cf9a1c7862a15d7b007d" exitCode=0
Mar 20 08:35:28.427986 master-0 kubenswrapper[7476]: I0320 08:35:28.427944 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-gwtrl" event={"ID":"e9425526-9f51-4302-a19d-a8107f56c582","Type":"ContainerDied","Data":"7eace203ad1dd45a1a683d8c3e7772a2d39b397eee68cf9a1c7862a15d7b007d"}
Mar 20 08:35:28.435477 master-0 kubenswrapper[7476]: I0320 08:35:28.434797 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-gng67" event={"ID":"a2a3df6e-e327-4e97-b8f0-f2d6cdd1e5f9","Type":"ContainerStarted","Data":"3165ad3f4e3423cb37420a9aeda1215c8c5bbcc445272eb7b11a146edfa5a4f0"}
Mar 20 08:35:28.442371 master-0 kubenswrapper[7476]: I0320 08:35:28.436304 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-8487694857-ltk2p" event={"ID":"890a6c24-1dbb-4331-952b-5712ac00788e","Type":"ContainerStarted","Data":"a18a8566386d7a1543a333d653f930cacf853d0d35feb9b3f545b9c786a7f62d"}
Mar 20 08:35:28.442371 master-0 kubenswrapper[7476]: I0320 08:35:28.436313 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cd6978d68-qxk9v"
Mar 20 08:35:28.442371 master-0 kubenswrapper[7476]: I0320 08:35:28.436332 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f5df8899c-z9lwr"
Mar 20 08:35:28.458933 master-0 kubenswrapper[7476]: I0320 08:35:28.458589 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cd6978d68-qxk9v"
Mar 20 08:35:28.465094 master-0 kubenswrapper[7476]: I0320 08:35:28.464986 7476 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-gng67" podStartSLOduration=1.347372207 podStartE2EDuration="5.464966268s" podCreationTimestamp="2026-03-20 08:35:23 +0000 UTC" firstStartedPulling="2026-03-20 08:35:24.038632416 +0000 UTC m=+5.007400972" lastFinishedPulling="2026-03-20 08:35:28.156226467 +0000 UTC m=+9.124995033" observedRunningTime="2026-03-20 08:35:28.464031784 +0000 UTC m=+9.432800320" watchObservedRunningTime="2026-03-20 08:35:28.464966268 +0000 UTC m=+9.433734794"
Mar 20 08:35:28.479811 master-0 kubenswrapper[7476]: I0320 08:35:28.476572 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-f5df8899c-z9lwr"
Mar 20 08:35:28.518436 master-0 kubenswrapper[7476]: I0320 08:35:28.518394 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpfsh\" (UniqueName: \"kubernetes.io/projected/f6539f0d-bbfe-47ea-9de4-6608fa7451fe-kube-api-access-mpfsh\") pod \"f6539f0d-bbfe-47ea-9de4-6608fa7451fe\" (UID: \"f6539f0d-bbfe-47ea-9de4-6608fa7451fe\") "
Mar 20 08:35:28.518548 master-0 kubenswrapper[7476]: I0320 08:35:28.518460 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjrvg\" (UniqueName: \"kubernetes.io/projected/9dd14fb1-f122-42c0-a253-606691936519-kube-api-access-hjrvg\") pod \"9dd14fb1-f122-42c0-a253-606691936519\" (UID: \"9dd14fb1-f122-42c0-a253-606691936519\") "
Mar 20 08:35:28.518548 master-0 kubenswrapper[7476]: I0320 08:35:28.518499 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f6539f0d-bbfe-47ea-9de4-6608fa7451fe-proxy-ca-bundles\") pod \"f6539f0d-bbfe-47ea-9de4-6608fa7451fe\" (UID: \"f6539f0d-bbfe-47ea-9de4-6608fa7451fe\") "
Mar 20 08:35:28.518548 master-0 kubenswrapper[7476]: I0320 08:35:28.518545 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6539f0d-bbfe-47ea-9de4-6608fa7451fe-config\") pod \"f6539f0d-bbfe-47ea-9de4-6608fa7451fe\" (UID: \"f6539f0d-bbfe-47ea-9de4-6608fa7451fe\") "
Mar 20 08:35:28.519156 master-0 kubenswrapper[7476]: I0320 08:35:28.519123 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6539f0d-bbfe-47ea-9de4-6608fa7451fe-config" (OuterVolumeSpecName: "config") pod "f6539f0d-bbfe-47ea-9de4-6608fa7451fe" (UID: "f6539f0d-bbfe-47ea-9de4-6608fa7451fe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 08:35:28.519652 master-0 kubenswrapper[7476]: I0320 08:35:28.519620 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6539f0d-bbfe-47ea-9de4-6608fa7451fe-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "f6539f0d-bbfe-47ea-9de4-6608fa7451fe" (UID: "f6539f0d-bbfe-47ea-9de4-6608fa7451fe"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 08:35:28.520233 master-0 kubenswrapper[7476]: I0320 08:35:28.520080 7476 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f6539f0d-bbfe-47ea-9de4-6608fa7451fe-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\""
Mar 20 08:35:28.520233 master-0 kubenswrapper[7476]: I0320 08:35:28.520125 7476 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6539f0d-bbfe-47ea-9de4-6608fa7451fe-config\") on node \"master-0\" DevicePath \"\""
Mar 20 08:35:28.525556 master-0 kubenswrapper[7476]: I0320 08:35:28.525494 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6539f0d-bbfe-47ea-9de4-6608fa7451fe-kube-api-access-mpfsh" (OuterVolumeSpecName: "kube-api-access-mpfsh") pod "f6539f0d-bbfe-47ea-9de4-6608fa7451fe" (UID: "f6539f0d-bbfe-47ea-9de4-6608fa7451fe"). InnerVolumeSpecName "kube-api-access-mpfsh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:35:28.525693 master-0 kubenswrapper[7476]: I0320 08:35:28.525603 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9dd14fb1-f122-42c0-a253-606691936519-kube-api-access-hjrvg" (OuterVolumeSpecName: "kube-api-access-hjrvg") pod "9dd14fb1-f122-42c0-a253-606691936519" (UID: "9dd14fb1-f122-42c0-a253-606691936519"). InnerVolumeSpecName "kube-api-access-hjrvg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:35:28.620933 master-0 kubenswrapper[7476]: I0320 08:35:28.620859 7476 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mpfsh\" (UniqueName: \"kubernetes.io/projected/f6539f0d-bbfe-47ea-9de4-6608fa7451fe-kube-api-access-mpfsh\") on node \"master-0\" DevicePath \"\""
Mar 20 08:35:28.620933 master-0 kubenswrapper[7476]: I0320 08:35:28.620921 7476 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hjrvg\" (UniqueName: \"kubernetes.io/projected/9dd14fb1-f122-42c0-a253-606691936519-kube-api-access-hjrvg\") on node \"master-0\" DevicePath \"\""
Mar 20 08:35:28.751034 master-0 kubenswrapper[7476]: I0320 08:35:28.750915 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 20 08:35:28.751034 master-0 kubenswrapper[7476]: I0320 08:35:28.751009 7476 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 20 08:35:28.756981 master-0 kubenswrapper[7476]: I0320 08:35:28.756919 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 20 08:35:28.924739 master-0 kubenswrapper[7476]: I0320 08:35:28.924677 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9dd14fb1-f122-42c0-a253-606691936519-serving-cert\") pod \"route-controller-manager-6cd6978d68-qxk9v\" (UID: \"9dd14fb1-f122-42c0-a253-606691936519\") " pod="openshift-route-controller-manager/route-controller-manager-6cd6978d68-qxk9v"
Mar 20 08:35:28.924905 master-0 kubenswrapper[7476]: I0320 08:35:28.924755 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9dd14fb1-f122-42c0-a253-606691936519-client-ca\") pod \"route-controller-manager-6cd6978d68-qxk9v\" (UID: \"9dd14fb1-f122-42c0-a253-606691936519\") " pod="openshift-route-controller-manager/route-controller-manager-6cd6978d68-qxk9v"
Mar 20 08:35:28.924959 master-0 kubenswrapper[7476]: E0320 08:35:28.924890 7476 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found
Mar 20 08:35:28.924991 master-0 kubenswrapper[7476]: E0320 08:35:28.924973 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9dd14fb1-f122-42c0-a253-606691936519-serving-cert podName:9dd14fb1-f122-42c0-a253-606691936519 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:32.924955625 +0000 UTC m=+13.893724151 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9dd14fb1-f122-42c0-a253-606691936519-serving-cert") pod "route-controller-manager-6cd6978d68-qxk9v" (UID: "9dd14fb1-f122-42c0-a253-606691936519") : secret "serving-cert" not found
Mar 20 08:35:28.925055 master-0 kubenswrapper[7476]: I0320 08:35:28.925025 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9dd14fb1-f122-42c0-a253-606691936519-config\") pod \"route-controller-manager-6cd6978d68-qxk9v\" (UID: \"9dd14fb1-f122-42c0-a253-606691936519\") " pod="openshift-route-controller-manager/route-controller-manager-6cd6978d68-qxk9v"
Mar 20 08:35:28.925300 master-0 kubenswrapper[7476]: E0320 08:35:28.925031 7476 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found
Mar 20 08:35:28.925338 master-0 kubenswrapper[7476]: E0320 08:35:28.925317 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9dd14fb1-f122-42c0-a253-606691936519-client-ca podName:9dd14fb1-f122-42c0-a253-606691936519 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:32.925308164 +0000 UTC m=+13.894076690 (durationBeforeRetry 4s).
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/9dd14fb1-f122-42c0-a253-606691936519-client-ca") pod "route-controller-manager-6cd6978d68-qxk9v" (UID: "9dd14fb1-f122-42c0-a253-606691936519") : configmap "client-ca" not found Mar 20 08:35:28.925845 master-0 kubenswrapper[7476]: I0320 08:35:28.925806 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9dd14fb1-f122-42c0-a253-606691936519-config\") pod \"route-controller-manager-6cd6978d68-qxk9v\" (UID: \"9dd14fb1-f122-42c0-a253-606691936519\") " pod="openshift-route-controller-manager/route-controller-manager-6cd6978d68-qxk9v" Mar 20 08:35:29.026554 master-0 kubenswrapper[7476]: I0320 08:35:29.026426 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9dd14fb1-f122-42c0-a253-606691936519-config\") pod \"9dd14fb1-f122-42c0-a253-606691936519\" (UID: \"9dd14fb1-f122-42c0-a253-606691936519\") " Mar 20 08:35:29.026916 master-0 kubenswrapper[7476]: I0320 08:35:29.026870 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9dd14fb1-f122-42c0-a253-606691936519-config" (OuterVolumeSpecName: "config") pod "9dd14fb1-f122-42c0-a253-606691936519" (UID: "9dd14fb1-f122-42c0-a253-606691936519"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:35:29.027097 master-0 kubenswrapper[7476]: I0320 08:35:29.027072 7476 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9dd14fb1-f122-42c0-a253-606691936519-config\") on node \"master-0\" DevicePath \"\"" Mar 20 08:35:29.443288 master-0 kubenswrapper[7476]: I0320 08:35:29.442800 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-8487694857-ltk2p" event={"ID":"890a6c24-1dbb-4331-952b-5712ac00788e","Type":"ContainerStarted","Data":"056a248e264cdde362e6f3914beaa4b2d0c7a756342a561e27e19d7c2d2f2578"} Mar 20 08:35:29.456734 master-0 kubenswrapper[7476]: I0320 08:35:29.455854 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-79bc6b8d76-trbxh" event={"ID":"1746482a-d1a3-4eac-8bc9-643b6af75163","Type":"ContainerStarted","Data":"a1b4eceb0f2328786d0d5d45adc257b068090b4a532ca9b2a6eb0db19b8abba4"} Mar 20 08:35:29.456734 master-0 kubenswrapper[7476]: I0320 08:35:29.455922 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-79bc6b8d76-trbxh" event={"ID":"1746482a-d1a3-4eac-8bc9-643b6af75163","Type":"ContainerStarted","Data":"b9cc3cdb71ca86a1d6eb5065d5ba830d901adeb7f41acd8f39de6f44ff6001ce"} Mar 20 08:35:29.456734 master-0 kubenswrapper[7476]: I0320 08:35:29.455983 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cd6978d68-qxk9v" Mar 20 08:35:29.456969 master-0 kubenswrapper[7476]: I0320 08:35:29.456837 7476 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-f5df8899c-z9lwr" Mar 20 08:35:29.459208 master-0 kubenswrapper[7476]: I0320 08:35:29.459124 7476 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-8487694857-ltk2p" podStartSLOduration=2.011124592 podStartE2EDuration="5.45910114s" podCreationTimestamp="2026-03-20 08:35:24 +0000 UTC" firstStartedPulling="2026-03-20 08:35:24.769382861 +0000 UTC m=+5.738151377" lastFinishedPulling="2026-03-20 08:35:28.217359399 +0000 UTC m=+9.186127925" observedRunningTime="2026-03-20 08:35:29.458939066 +0000 UTC m=+10.427707612" watchObservedRunningTime="2026-03-20 08:35:29.45910114 +0000 UTC m=+10.427869666" Mar 20 08:35:29.518323 master-0 kubenswrapper[7476]: I0320 08:35:29.517522 7476 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-79bc6b8d76-trbxh" podStartSLOduration=3.517489752 podStartE2EDuration="3.517489752s" podCreationTimestamp="2026-03-20 08:35:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:35:29.491621952 +0000 UTC m=+10.460390498" watchObservedRunningTime="2026-03-20 08:35:29.517489752 +0000 UTC m=+10.486258298" Mar 20 08:35:29.530286 master-0 kubenswrapper[7476]: I0320 08:35:29.527371 7476 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-f5df8899c-z9lwr"] Mar 20 08:35:29.550359 master-0 kubenswrapper[7476]: I0320 08:35:29.536659 7476 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-f5df8899c-z9lwr"] Mar 20 08:35:29.571286 master-0 kubenswrapper[7476]: I0320 08:35:29.561841 7476 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cd6978d68-qxk9v"] Mar 20 08:35:29.589410 master-0 
kubenswrapper[7476]: I0320 08:35:29.580158 7476 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cd6978d68-qxk9v"] Mar 20 08:35:29.636343 master-0 kubenswrapper[7476]: I0320 08:35:29.635134 7476 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9dd14fb1-f122-42c0-a253-606691936519-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 20 08:35:29.636343 master-0 kubenswrapper[7476]: I0320 08:35:29.635189 7476 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f6539f0d-bbfe-47ea-9de4-6608fa7451fe-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 20 08:35:29.636343 master-0 kubenswrapper[7476]: I0320 08:35:29.635199 7476 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9dd14fb1-f122-42c0-a253-606691936519-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 20 08:35:29.636343 master-0 kubenswrapper[7476]: I0320 08:35:29.635208 7476 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6539f0d-bbfe-47ea-9de4-6608fa7451fe-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 20 08:35:29.874398 master-0 kubenswrapper[7476]: I0320 08:35:29.874336 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-j9jjm" Mar 20 08:35:30.160602 master-0 kubenswrapper[7476]: I0320 08:35:30.160489 7476 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-c9f7c448d-k8dq9"] Mar 20 08:35:30.161042 master-0 kubenswrapper[7476]: I0320 08:35:30.161020 7476 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-c9f7c448d-k8dq9" Mar 20 08:35:30.165033 master-0 kubenswrapper[7476]: I0320 08:35:30.163416 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 20 08:35:30.165033 master-0 kubenswrapper[7476]: I0320 08:35:30.163737 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 20 08:35:30.165033 master-0 kubenswrapper[7476]: I0320 08:35:30.163893 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 20 08:35:30.165033 master-0 kubenswrapper[7476]: I0320 08:35:30.164096 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 20 08:35:30.165033 master-0 kubenswrapper[7476]: I0320 08:35:30.164311 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 20 08:35:30.171100 master-0 kubenswrapper[7476]: I0320 08:35:30.171073 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 20 08:35:30.179107 master-0 kubenswrapper[7476]: I0320 08:35:30.179059 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-c9f7c448d-k8dq9"] Mar 20 08:35:30.248047 master-0 kubenswrapper[7476]: I0320 08:35:30.247990 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxz2v\" (UniqueName: \"kubernetes.io/projected/76e0afae-e3d2-4eb1-825a-a8e5498e1d5c-kube-api-access-bxz2v\") pod \"controller-manager-c9f7c448d-k8dq9\" (UID: \"76e0afae-e3d2-4eb1-825a-a8e5498e1d5c\") " pod="openshift-controller-manager/controller-manager-c9f7c448d-k8dq9" Mar 20 08:35:30.248047 master-0 kubenswrapper[7476]: I0320 08:35:30.248044 7476 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76e0afae-e3d2-4eb1-825a-a8e5498e1d5c-serving-cert\") pod \"controller-manager-c9f7c448d-k8dq9\" (UID: \"76e0afae-e3d2-4eb1-825a-a8e5498e1d5c\") " pod="openshift-controller-manager/controller-manager-c9f7c448d-k8dq9" Mar 20 08:35:30.248364 master-0 kubenswrapper[7476]: I0320 08:35:30.248195 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/76e0afae-e3d2-4eb1-825a-a8e5498e1d5c-proxy-ca-bundles\") pod \"controller-manager-c9f7c448d-k8dq9\" (UID: \"76e0afae-e3d2-4eb1-825a-a8e5498e1d5c\") " pod="openshift-controller-manager/controller-manager-c9f7c448d-k8dq9" Mar 20 08:35:30.248428 master-0 kubenswrapper[7476]: I0320 08:35:30.248349 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76e0afae-e3d2-4eb1-825a-a8e5498e1d5c-config\") pod \"controller-manager-c9f7c448d-k8dq9\" (UID: \"76e0afae-e3d2-4eb1-825a-a8e5498e1d5c\") " pod="openshift-controller-manager/controller-manager-c9f7c448d-k8dq9" Mar 20 08:35:30.248428 master-0 kubenswrapper[7476]: I0320 08:35:30.248409 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/76e0afae-e3d2-4eb1-825a-a8e5498e1d5c-client-ca\") pod \"controller-manager-c9f7c448d-k8dq9\" (UID: \"76e0afae-e3d2-4eb1-825a-a8e5498e1d5c\") " pod="openshift-controller-manager/controller-manager-c9f7c448d-k8dq9" Mar 20 08:35:30.349113 master-0 kubenswrapper[7476]: I0320 08:35:30.349054 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bxz2v\" (UniqueName: \"kubernetes.io/projected/76e0afae-e3d2-4eb1-825a-a8e5498e1d5c-kube-api-access-bxz2v\") pod 
\"controller-manager-c9f7c448d-k8dq9\" (UID: \"76e0afae-e3d2-4eb1-825a-a8e5498e1d5c\") " pod="openshift-controller-manager/controller-manager-c9f7c448d-k8dq9" Mar 20 08:35:30.349113 master-0 kubenswrapper[7476]: I0320 08:35:30.349105 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76e0afae-e3d2-4eb1-825a-a8e5498e1d5c-serving-cert\") pod \"controller-manager-c9f7c448d-k8dq9\" (UID: \"76e0afae-e3d2-4eb1-825a-a8e5498e1d5c\") " pod="openshift-controller-manager/controller-manager-c9f7c448d-k8dq9" Mar 20 08:35:30.349404 master-0 kubenswrapper[7476]: E0320 08:35:30.349208 7476 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 20 08:35:30.349404 master-0 kubenswrapper[7476]: E0320 08:35:30.349256 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/76e0afae-e3d2-4eb1-825a-a8e5498e1d5c-serving-cert podName:76e0afae-e3d2-4eb1-825a-a8e5498e1d5c nodeName:}" failed. No retries permitted until 2026-03-20 08:35:30.849241761 +0000 UTC m=+11.818010287 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/76e0afae-e3d2-4eb1-825a-a8e5498e1d5c-serving-cert") pod "controller-manager-c9f7c448d-k8dq9" (UID: "76e0afae-e3d2-4eb1-825a-a8e5498e1d5c") : secret "serving-cert" not found Mar 20 08:35:30.349566 master-0 kubenswrapper[7476]: I0320 08:35:30.349501 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/76e0afae-e3d2-4eb1-825a-a8e5498e1d5c-proxy-ca-bundles\") pod \"controller-manager-c9f7c448d-k8dq9\" (UID: \"76e0afae-e3d2-4eb1-825a-a8e5498e1d5c\") " pod="openshift-controller-manager/controller-manager-c9f7c448d-k8dq9" Mar 20 08:35:30.349622 master-0 kubenswrapper[7476]: I0320 08:35:30.349588 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76e0afae-e3d2-4eb1-825a-a8e5498e1d5c-config\") pod \"controller-manager-c9f7c448d-k8dq9\" (UID: \"76e0afae-e3d2-4eb1-825a-a8e5498e1d5c\") " pod="openshift-controller-manager/controller-manager-c9f7c448d-k8dq9" Mar 20 08:35:30.349622 master-0 kubenswrapper[7476]: I0320 08:35:30.349612 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/76e0afae-e3d2-4eb1-825a-a8e5498e1d5c-client-ca\") pod \"controller-manager-c9f7c448d-k8dq9\" (UID: \"76e0afae-e3d2-4eb1-825a-a8e5498e1d5c\") " pod="openshift-controller-manager/controller-manager-c9f7c448d-k8dq9" Mar 20 08:35:30.349882 master-0 kubenswrapper[7476]: E0320 08:35:30.349848 7476 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 20 08:35:30.349945 master-0 kubenswrapper[7476]: E0320 08:35:30.349927 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/76e0afae-e3d2-4eb1-825a-a8e5498e1d5c-client-ca podName:76e0afae-e3d2-4eb1-825a-a8e5498e1d5c nodeName:}" 
failed. No retries permitted until 2026-03-20 08:35:30.849908608 +0000 UTC m=+11.818677144 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/76e0afae-e3d2-4eb1-825a-a8e5498e1d5c-client-ca") pod "controller-manager-c9f7c448d-k8dq9" (UID: "76e0afae-e3d2-4eb1-825a-a8e5498e1d5c") : configmap "client-ca" not found Mar 20 08:35:30.352409 master-0 kubenswrapper[7476]: I0320 08:35:30.350877 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/76e0afae-e3d2-4eb1-825a-a8e5498e1d5c-proxy-ca-bundles\") pod \"controller-manager-c9f7c448d-k8dq9\" (UID: \"76e0afae-e3d2-4eb1-825a-a8e5498e1d5c\") " pod="openshift-controller-manager/controller-manager-c9f7c448d-k8dq9" Mar 20 08:35:30.352409 master-0 kubenswrapper[7476]: I0320 08:35:30.351038 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76e0afae-e3d2-4eb1-825a-a8e5498e1d5c-config\") pod \"controller-manager-c9f7c448d-k8dq9\" (UID: \"76e0afae-e3d2-4eb1-825a-a8e5498e1d5c\") " pod="openshift-controller-manager/controller-manager-c9f7c448d-k8dq9" Mar 20 08:35:30.412379 master-0 kubenswrapper[7476]: I0320 08:35:30.408626 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bxz2v\" (UniqueName: \"kubernetes.io/projected/76e0afae-e3d2-4eb1-825a-a8e5498e1d5c-kube-api-access-bxz2v\") pod \"controller-manager-c9f7c448d-k8dq9\" (UID: \"76e0afae-e3d2-4eb1-825a-a8e5498e1d5c\") " pod="openshift-controller-manager/controller-manager-c9f7c448d-k8dq9" Mar 20 08:35:30.711347 master-0 kubenswrapper[7476]: I0320 08:35:30.710990 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:35:30.711347 master-0 kubenswrapper[7476]: I0320 08:35:30.711170 7476 prober_manager.go:312] "Failed to trigger a manual run" 
probe="Readiness" Mar 20 08:35:30.737286 master-0 kubenswrapper[7476]: I0320 08:35:30.737130 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:35:30.860570 master-0 kubenswrapper[7476]: I0320 08:35:30.860503 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/76e0afae-e3d2-4eb1-825a-a8e5498e1d5c-client-ca\") pod \"controller-manager-c9f7c448d-k8dq9\" (UID: \"76e0afae-e3d2-4eb1-825a-a8e5498e1d5c\") " pod="openshift-controller-manager/controller-manager-c9f7c448d-k8dq9" Mar 20 08:35:30.860760 master-0 kubenswrapper[7476]: E0320 08:35:30.860703 7476 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 20 08:35:30.860797 master-0 kubenswrapper[7476]: I0320 08:35:30.860780 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76e0afae-e3d2-4eb1-825a-a8e5498e1d5c-serving-cert\") pod \"controller-manager-c9f7c448d-k8dq9\" (UID: \"76e0afae-e3d2-4eb1-825a-a8e5498e1d5c\") " pod="openshift-controller-manager/controller-manager-c9f7c448d-k8dq9" Mar 20 08:35:30.860836 master-0 kubenswrapper[7476]: E0320 08:35:30.860796 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/76e0afae-e3d2-4eb1-825a-a8e5498e1d5c-client-ca podName:76e0afae-e3d2-4eb1-825a-a8e5498e1d5c nodeName:}" failed. No retries permitted until 2026-03-20 08:35:31.860775212 +0000 UTC m=+12.829543848 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/76e0afae-e3d2-4eb1-825a-a8e5498e1d5c-client-ca") pod "controller-manager-c9f7c448d-k8dq9" (UID: "76e0afae-e3d2-4eb1-825a-a8e5498e1d5c") : configmap "client-ca" not found Mar 20 08:35:30.860934 master-0 kubenswrapper[7476]: E0320 08:35:30.860895 7476 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 20 08:35:30.860984 master-0 kubenswrapper[7476]: E0320 08:35:30.860976 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/76e0afae-e3d2-4eb1-825a-a8e5498e1d5c-serving-cert podName:76e0afae-e3d2-4eb1-825a-a8e5498e1d5c nodeName:}" failed. No retries permitted until 2026-03-20 08:35:31.860958646 +0000 UTC m=+12.829727182 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/76e0afae-e3d2-4eb1-825a-a8e5498e1d5c-serving-cert") pod "controller-manager-c9f7c448d-k8dq9" (UID: "76e0afae-e3d2-4eb1-825a-a8e5498e1d5c") : secret "serving-cert" not found Mar 20 08:35:31.246692 master-0 kubenswrapper[7476]: I0320 08:35:31.246630 7476 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9dd14fb1-f122-42c0-a253-606691936519" path="/var/lib/kubelet/pods/9dd14fb1-f122-42c0-a253-606691936519/volumes" Mar 20 08:35:31.246973 master-0 kubenswrapper[7476]: I0320 08:35:31.246944 7476 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6539f0d-bbfe-47ea-9de4-6608fa7451fe" path="/var/lib/kubelet/pods/f6539f0d-bbfe-47ea-9de4-6608fa7451fe/volumes" Mar 20 08:35:31.874732 master-0 kubenswrapper[7476]: I0320 08:35:31.874636 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76e0afae-e3d2-4eb1-825a-a8e5498e1d5c-serving-cert\") pod \"controller-manager-c9f7c448d-k8dq9\" (UID: \"76e0afae-e3d2-4eb1-825a-a8e5498e1d5c\") " 
pod="openshift-controller-manager/controller-manager-c9f7c448d-k8dq9" Mar 20 08:35:31.875766 master-0 kubenswrapper[7476]: I0320 08:35:31.874771 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/76e0afae-e3d2-4eb1-825a-a8e5498e1d5c-client-ca\") pod \"controller-manager-c9f7c448d-k8dq9\" (UID: \"76e0afae-e3d2-4eb1-825a-a8e5498e1d5c\") " pod="openshift-controller-manager/controller-manager-c9f7c448d-k8dq9" Mar 20 08:35:31.875766 master-0 kubenswrapper[7476]: E0320 08:35:31.874896 7476 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 20 08:35:31.875766 master-0 kubenswrapper[7476]: E0320 08:35:31.874966 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/76e0afae-e3d2-4eb1-825a-a8e5498e1d5c-client-ca podName:76e0afae-e3d2-4eb1-825a-a8e5498e1d5c nodeName:}" failed. No retries permitted until 2026-03-20 08:35:33.874944383 +0000 UTC m=+14.843712919 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/76e0afae-e3d2-4eb1-825a-a8e5498e1d5c-client-ca") pod "controller-manager-c9f7c448d-k8dq9" (UID: "76e0afae-e3d2-4eb1-825a-a8e5498e1d5c") : configmap "client-ca" not found Mar 20 08:35:31.880467 master-0 kubenswrapper[7476]: I0320 08:35:31.880418 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76e0afae-e3d2-4eb1-825a-a8e5498e1d5c-serving-cert\") pod \"controller-manager-c9f7c448d-k8dq9\" (UID: \"76e0afae-e3d2-4eb1-825a-a8e5498e1d5c\") " pod="openshift-controller-manager/controller-manager-c9f7c448d-k8dq9" Mar 20 08:35:32.158043 master-0 kubenswrapper[7476]: I0320 08:35:32.157927 7476 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b966fd84d-tvkk6"] Mar 20 08:35:32.158946 master-0 kubenswrapper[7476]: I0320 08:35:32.158907 7476 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b966fd84d-tvkk6" Mar 20 08:35:32.165432 master-0 kubenswrapper[7476]: I0320 08:35:32.165370 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 20 08:35:32.165702 master-0 kubenswrapper[7476]: I0320 08:35:32.165681 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 20 08:35:32.167979 master-0 kubenswrapper[7476]: I0320 08:35:32.167941 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b966fd84d-tvkk6"] Mar 20 08:35:32.170183 master-0 kubenswrapper[7476]: I0320 08:35:32.170148 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 20 08:35:32.170476 master-0 kubenswrapper[7476]: I0320 08:35:32.170448 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 20 08:35:32.170836 master-0 kubenswrapper[7476]: I0320 08:35:32.170808 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 20 08:35:32.178948 master-0 kubenswrapper[7476]: I0320 08:35:32.178905 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44d1bab1-22a1-45f4-b722-afef91f56a31-config\") pod \"route-controller-manager-6b966fd84d-tvkk6\" (UID: \"44d1bab1-22a1-45f4-b722-afef91f56a31\") " pod="openshift-route-controller-manager/route-controller-manager-6b966fd84d-tvkk6" Mar 20 08:35:32.179117 master-0 kubenswrapper[7476]: I0320 08:35:32.178991 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/44d1bab1-22a1-45f4-b722-afef91f56a31-client-ca\") pod \"route-controller-manager-6b966fd84d-tvkk6\" (UID: \"44d1bab1-22a1-45f4-b722-afef91f56a31\") " pod="openshift-route-controller-manager/route-controller-manager-6b966fd84d-tvkk6" Mar 20 08:35:32.179117 master-0 kubenswrapper[7476]: I0320 08:35:32.179082 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmsql\" (UniqueName: \"kubernetes.io/projected/44d1bab1-22a1-45f4-b722-afef91f56a31-kube-api-access-rmsql\") pod \"route-controller-manager-6b966fd84d-tvkk6\" (UID: \"44d1bab1-22a1-45f4-b722-afef91f56a31\") " pod="openshift-route-controller-manager/route-controller-manager-6b966fd84d-tvkk6" Mar 20 08:35:32.179218 master-0 kubenswrapper[7476]: I0320 08:35:32.179192 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44d1bab1-22a1-45f4-b722-afef91f56a31-serving-cert\") pod \"route-controller-manager-6b966fd84d-tvkk6\" (UID: \"44d1bab1-22a1-45f4-b722-afef91f56a31\") " pod="openshift-route-controller-manager/route-controller-manager-6b966fd84d-tvkk6" Mar 20 08:35:32.280278 master-0 kubenswrapper[7476]: I0320 08:35:32.280189 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmsql\" (UniqueName: \"kubernetes.io/projected/44d1bab1-22a1-45f4-b722-afef91f56a31-kube-api-access-rmsql\") pod \"route-controller-manager-6b966fd84d-tvkk6\" (UID: \"44d1bab1-22a1-45f4-b722-afef91f56a31\") " pod="openshift-route-controller-manager/route-controller-manager-6b966fd84d-tvkk6" Mar 20 08:35:32.280537 master-0 kubenswrapper[7476]: I0320 08:35:32.280375 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44d1bab1-22a1-45f4-b722-afef91f56a31-serving-cert\") pod \"route-controller-manager-6b966fd84d-tvkk6\" (UID: 
\"44d1bab1-22a1-45f4-b722-afef91f56a31\") " pod="openshift-route-controller-manager/route-controller-manager-6b966fd84d-tvkk6" Mar 20 08:35:32.280537 master-0 kubenswrapper[7476]: I0320 08:35:32.280453 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44d1bab1-22a1-45f4-b722-afef91f56a31-config\") pod \"route-controller-manager-6b966fd84d-tvkk6\" (UID: \"44d1bab1-22a1-45f4-b722-afef91f56a31\") " pod="openshift-route-controller-manager/route-controller-manager-6b966fd84d-tvkk6" Mar 20 08:35:32.280537 master-0 kubenswrapper[7476]: I0320 08:35:32.280500 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/44d1bab1-22a1-45f4-b722-afef91f56a31-client-ca\") pod \"route-controller-manager-6b966fd84d-tvkk6\" (UID: \"44d1bab1-22a1-45f4-b722-afef91f56a31\") " pod="openshift-route-controller-manager/route-controller-manager-6b966fd84d-tvkk6" Mar 20 08:35:32.280664 master-0 kubenswrapper[7476]: E0320 08:35:32.280628 7476 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 20 08:35:32.280706 master-0 kubenswrapper[7476]: E0320 08:35:32.280680 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/44d1bab1-22a1-45f4-b722-afef91f56a31-client-ca podName:44d1bab1-22a1-45f4-b722-afef91f56a31 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:32.780666903 +0000 UTC m=+13.749435429 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/44d1bab1-22a1-45f4-b722-afef91f56a31-client-ca") pod "route-controller-manager-6b966fd84d-tvkk6" (UID: "44d1bab1-22a1-45f4-b722-afef91f56a31") : configmap "client-ca" not found Mar 20 08:35:32.281443 master-0 kubenswrapper[7476]: E0320 08:35:32.280957 7476 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 20 08:35:32.281443 master-0 kubenswrapper[7476]: E0320 08:35:32.280991 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44d1bab1-22a1-45f4-b722-afef91f56a31-serving-cert podName:44d1bab1-22a1-45f4-b722-afef91f56a31 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:32.780982973 +0000 UTC m=+13.749751499 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/44d1bab1-22a1-45f4-b722-afef91f56a31-serving-cert") pod "route-controller-manager-6b966fd84d-tvkk6" (UID: "44d1bab1-22a1-45f4-b722-afef91f56a31") : secret "serving-cert" not found Mar 20 08:35:32.282210 master-0 kubenswrapper[7476]: I0320 08:35:32.281905 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44d1bab1-22a1-45f4-b722-afef91f56a31-config\") pod \"route-controller-manager-6b966fd84d-tvkk6\" (UID: \"44d1bab1-22a1-45f4-b722-afef91f56a31\") " pod="openshift-route-controller-manager/route-controller-manager-6b966fd84d-tvkk6" Mar 20 08:35:32.339364 master-0 kubenswrapper[7476]: I0320 08:35:32.339317 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmsql\" (UniqueName: \"kubernetes.io/projected/44d1bab1-22a1-45f4-b722-afef91f56a31-kube-api-access-rmsql\") pod \"route-controller-manager-6b966fd84d-tvkk6\" (UID: \"44d1bab1-22a1-45f4-b722-afef91f56a31\") " 
pod="openshift-route-controller-manager/route-controller-manager-6b966fd84d-tvkk6" Mar 20 08:35:32.468130 master-0 kubenswrapper[7476]: I0320 08:35:32.467958 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-gwtrl" event={"ID":"e9425526-9f51-4302-a19d-a8107f56c582","Type":"ContainerStarted","Data":"ede2ef38ba8d0fe732989d57db50e82ff2ef33b1e7f1869b8d140d9c93969650"} Mar 20 08:35:32.785948 master-0 kubenswrapper[7476]: I0320 08:35:32.785790 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44d1bab1-22a1-45f4-b722-afef91f56a31-serving-cert\") pod \"route-controller-manager-6b966fd84d-tvkk6\" (UID: \"44d1bab1-22a1-45f4-b722-afef91f56a31\") " pod="openshift-route-controller-manager/route-controller-manager-6b966fd84d-tvkk6" Mar 20 08:35:32.786251 master-0 kubenswrapper[7476]: E0320 08:35:32.786019 7476 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 20 08:35:32.786251 master-0 kubenswrapper[7476]: E0320 08:35:32.786131 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44d1bab1-22a1-45f4-b722-afef91f56a31-serving-cert podName:44d1bab1-22a1-45f4-b722-afef91f56a31 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:33.786104317 +0000 UTC m=+14.754872843 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/44d1bab1-22a1-45f4-b722-afef91f56a31-serving-cert") pod "route-controller-manager-6b966fd84d-tvkk6" (UID: "44d1bab1-22a1-45f4-b722-afef91f56a31") : secret "serving-cert" not found Mar 20 08:35:32.786251 master-0 kubenswrapper[7476]: I0320 08:35:32.786125 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/44d1bab1-22a1-45f4-b722-afef91f56a31-client-ca\") pod \"route-controller-manager-6b966fd84d-tvkk6\" (UID: \"44d1bab1-22a1-45f4-b722-afef91f56a31\") " pod="openshift-route-controller-manager/route-controller-manager-6b966fd84d-tvkk6" Mar 20 08:35:32.786409 master-0 kubenswrapper[7476]: E0320 08:35:32.786383 7476 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 20 08:35:32.786468 master-0 kubenswrapper[7476]: E0320 08:35:32.786447 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/44d1bab1-22a1-45f4-b722-afef91f56a31-client-ca podName:44d1bab1-22a1-45f4-b722-afef91f56a31 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:33.786432945 +0000 UTC m=+14.755201691 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/44d1bab1-22a1-45f4-b722-afef91f56a31-client-ca") pod "route-controller-manager-6b966fd84d-tvkk6" (UID: "44d1bab1-22a1-45f4-b722-afef91f56a31") : configmap "client-ca" not found Mar 20 08:35:33.465765 master-0 kubenswrapper[7476]: I0320 08:35:33.465696 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-25jrp" Mar 20 08:35:33.473495 master-0 kubenswrapper[7476]: I0320 08:35:33.473446 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-vmwqt" event={"ID":"65157a9b-3df7-4cc1-a85a-a5dfa59921ad","Type":"ContainerStarted","Data":"59a8653cd7835805f3353ca3030def7794cc3d5df739fff211964fc11ce38845"} Mar 20 08:35:33.803017 master-0 kubenswrapper[7476]: I0320 08:35:33.802887 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/44d1bab1-22a1-45f4-b722-afef91f56a31-client-ca\") pod \"route-controller-manager-6b966fd84d-tvkk6\" (UID: \"44d1bab1-22a1-45f4-b722-afef91f56a31\") " pod="openshift-route-controller-manager/route-controller-manager-6b966fd84d-tvkk6" Mar 20 08:35:33.803222 master-0 kubenswrapper[7476]: E0320 08:35:33.803093 7476 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 20 08:35:33.803299 master-0 kubenswrapper[7476]: E0320 08:35:33.803242 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/44d1bab1-22a1-45f4-b722-afef91f56a31-client-ca podName:44d1bab1-22a1-45f4-b722-afef91f56a31 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:35.803210634 +0000 UTC m=+16.771979170 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/44d1bab1-22a1-45f4-b722-afef91f56a31-client-ca") pod "route-controller-manager-6b966fd84d-tvkk6" (UID: "44d1bab1-22a1-45f4-b722-afef91f56a31") : configmap "client-ca" not found Mar 20 08:35:33.803565 master-0 kubenswrapper[7476]: I0320 08:35:33.803521 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44d1bab1-22a1-45f4-b722-afef91f56a31-serving-cert\") pod \"route-controller-manager-6b966fd84d-tvkk6\" (UID: \"44d1bab1-22a1-45f4-b722-afef91f56a31\") " pod="openshift-route-controller-manager/route-controller-manager-6b966fd84d-tvkk6" Mar 20 08:35:33.803899 master-0 kubenswrapper[7476]: E0320 08:35:33.803825 7476 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 20 08:35:33.804021 master-0 kubenswrapper[7476]: E0320 08:35:33.803986 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44d1bab1-22a1-45f4-b722-afef91f56a31-serving-cert podName:44d1bab1-22a1-45f4-b722-afef91f56a31 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:35.803947663 +0000 UTC m=+16.772716429 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/44d1bab1-22a1-45f4-b722-afef91f56a31-serving-cert") pod "route-controller-manager-6b966fd84d-tvkk6" (UID: "44d1bab1-22a1-45f4-b722-afef91f56a31") : secret "serving-cert" not found Mar 20 08:35:33.905138 master-0 kubenswrapper[7476]: I0320 08:35:33.905058 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/76e0afae-e3d2-4eb1-825a-a8e5498e1d5c-client-ca\") pod \"controller-manager-c9f7c448d-k8dq9\" (UID: \"76e0afae-e3d2-4eb1-825a-a8e5498e1d5c\") " pod="openshift-controller-manager/controller-manager-c9f7c448d-k8dq9" Mar 20 08:35:33.905422 master-0 kubenswrapper[7476]: E0320 08:35:33.905283 7476 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 20 08:35:33.905422 master-0 kubenswrapper[7476]: E0320 08:35:33.905418 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/76e0afae-e3d2-4eb1-825a-a8e5498e1d5c-client-ca podName:76e0afae-e3d2-4eb1-825a-a8e5498e1d5c nodeName:}" failed. No retries permitted until 2026-03-20 08:35:37.905390688 +0000 UTC m=+18.874159214 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/76e0afae-e3d2-4eb1-825a-a8e5498e1d5c-client-ca") pod "controller-manager-c9f7c448d-k8dq9" (UID: "76e0afae-e3d2-4eb1-825a-a8e5498e1d5c") : configmap "client-ca" not found Mar 20 08:35:34.411462 master-0 kubenswrapper[7476]: I0320 08:35:34.411384 7476 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-25jrp container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.22:8443/healthz\": dial tcp 10.128.0.22:8443: connect: connection refused" start-of-body= Mar 20 08:35:34.411826 master-0 kubenswrapper[7476]: I0320 08:35:34.411498 7476 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-25jrp" podUID="3065e4b4-4493-41ce-b9d2-89315475f74f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.22:8443/healthz\": dial tcp 10.128.0.22:8443: connect: connection refused" Mar 20 08:35:34.479719 master-0 kubenswrapper[7476]: I0320 08:35:34.479634 7476 generic.go:334] "Generic (PLEG): container finished" podID="3065e4b4-4493-41ce-b9d2-89315475f74f" containerID="11101104c4ad4a824dc013fe0f577cb2a24b3336015a3fb27c1b6da8054e07d4" exitCode=0 Mar 20 08:35:34.480363 master-0 kubenswrapper[7476]: I0320 08:35:34.479746 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-25jrp" event={"ID":"3065e4b4-4493-41ce-b9d2-89315475f74f","Type":"ContainerDied","Data":"11101104c4ad4a824dc013fe0f577cb2a24b3336015a3fb27c1b6da8054e07d4"} Mar 20 08:35:34.480625 master-0 kubenswrapper[7476]: I0320 08:35:34.480582 7476 scope.go:117] "RemoveContainer" containerID="11101104c4ad4a824dc013fe0f577cb2a24b3336015a3fb27c1b6da8054e07d4" Mar 20 08:35:35.126869 master-0 kubenswrapper[7476]: I0320 08:35:35.126208 7476 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-apiserver/apiserver-555b9794f6-68k4f"] Mar 20 08:35:35.128292 master-0 kubenswrapper[7476]: I0320 08:35:35.127743 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-555b9794f6-68k4f" Mar 20 08:35:35.134777 master-0 kubenswrapper[7476]: I0320 08:35:35.134729 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-0" Mar 20 08:35:35.134966 master-0 kubenswrapper[7476]: I0320 08:35:35.134811 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Mar 20 08:35:35.136556 master-0 kubenswrapper[7476]: I0320 08:35:35.136502 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 20 08:35:35.136823 master-0 kubenswrapper[7476]: I0320 08:35:35.136769 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Mar 20 08:35:35.136943 master-0 kubenswrapper[7476]: I0320 08:35:35.136901 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-0" Mar 20 08:35:35.137074 master-0 kubenswrapper[7476]: I0320 08:35:35.137014 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 20 08:35:35.137160 master-0 kubenswrapper[7476]: I0320 08:35:35.137136 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 20 08:35:35.137198 master-0 kubenswrapper[7476]: I0320 08:35:35.137152 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 20 08:35:35.137367 master-0 kubenswrapper[7476]: I0320 08:35:35.137346 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Mar 20 08:35:35.145217 master-0 kubenswrapper[7476]: I0320 08:35:35.145149 7476 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-555b9794f6-68k4f"] Mar 20 08:35:35.145568 master-0 kubenswrapper[7476]: I0320 08:35:35.145537 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 20 08:35:35.229282 master-0 kubenswrapper[7476]: I0320 08:35:35.228977 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/9983fdac-91cb-4f06-b39d-9306adef4071-audit\") pod \"apiserver-555b9794f6-68k4f\" (UID: \"9983fdac-91cb-4f06-b39d-9306adef4071\") " pod="openshift-apiserver/apiserver-555b9794f6-68k4f" Mar 20 08:35:35.229282 master-0 kubenswrapper[7476]: I0320 08:35:35.229076 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9983fdac-91cb-4f06-b39d-9306adef4071-etcd-client\") pod \"apiserver-555b9794f6-68k4f\" (UID: \"9983fdac-91cb-4f06-b39d-9306adef4071\") " pod="openshift-apiserver/apiserver-555b9794f6-68k4f" Mar 20 08:35:35.229282 master-0 kubenswrapper[7476]: I0320 08:35:35.229138 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9983fdac-91cb-4f06-b39d-9306adef4071-etcd-serving-ca\") pod \"apiserver-555b9794f6-68k4f\" (UID: \"9983fdac-91cb-4f06-b39d-9306adef4071\") " pod="openshift-apiserver/apiserver-555b9794f6-68k4f" Mar 20 08:35:35.229282 master-0 kubenswrapper[7476]: I0320 08:35:35.229173 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9983fdac-91cb-4f06-b39d-9306adef4071-serving-cert\") pod \"apiserver-555b9794f6-68k4f\" (UID: \"9983fdac-91cb-4f06-b39d-9306adef4071\") " pod="openshift-apiserver/apiserver-555b9794f6-68k4f" Mar 20 08:35:35.229282 master-0 
kubenswrapper[7476]: I0320 08:35:35.229199 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9983fdac-91cb-4f06-b39d-9306adef4071-audit-dir\") pod \"apiserver-555b9794f6-68k4f\" (UID: \"9983fdac-91cb-4f06-b39d-9306adef4071\") " pod="openshift-apiserver/apiserver-555b9794f6-68k4f" Mar 20 08:35:35.229282 master-0 kubenswrapper[7476]: I0320 08:35:35.229220 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9983fdac-91cb-4f06-b39d-9306adef4071-config\") pod \"apiserver-555b9794f6-68k4f\" (UID: \"9983fdac-91cb-4f06-b39d-9306adef4071\") " pod="openshift-apiserver/apiserver-555b9794f6-68k4f" Mar 20 08:35:35.229644 master-0 kubenswrapper[7476]: I0320 08:35:35.229311 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9983fdac-91cb-4f06-b39d-9306adef4071-trusted-ca-bundle\") pod \"apiserver-555b9794f6-68k4f\" (UID: \"9983fdac-91cb-4f06-b39d-9306adef4071\") " pod="openshift-apiserver/apiserver-555b9794f6-68k4f" Mar 20 08:35:35.229644 master-0 kubenswrapper[7476]: I0320 08:35:35.229348 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8vg9\" (UniqueName: \"kubernetes.io/projected/9983fdac-91cb-4f06-b39d-9306adef4071-kube-api-access-v8vg9\") pod \"apiserver-555b9794f6-68k4f\" (UID: \"9983fdac-91cb-4f06-b39d-9306adef4071\") " pod="openshift-apiserver/apiserver-555b9794f6-68k4f" Mar 20 08:35:35.229644 master-0 kubenswrapper[7476]: I0320 08:35:35.229398 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9983fdac-91cb-4f06-b39d-9306adef4071-node-pullsecrets\") pod \"apiserver-555b9794f6-68k4f\" (UID: 
\"9983fdac-91cb-4f06-b39d-9306adef4071\") " pod="openshift-apiserver/apiserver-555b9794f6-68k4f" Mar 20 08:35:35.229644 master-0 kubenswrapper[7476]: I0320 08:35:35.229498 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9983fdac-91cb-4f06-b39d-9306adef4071-encryption-config\") pod \"apiserver-555b9794f6-68k4f\" (UID: \"9983fdac-91cb-4f06-b39d-9306adef4071\") " pod="openshift-apiserver/apiserver-555b9794f6-68k4f" Mar 20 08:35:35.229644 master-0 kubenswrapper[7476]: I0320 08:35:35.229518 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/9983fdac-91cb-4f06-b39d-9306adef4071-image-import-ca\") pod \"apiserver-555b9794f6-68k4f\" (UID: \"9983fdac-91cb-4f06-b39d-9306adef4071\") " pod="openshift-apiserver/apiserver-555b9794f6-68k4f" Mar 20 08:35:35.330919 master-0 kubenswrapper[7476]: I0320 08:35:35.330837 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8vg9\" (UniqueName: \"kubernetes.io/projected/9983fdac-91cb-4f06-b39d-9306adef4071-kube-api-access-v8vg9\") pod \"apiserver-555b9794f6-68k4f\" (UID: \"9983fdac-91cb-4f06-b39d-9306adef4071\") " pod="openshift-apiserver/apiserver-555b9794f6-68k4f" Mar 20 08:35:35.331406 master-0 kubenswrapper[7476]: I0320 08:35:35.331370 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9983fdac-91cb-4f06-b39d-9306adef4071-node-pullsecrets\") pod \"apiserver-555b9794f6-68k4f\" (UID: \"9983fdac-91cb-4f06-b39d-9306adef4071\") " pod="openshift-apiserver/apiserver-555b9794f6-68k4f" Mar 20 08:35:35.331500 master-0 kubenswrapper[7476]: I0320 08:35:35.331482 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/9983fdac-91cb-4f06-b39d-9306adef4071-encryption-config\") pod \"apiserver-555b9794f6-68k4f\" (UID: \"9983fdac-91cb-4f06-b39d-9306adef4071\") " pod="openshift-apiserver/apiserver-555b9794f6-68k4f" Mar 20 08:35:35.331543 master-0 kubenswrapper[7476]: I0320 08:35:35.331505 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/9983fdac-91cb-4f06-b39d-9306adef4071-image-import-ca\") pod \"apiserver-555b9794f6-68k4f\" (UID: \"9983fdac-91cb-4f06-b39d-9306adef4071\") " pod="openshift-apiserver/apiserver-555b9794f6-68k4f" Mar 20 08:35:35.331543 master-0 kubenswrapper[7476]: I0320 08:35:35.331532 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/9983fdac-91cb-4f06-b39d-9306adef4071-audit\") pod \"apiserver-555b9794f6-68k4f\" (UID: \"9983fdac-91cb-4f06-b39d-9306adef4071\") " pod="openshift-apiserver/apiserver-555b9794f6-68k4f" Mar 20 08:35:35.331600 master-0 kubenswrapper[7476]: I0320 08:35:35.331582 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9983fdac-91cb-4f06-b39d-9306adef4071-etcd-client\") pod \"apiserver-555b9794f6-68k4f\" (UID: \"9983fdac-91cb-4f06-b39d-9306adef4071\") " pod="openshift-apiserver/apiserver-555b9794f6-68k4f" Mar 20 08:35:35.331600 master-0 kubenswrapper[7476]: I0320 08:35:35.331597 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9983fdac-91cb-4f06-b39d-9306adef4071-etcd-serving-ca\") pod \"apiserver-555b9794f6-68k4f\" (UID: \"9983fdac-91cb-4f06-b39d-9306adef4071\") " pod="openshift-apiserver/apiserver-555b9794f6-68k4f" Mar 20 08:35:35.331650 master-0 kubenswrapper[7476]: I0320 08:35:35.331628 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/9983fdac-91cb-4f06-b39d-9306adef4071-serving-cert\") pod \"apiserver-555b9794f6-68k4f\" (UID: \"9983fdac-91cb-4f06-b39d-9306adef4071\") " pod="openshift-apiserver/apiserver-555b9794f6-68k4f" Mar 20 08:35:35.331683 master-0 kubenswrapper[7476]: I0320 08:35:35.331655 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9983fdac-91cb-4f06-b39d-9306adef4071-audit-dir\") pod \"apiserver-555b9794f6-68k4f\" (UID: \"9983fdac-91cb-4f06-b39d-9306adef4071\") " pod="openshift-apiserver/apiserver-555b9794f6-68k4f" Mar 20 08:35:35.331710 master-0 kubenswrapper[7476]: I0320 08:35:35.331682 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9983fdac-91cb-4f06-b39d-9306adef4071-config\") pod \"apiserver-555b9794f6-68k4f\" (UID: \"9983fdac-91cb-4f06-b39d-9306adef4071\") " pod="openshift-apiserver/apiserver-555b9794f6-68k4f" Mar 20 08:35:35.331737 master-0 kubenswrapper[7476]: I0320 08:35:35.331726 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9983fdac-91cb-4f06-b39d-9306adef4071-trusted-ca-bundle\") pod \"apiserver-555b9794f6-68k4f\" (UID: \"9983fdac-91cb-4f06-b39d-9306adef4071\") " pod="openshift-apiserver/apiserver-555b9794f6-68k4f" Mar 20 08:35:35.332625 master-0 kubenswrapper[7476]: I0320 08:35:35.332604 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9983fdac-91cb-4f06-b39d-9306adef4071-trusted-ca-bundle\") pod \"apiserver-555b9794f6-68k4f\" (UID: \"9983fdac-91cb-4f06-b39d-9306adef4071\") " pod="openshift-apiserver/apiserver-555b9794f6-68k4f" Mar 20 08:35:35.332680 master-0 kubenswrapper[7476]: I0320 08:35:35.332660 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9983fdac-91cb-4f06-b39d-9306adef4071-node-pullsecrets\") pod \"apiserver-555b9794f6-68k4f\" (UID: \"9983fdac-91cb-4f06-b39d-9306adef4071\") " pod="openshift-apiserver/apiserver-555b9794f6-68k4f" Mar 20 08:35:35.337280 master-0 kubenswrapper[7476]: E0320 08:35:35.333175 7476 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Mar 20 08:35:35.337280 master-0 kubenswrapper[7476]: E0320 08:35:35.333216 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9983fdac-91cb-4f06-b39d-9306adef4071-audit podName:9983fdac-91cb-4f06-b39d-9306adef4071 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:35.833206066 +0000 UTC m=+16.801974592 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/9983fdac-91cb-4f06-b39d-9306adef4071-audit") pod "apiserver-555b9794f6-68k4f" (UID: "9983fdac-91cb-4f06-b39d-9306adef4071") : configmap "audit-0" not found Mar 20 08:35:35.337280 master-0 kubenswrapper[7476]: I0320 08:35:35.333660 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/9983fdac-91cb-4f06-b39d-9306adef4071-image-import-ca\") pod \"apiserver-555b9794f6-68k4f\" (UID: \"9983fdac-91cb-4f06-b39d-9306adef4071\") " pod="openshift-apiserver/apiserver-555b9794f6-68k4f" Mar 20 08:35:35.337280 master-0 kubenswrapper[7476]: I0320 08:35:35.333724 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9983fdac-91cb-4f06-b39d-9306adef4071-etcd-serving-ca\") pod \"apiserver-555b9794f6-68k4f\" (UID: \"9983fdac-91cb-4f06-b39d-9306adef4071\") " pod="openshift-apiserver/apiserver-555b9794f6-68k4f" Mar 20 08:35:35.337280 master-0 kubenswrapper[7476]: I0320 08:35:35.333784 7476 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9983fdac-91cb-4f06-b39d-9306adef4071-audit-dir\") pod \"apiserver-555b9794f6-68k4f\" (UID: \"9983fdac-91cb-4f06-b39d-9306adef4071\") " pod="openshift-apiserver/apiserver-555b9794f6-68k4f" Mar 20 08:35:35.337280 master-0 kubenswrapper[7476]: E0320 08:35:35.333855 7476 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found Mar 20 08:35:35.337280 master-0 kubenswrapper[7476]: E0320 08:35:35.333892 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9983fdac-91cb-4f06-b39d-9306adef4071-serving-cert podName:9983fdac-91cb-4f06-b39d-9306adef4071 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:35.833879853 +0000 UTC m=+16.802648379 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9983fdac-91cb-4f06-b39d-9306adef4071-serving-cert") pod "apiserver-555b9794f6-68k4f" (UID: "9983fdac-91cb-4f06-b39d-9306adef4071") : secret "serving-cert" not found Mar 20 08:35:35.337280 master-0 kubenswrapper[7476]: I0320 08:35:35.334322 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9983fdac-91cb-4f06-b39d-9306adef4071-config\") pod \"apiserver-555b9794f6-68k4f\" (UID: \"9983fdac-91cb-4f06-b39d-9306adef4071\") " pod="openshift-apiserver/apiserver-555b9794f6-68k4f" Mar 20 08:35:35.340353 master-0 kubenswrapper[7476]: I0320 08:35:35.338990 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9983fdac-91cb-4f06-b39d-9306adef4071-encryption-config\") pod \"apiserver-555b9794f6-68k4f\" (UID: \"9983fdac-91cb-4f06-b39d-9306adef4071\") " pod="openshift-apiserver/apiserver-555b9794f6-68k4f" Mar 20 08:35:35.340893 master-0 kubenswrapper[7476]: I0320 08:35:35.340854 7476 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9983fdac-91cb-4f06-b39d-9306adef4071-etcd-client\") pod \"apiserver-555b9794f6-68k4f\" (UID: \"9983fdac-91cb-4f06-b39d-9306adef4071\") " pod="openshift-apiserver/apiserver-555b9794f6-68k4f" Mar 20 08:35:35.352854 master-0 kubenswrapper[7476]: I0320 08:35:35.352817 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8vg9\" (UniqueName: \"kubernetes.io/projected/9983fdac-91cb-4f06-b39d-9306adef4071-kube-api-access-v8vg9\") pod \"apiserver-555b9794f6-68k4f\" (UID: \"9983fdac-91cb-4f06-b39d-9306adef4071\") " pod="openshift-apiserver/apiserver-555b9794f6-68k4f" Mar 20 08:35:35.498375 master-0 kubenswrapper[7476]: I0320 08:35:35.494173 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-25jrp" event={"ID":"3065e4b4-4493-41ce-b9d2-89315475f74f","Type":"ContainerStarted","Data":"bd1d0759a3b11f191f5c7889c156ee6e269182c73bbc7176808f512fb2f1ec9d"} Mar 20 08:35:35.498375 master-0 kubenswrapper[7476]: I0320 08:35:35.494488 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-25jrp" Mar 20 08:35:35.839649 master-0 kubenswrapper[7476]: I0320 08:35:35.839487 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9983fdac-91cb-4f06-b39d-9306adef4071-serving-cert\") pod \"apiserver-555b9794f6-68k4f\" (UID: \"9983fdac-91cb-4f06-b39d-9306adef4071\") " pod="openshift-apiserver/apiserver-555b9794f6-68k4f" Mar 20 08:35:35.839954 master-0 kubenswrapper[7476]: E0320 08:35:35.839686 7476 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found Mar 20 08:35:35.839954 master-0 kubenswrapper[7476]: I0320 08:35:35.839755 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/44d1bab1-22a1-45f4-b722-afef91f56a31-serving-cert\") pod \"route-controller-manager-6b966fd84d-tvkk6\" (UID: \"44d1bab1-22a1-45f4-b722-afef91f56a31\") " pod="openshift-route-controller-manager/route-controller-manager-6b966fd84d-tvkk6" Mar 20 08:35:35.839954 master-0 kubenswrapper[7476]: E0320 08:35:35.839768 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9983fdac-91cb-4f06-b39d-9306adef4071-serving-cert podName:9983fdac-91cb-4f06-b39d-9306adef4071 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:36.839749628 +0000 UTC m=+17.808518154 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9983fdac-91cb-4f06-b39d-9306adef4071-serving-cert") pod "apiserver-555b9794f6-68k4f" (UID: "9983fdac-91cb-4f06-b39d-9306adef4071") : secret "serving-cert" not found Mar 20 08:35:35.840170 master-0 kubenswrapper[7476]: E0320 08:35:35.840058 7476 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 20 08:35:35.840340 master-0 kubenswrapper[7476]: I0320 08:35:35.840194 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/44d1bab1-22a1-45f4-b722-afef91f56a31-client-ca\") pod \"route-controller-manager-6b966fd84d-tvkk6\" (UID: \"44d1bab1-22a1-45f4-b722-afef91f56a31\") " pod="openshift-route-controller-manager/route-controller-manager-6b966fd84d-tvkk6" Mar 20 08:35:35.840340 master-0 kubenswrapper[7476]: E0320 08:35:35.840237 7476 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 20 08:35:35.840537 master-0 kubenswrapper[7476]: E0320 08:35:35.840359 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/44d1bab1-22a1-45f4-b722-afef91f56a31-client-ca 
podName:44d1bab1-22a1-45f4-b722-afef91f56a31 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:39.840323542 +0000 UTC m=+20.809092108 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/44d1bab1-22a1-45f4-b722-afef91f56a31-client-ca") pod "route-controller-manager-6b966fd84d-tvkk6" (UID: "44d1bab1-22a1-45f4-b722-afef91f56a31") : configmap "client-ca" not found Mar 20 08:35:35.840537 master-0 kubenswrapper[7476]: E0320 08:35:35.840399 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44d1bab1-22a1-45f4-b722-afef91f56a31-serving-cert podName:44d1bab1-22a1-45f4-b722-afef91f56a31 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:39.840381454 +0000 UTC m=+20.809150120 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/44d1bab1-22a1-45f4-b722-afef91f56a31-serving-cert") pod "route-controller-manager-6b966fd84d-tvkk6" (UID: "44d1bab1-22a1-45f4-b722-afef91f56a31") : secret "serving-cert" not found Mar 20 08:35:35.840738 master-0 kubenswrapper[7476]: I0320 08:35:35.840555 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/9983fdac-91cb-4f06-b39d-9306adef4071-audit\") pod \"apiserver-555b9794f6-68k4f\" (UID: \"9983fdac-91cb-4f06-b39d-9306adef4071\") " pod="openshift-apiserver/apiserver-555b9794f6-68k4f" Mar 20 08:35:35.840738 master-0 kubenswrapper[7476]: E0320 08:35:35.840666 7476 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Mar 20 08:35:35.840944 master-0 kubenswrapper[7476]: E0320 08:35:35.840755 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9983fdac-91cb-4f06-b39d-9306adef4071-audit podName:9983fdac-91cb-4f06-b39d-9306adef4071 nodeName:}" failed. 
No retries permitted until 2026-03-20 08:35:36.840732613 +0000 UTC m=+17.809501199 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/9983fdac-91cb-4f06-b39d-9306adef4071-audit") pod "apiserver-555b9794f6-68k4f" (UID: "9983fdac-91cb-4f06-b39d-9306adef4071") : configmap "audit-0" not found Mar 20 08:35:36.044488 master-0 kubenswrapper[7476]: I0320 08:35:36.044390 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9ce482dc-d0ac-40bc-9058-a1cfdc81575e-srv-cert\") pod \"catalog-operator-68f85b4d6c-hdw98\" (UID: \"9ce482dc-d0ac-40bc-9058-a1cfdc81575e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hdw98" Mar 20 08:35:36.044488 master-0 kubenswrapper[7476]: I0320 08:35:36.044494 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ff2dfe9d-2834-43cb-b093-0831b2b87131-metrics-tls\") pod \"dns-operator-9c5679d8f-xfns6\" (UID: \"ff2dfe9d-2834-43cb-b093-0831b2b87131\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-xfns6" Mar 20 08:35:36.044730 master-0 kubenswrapper[7476]: I0320 08:35:36.044557 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3776fdb6-25a1-4e3d-bdd1-437c69af3a55-serving-cert\") pod \"cluster-version-operator-56d8475767-jtqd4\" (UID: \"3776fdb6-25a1-4e3d-bdd1-437c69af3a55\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-jtqd4" Mar 20 08:35:36.044730 master-0 kubenswrapper[7476]: I0320 08:35:36.044605 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/23003a2f-2053-47cc-8133-23eb886d4da0-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-j84r8\" (UID: 
\"23003a2f-2053-47cc-8133-23eb886d4da0\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-j84r8" Mar 20 08:35:36.044954 master-0 kubenswrapper[7476]: I0320 08:35:36.044883 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/0e79950f-50a5-46ec-b836-7a35dcce2851-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-cgc9q\" (UID: \"0e79950f-50a5-46ec-b836-7a35dcce2851\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-cgc9q" Mar 20 08:35:36.045035 master-0 kubenswrapper[7476]: I0320 08:35:36.045001 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/00350ac7-b40a-4459-b94c-a37d7b613645-metrics-certs\") pod \"network-metrics-daemon-nfrth\" (UID: \"00350ac7-b40a-4459-b94c-a37d7b613645\") " pod="openshift-multus/network-metrics-daemon-nfrth" Mar 20 08:35:36.045074 master-0 kubenswrapper[7476]: E0320 08:35:36.045041 7476 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 20 08:35:36.045110 master-0 kubenswrapper[7476]: I0320 08:35:36.045081 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7ab32efc-7cc5-4e36-9c1c-05efb19914e2-srv-cert\") pod \"olm-operator-5c9796789-t926t\" (UID: \"7ab32efc-7cc5-4e36-9c1c-05efb19914e2\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-t926t" Mar 20 08:35:36.045170 master-0 kubenswrapper[7476]: E0320 08:35:36.045146 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/23003a2f-2053-47cc-8133-23eb886d4da0-marketplace-operator-metrics podName:23003a2f-2053-47cc-8133-23eb886d4da0 nodeName:}" failed. 
No retries permitted until 2026-03-20 08:35:52.045114793 +0000 UTC m=+33.013883359 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/23003a2f-2053-47cc-8133-23eb886d4da0-marketplace-operator-metrics") pod "marketplace-operator-89ccd998f-j84r8" (UID: "23003a2f-2053-47cc-8133-23eb886d4da0") : secret "marketplace-operator-metrics" not found Mar 20 08:35:36.045279 master-0 kubenswrapper[7476]: I0320 08:35:36.045223 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/f202273a-b111-46ce-b404-7e481d2c7ff9-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-b25f2\" (UID: \"f202273a-b111-46ce-b404-7e481d2c7ff9\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-b25f2" Mar 20 08:35:36.045402 master-0 kubenswrapper[7476]: I0320 08:35:36.045364 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/22f85e98-eb36-46b2-ab5d-7c21e060cba5-metrics-tls\") pod \"ingress-operator-66b84d69b-dknxr\" (UID: \"22f85e98-eb36-46b2-ab5d-7c21e060cba5\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-dknxr" Mar 20 08:35:36.045456 master-0 kubenswrapper[7476]: I0320 08:35:36.045424 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/74bebf0b-6727-4959-8239-a9389e630524-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-vhrdf\" (UID: \"74bebf0b-6727-4959-8239-a9389e630524\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-vhrdf" Mar 20 08:35:36.045517 master-0 kubenswrapper[7476]: I0320 08:35:36.045490 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/6d26f719-43b9-4c1c-9a54-ff800177db68-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-zxgdk\" (UID: \"6d26f719-43b9-4c1c-9a54-ff800177db68\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zxgdk" Mar 20 08:35:36.045569 master-0 kubenswrapper[7476]: I0320 08:35:36.045542 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/57189f7c-5987-457d-a299-0a6b9bcb3e24-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-cg8qr\" (UID: \"57189f7c-5987-457d-a299-0a6b9bcb3e24\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-cg8qr" Mar 20 08:35:36.045620 master-0 kubenswrapper[7476]: I0320 08:35:36.045595 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f202273a-b111-46ce-b404-7e481d2c7ff9-cert\") pod \"cluster-baremetal-operator-6f69995874-b25f2\" (UID: \"f202273a-b111-46ce-b404-7e481d2c7ff9\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-b25f2" Mar 20 08:35:36.045701 master-0 kubenswrapper[7476]: I0320 08:35:36.045672 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/5707066a-bd66-41bc-8cea-cff1630ab5ee-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-6vgt6\" (UID: \"5707066a-bd66-41bc-8cea-cff1630ab5ee\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-6vgt6" Mar 20 08:35:36.045753 master-0 kubenswrapper[7476]: E0320 08:35:36.045227 7476 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 20 08:35:36.045814 master-0 kubenswrapper[7476]: E0320 08:35:36.045792 7476 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0e79950f-50a5-46ec-b836-7a35dcce2851-package-server-manager-serving-cert podName:0e79950f-50a5-46ec-b836-7a35dcce2851 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:52.04577733 +0000 UTC m=+33.014545896 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/0e79950f-50a5-46ec-b836-7a35dcce2851-package-server-manager-serving-cert") pod "package-server-manager-7b95f86987-cgc9q" (UID: "0e79950f-50a5-46ec-b836-7a35dcce2851") : secret "package-server-manager-serving-cert" not found Mar 20 08:35:36.045851 master-0 kubenswrapper[7476]: E0320 08:35:36.045293 7476 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 20 08:35:36.045884 master-0 kubenswrapper[7476]: E0320 08:35:36.045870 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7ab32efc-7cc5-4e36-9c1c-05efb19914e2-srv-cert podName:7ab32efc-7cc5-4e36-9c1c-05efb19914e2 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:52.045853152 +0000 UTC m=+33.014621708 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/7ab32efc-7cc5-4e36-9c1c-05efb19914e2-srv-cert") pod "olm-operator-5c9796789-t926t" (UID: "7ab32efc-7cc5-4e36-9c1c-05efb19914e2") : secret "olm-operator-serving-cert" not found Mar 20 08:35:36.045922 master-0 kubenswrapper[7476]: E0320 08:35:36.045366 7476 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Mar 20 08:35:36.045956 master-0 kubenswrapper[7476]: E0320 08:35:36.045932 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/00350ac7-b40a-4459-b94c-a37d7b613645-metrics-certs podName:00350ac7-b40a-4459-b94c-a37d7b613645 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:52.045917844 +0000 UTC m=+33.014686400 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/00350ac7-b40a-4459-b94c-a37d7b613645-metrics-certs") pod "network-metrics-daemon-nfrth" (UID: "00350ac7-b40a-4459-b94c-a37d7b613645") : secret "metrics-daemon-secret" not found Mar 20 08:35:36.045956 master-0 kubenswrapper[7476]: E0320 08:35:36.045418 7476 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 20 08:35:36.046037 master-0 kubenswrapper[7476]: E0320 08:35:36.045988 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ce482dc-d0ac-40bc-9058-a1cfdc81575e-srv-cert podName:9ce482dc-d0ac-40bc-9058-a1cfdc81575e nodeName:}" failed. No retries permitted until 2026-03-20 08:35:52.045975166 +0000 UTC m=+33.014743732 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/9ce482dc-d0ac-40bc-9058-a1cfdc81575e-srv-cert") pod "catalog-operator-68f85b4d6c-hdw98" (UID: "9ce482dc-d0ac-40bc-9058-a1cfdc81575e") : secret "catalog-operator-serving-cert" not found Mar 20 08:35:36.046037 master-0 kubenswrapper[7476]: E0320 08:35:36.045939 7476 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 20 08:35:36.046150 master-0 kubenswrapper[7476]: E0320 08:35:36.046124 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74bebf0b-6727-4959-8239-a9389e630524-webhook-certs podName:74bebf0b-6727-4959-8239-a9389e630524 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:52.046085279 +0000 UTC m=+33.014853985 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/74bebf0b-6727-4959-8239-a9389e630524-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-vhrdf" (UID: "74bebf0b-6727-4959-8239-a9389e630524") : secret "multus-admission-controller-secret" not found Mar 20 08:35:36.046245 master-0 kubenswrapper[7476]: I0320 08:35:36.046213 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/6d26f719-43b9-4c1c-9a54-ff800177db68-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-zxgdk\" (UID: \"6d26f719-43b9-4c1c-9a54-ff800177db68\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zxgdk" Mar 20 08:35:36.048607 master-0 kubenswrapper[7476]: E0320 08:35:36.047370 7476 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 20 08:35:36.048607 master-0 kubenswrapper[7476]: E0320 08:35:36.047455 7476 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/secret/5707066a-bd66-41bc-8cea-cff1630ab5ee-cluster-monitoring-operator-tls podName:5707066a-bd66-41bc-8cea-cff1630ab5ee nodeName:}" failed. No retries permitted until 2026-03-20 08:35:52.047432523 +0000 UTC m=+33.016201079 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/5707066a-bd66-41bc-8cea-cff1630ab5ee-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-58845fbb57-6vgt6" (UID: "5707066a-bd66-41bc-8cea-cff1630ab5ee") : secret "cluster-monitoring-operator-tls" not found Mar 20 08:35:36.049734 master-0 kubenswrapper[7476]: E0320 08:35:36.049669 7476 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: secret "cluster-baremetal-webhook-server-cert" not found Mar 20 08:35:36.049796 master-0 kubenswrapper[7476]: E0320 08:35:36.049779 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f202273a-b111-46ce-b404-7e481d2c7ff9-cert podName:f202273a-b111-46ce-b404-7e481d2c7ff9 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:52.049753833 +0000 UTC m=+33.018522369 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f202273a-b111-46ce-b404-7e481d2c7ff9-cert") pod "cluster-baremetal-operator-6f69995874-b25f2" (UID: "f202273a-b111-46ce-b404-7e481d2c7ff9") : secret "cluster-baremetal-webhook-server-cert" not found Mar 20 08:35:36.051155 master-0 kubenswrapper[7476]: I0320 08:35:36.050601 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ff2dfe9d-2834-43cb-b093-0831b2b87131-metrics-tls\") pod \"dns-operator-9c5679d8f-xfns6\" (UID: \"ff2dfe9d-2834-43cb-b093-0831b2b87131\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-xfns6" Mar 20 08:35:36.051278 master-0 kubenswrapper[7476]: I0320 08:35:36.051230 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/22f85e98-eb36-46b2-ab5d-7c21e060cba5-metrics-tls\") pod \"ingress-operator-66b84d69b-dknxr\" (UID: \"22f85e98-eb36-46b2-ab5d-7c21e060cba5\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-dknxr" Mar 20 08:35:36.054405 master-0 kubenswrapper[7476]: I0320 08:35:36.054340 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/f202273a-b111-46ce-b404-7e481d2c7ff9-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-b25f2\" (UID: \"f202273a-b111-46ce-b404-7e481d2c7ff9\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-b25f2" Mar 20 08:35:36.055461 master-0 kubenswrapper[7476]: I0320 08:35:36.055413 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6d26f719-43b9-4c1c-9a54-ff800177db68-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-zxgdk\" (UID: \"6d26f719-43b9-4c1c-9a54-ff800177db68\") " 
pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zxgdk" Mar 20 08:35:36.055770 master-0 kubenswrapper[7476]: I0320 08:35:36.055731 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/6d26f719-43b9-4c1c-9a54-ff800177db68-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-zxgdk\" (UID: \"6d26f719-43b9-4c1c-9a54-ff800177db68\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zxgdk" Mar 20 08:35:36.056389 master-0 kubenswrapper[7476]: I0320 08:35:36.056334 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3776fdb6-25a1-4e3d-bdd1-437c69af3a55-serving-cert\") pod \"cluster-version-operator-56d8475767-jtqd4\" (UID: \"3776fdb6-25a1-4e3d-bdd1-437c69af3a55\") " pod="openshift-cluster-version/cluster-version-operator-56d8475767-jtqd4" Mar 20 08:35:36.056776 master-0 kubenswrapper[7476]: I0320 08:35:36.056720 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/57189f7c-5987-457d-a299-0a6b9bcb3e24-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-cg8qr\" (UID: \"57189f7c-5987-457d-a299-0a6b9bcb3e24\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-cg8qr" Mar 20 08:35:36.079010 master-0 kubenswrapper[7476]: I0320 08:35:36.078952 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-66b84d69b-dknxr" Mar 20 08:35:36.079010 master-0 kubenswrapper[7476]: I0320 08:35:36.078990 7476 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zxgdk" Mar 20 08:35:36.079313 master-0 kubenswrapper[7476]: I0320 08:35:36.079232 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-56d8475767-jtqd4" Mar 20 08:35:36.089334 master-0 kubenswrapper[7476]: I0320 08:35:36.084982 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-cg8qr" Mar 20 08:35:36.090554 master-0 kubenswrapper[7476]: I0320 08:35:36.090493 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-9c5679d8f-xfns6" Mar 20 08:35:36.149962 master-0 kubenswrapper[7476]: W0320 08:35:36.149902 7476 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3776fdb6_25a1_4e3d_bdd1_437c69af3a55.slice/crio-aa0e338538aafee4fbc36907bfe4019e2f3a8c90665916ca155d6ae8d2916484 WatchSource:0}: Error finding container aa0e338538aafee4fbc36907bfe4019e2f3a8c90665916ca155d6ae8d2916484: Status 404 returned error can't find the container with id aa0e338538aafee4fbc36907bfe4019e2f3a8c90665916ca155d6ae8d2916484 Mar 20 08:35:36.392398 master-0 kubenswrapper[7476]: I0320 08:35:36.391439 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zxgdk"] Mar 20 08:35:36.414983 master-0 kubenswrapper[7476]: I0320 08:35:36.414945 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-5549dc66cb-cg8qr"] Mar 20 08:35:36.423501 master-0 kubenswrapper[7476]: W0320 08:35:36.423241 7476 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57189f7c_5987_457d_a299_0a6b9bcb3e24.slice/crio-389639e7370bc064e8396447b56eef169b57f40bc06761ec99b4a5fb5deb56a5 WatchSource:0}: Error finding container 389639e7370bc064e8396447b56eef169b57f40bc06761ec99b4a5fb5deb56a5: Status 404 returned error can't find the container with id 389639e7370bc064e8396447b56eef169b57f40bc06761ec99b4a5fb5deb56a5 Mar 20 08:35:36.430789 master-0 kubenswrapper[7476]: I0320 08:35:36.430750 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-66b84d69b-dknxr"] Mar 20 08:35:36.446417 master-0 kubenswrapper[7476]: I0320 08:35:36.446334 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-9c5679d8f-xfns6"] Mar 20 08:35:36.455677 master-0 kubenswrapper[7476]: W0320 08:35:36.455637 7476 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff2dfe9d_2834_43cb_b093_0831b2b87131.slice/crio-cf10038472bbf516505fe96b60deacd7fa47b423ffbd5ce932f981e42d79741e WatchSource:0}: Error finding container cf10038472bbf516505fe96b60deacd7fa47b423ffbd5ce932f981e42d79741e: Status 404 returned error can't find the container with id cf10038472bbf516505fe96b60deacd7fa47b423ffbd5ce932f981e42d79741e Mar 20 08:35:36.498117 master-0 kubenswrapper[7476]: I0320 08:35:36.498084 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-dknxr" event={"ID":"22f85e98-eb36-46b2-ab5d-7c21e060cba5","Type":"ContainerStarted","Data":"e19e3ca7f7f87202999ccf51b5e641a2b701234ac17e2a8733f102ed0960e44b"} Mar 20 08:35:36.498896 master-0 kubenswrapper[7476]: I0320 08:35:36.498868 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-cg8qr" 
event={"ID":"57189f7c-5987-457d-a299-0a6b9bcb3e24","Type":"ContainerStarted","Data":"389639e7370bc064e8396447b56eef169b57f40bc06761ec99b4a5fb5deb56a5"} Mar 20 08:35:36.500488 master-0 kubenswrapper[7476]: I0320 08:35:36.500433 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-9xlf2" event={"ID":"b097596e-79e1-44d1-be8a-96340042a041","Type":"ContainerStarted","Data":"91146652d4d8a8a47620378773d0a419398c4e57461915eca0a376f8bd53b8e3"} Mar 20 08:35:36.501482 master-0 kubenswrapper[7476]: I0320 08:35:36.501456 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-56d8475767-jtqd4" event={"ID":"3776fdb6-25a1-4e3d-bdd1-437c69af3a55","Type":"ContainerStarted","Data":"aa0e338538aafee4fbc36907bfe4019e2f3a8c90665916ca155d6ae8d2916484"} Mar 20 08:35:36.502519 master-0 kubenswrapper[7476]: I0320 08:35:36.502456 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-9c5679d8f-xfns6" event={"ID":"ff2dfe9d-2834-43cb-b093-0831b2b87131","Type":"ContainerStarted","Data":"cf10038472bbf516505fe96b60deacd7fa47b423ffbd5ce932f981e42d79741e"} Mar 20 08:35:36.504004 master-0 kubenswrapper[7476]: I0320 08:35:36.503964 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zxgdk" event={"ID":"6d26f719-43b9-4c1c-9a54-ff800177db68","Type":"ContainerStarted","Data":"702713f2f96146013bc9672b7b029fe7154bd722d3f9153e565a46fd2b9a50ba"} Mar 20 08:35:36.867630 master-0 kubenswrapper[7476]: I0320 08:35:36.867512 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/9983fdac-91cb-4f06-b39d-9306adef4071-audit\") pod \"apiserver-555b9794f6-68k4f\" (UID: \"9983fdac-91cb-4f06-b39d-9306adef4071\") " pod="openshift-apiserver/apiserver-555b9794f6-68k4f" Mar 20 08:35:36.867946 master-0 
kubenswrapper[7476]: E0320 08:35:36.867821 7476 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Mar 20 08:35:36.867946 master-0 kubenswrapper[7476]: E0320 08:35:36.867943 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9983fdac-91cb-4f06-b39d-9306adef4071-audit podName:9983fdac-91cb-4f06-b39d-9306adef4071 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:38.867914711 +0000 UTC m=+19.836683447 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/9983fdac-91cb-4f06-b39d-9306adef4071-audit") pod "apiserver-555b9794f6-68k4f" (UID: "9983fdac-91cb-4f06-b39d-9306adef4071") : configmap "audit-0" not found Mar 20 08:35:36.868207 master-0 kubenswrapper[7476]: I0320 08:35:36.867835 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9983fdac-91cb-4f06-b39d-9306adef4071-serving-cert\") pod \"apiserver-555b9794f6-68k4f\" (UID: \"9983fdac-91cb-4f06-b39d-9306adef4071\") " pod="openshift-apiserver/apiserver-555b9794f6-68k4f" Mar 20 08:35:36.868207 master-0 kubenswrapper[7476]: E0320 08:35:36.868022 7476 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found Mar 20 08:35:36.871323 master-0 kubenswrapper[7476]: E0320 08:35:36.868303 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9983fdac-91cb-4f06-b39d-9306adef4071-serving-cert podName:9983fdac-91cb-4f06-b39d-9306adef4071 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:38.868200278 +0000 UTC m=+19.836968844 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9983fdac-91cb-4f06-b39d-9306adef4071-serving-cert") pod "apiserver-555b9794f6-68k4f" (UID: "9983fdac-91cb-4f06-b39d-9306adef4071") : secret "serving-cert" not found Mar 20 08:35:37.984956 master-0 kubenswrapper[7476]: I0320 08:35:37.984323 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/76e0afae-e3d2-4eb1-825a-a8e5498e1d5c-client-ca\") pod \"controller-manager-c9f7c448d-k8dq9\" (UID: \"76e0afae-e3d2-4eb1-825a-a8e5498e1d5c\") " pod="openshift-controller-manager/controller-manager-c9f7c448d-k8dq9" Mar 20 08:35:37.984956 master-0 kubenswrapper[7476]: E0320 08:35:37.984476 7476 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 20 08:35:37.984956 master-0 kubenswrapper[7476]: E0320 08:35:37.984634 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/76e0afae-e3d2-4eb1-825a-a8e5498e1d5c-client-ca podName:76e0afae-e3d2-4eb1-825a-a8e5498e1d5c nodeName:}" failed. No retries permitted until 2026-03-20 08:35:45.984607065 +0000 UTC m=+26.953375591 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/76e0afae-e3d2-4eb1-825a-a8e5498e1d5c-client-ca") pod "controller-manager-c9f7c448d-k8dq9" (UID: "76e0afae-e3d2-4eb1-825a-a8e5498e1d5c") : configmap "client-ca" not found Mar 20 08:35:38.898201 master-0 kubenswrapper[7476]: I0320 08:35:38.897742 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/9983fdac-91cb-4f06-b39d-9306adef4071-audit\") pod \"apiserver-555b9794f6-68k4f\" (UID: \"9983fdac-91cb-4f06-b39d-9306adef4071\") " pod="openshift-apiserver/apiserver-555b9794f6-68k4f" Mar 20 08:35:38.898864 master-0 kubenswrapper[7476]: I0320 08:35:38.898833 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9983fdac-91cb-4f06-b39d-9306adef4071-serving-cert\") pod \"apiserver-555b9794f6-68k4f\" (UID: \"9983fdac-91cb-4f06-b39d-9306adef4071\") " pod="openshift-apiserver/apiserver-555b9794f6-68k4f" Mar 20 08:35:38.912934 master-0 kubenswrapper[7476]: E0320 08:35:38.898716 7476 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Mar 20 08:35:38.913332 master-0 kubenswrapper[7476]: E0320 08:35:38.913310 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9983fdac-91cb-4f06-b39d-9306adef4071-audit podName:9983fdac-91cb-4f06-b39d-9306adef4071 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:42.913258112 +0000 UTC m=+23.882026648 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/9983fdac-91cb-4f06-b39d-9306adef4071-audit") pod "apiserver-555b9794f6-68k4f" (UID: "9983fdac-91cb-4f06-b39d-9306adef4071") : configmap "audit-0" not found Mar 20 08:35:38.913493 master-0 kubenswrapper[7476]: E0320 08:35:38.912607 7476 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found Mar 20 08:35:38.913630 master-0 kubenswrapper[7476]: E0320 08:35:38.913614 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9983fdac-91cb-4f06-b39d-9306adef4071-serving-cert podName:9983fdac-91cb-4f06-b39d-9306adef4071 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:42.91359562 +0000 UTC m=+23.882364156 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/9983fdac-91cb-4f06-b39d-9306adef4071-serving-cert") pod "apiserver-555b9794f6-68k4f" (UID: "9983fdac-91cb-4f06-b39d-9306adef4071") : secret "serving-cert" not found Mar 20 08:35:39.147368 master-0 kubenswrapper[7476]: I0320 08:35:39.145683 7476 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-555b9794f6-68k4f"] Mar 20 08:35:39.147368 master-0 kubenswrapper[7476]: E0320 08:35:39.145964 7476 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[audit serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-apiserver/apiserver-555b9794f6-68k4f" podUID="9983fdac-91cb-4f06-b39d-9306adef4071" Mar 20 08:35:39.466480 master-0 kubenswrapper[7476]: I0320 08:35:39.466357 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-25jrp" Mar 20 08:35:39.523388 master-0 kubenswrapper[7476]: I0320 08:35:39.523336 7476 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-555b9794f6-68k4f" Mar 20 08:35:39.531039 master-0 kubenswrapper[7476]: I0320 08:35:39.531006 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-555b9794f6-68k4f" Mar 20 08:35:39.625979 master-0 kubenswrapper[7476]: I0320 08:35:39.625924 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9983fdac-91cb-4f06-b39d-9306adef4071-trusted-ca-bundle\") pod \"9983fdac-91cb-4f06-b39d-9306adef4071\" (UID: \"9983fdac-91cb-4f06-b39d-9306adef4071\") " Mar 20 08:35:39.625979 master-0 kubenswrapper[7476]: I0320 08:35:39.625971 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/9983fdac-91cb-4f06-b39d-9306adef4071-image-import-ca\") pod \"9983fdac-91cb-4f06-b39d-9306adef4071\" (UID: \"9983fdac-91cb-4f06-b39d-9306adef4071\") " Mar 20 08:35:39.626281 master-0 kubenswrapper[7476]: I0320 08:35:39.625993 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9983fdac-91cb-4f06-b39d-9306adef4071-node-pullsecrets\") pod \"9983fdac-91cb-4f06-b39d-9306adef4071\" (UID: \"9983fdac-91cb-4f06-b39d-9306adef4071\") " Mar 20 08:35:39.626281 master-0 kubenswrapper[7476]: I0320 08:35:39.626021 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9983fdac-91cb-4f06-b39d-9306adef4071-etcd-client\") pod \"9983fdac-91cb-4f06-b39d-9306adef4071\" (UID: \"9983fdac-91cb-4f06-b39d-9306adef4071\") " Mar 20 08:35:39.626281 master-0 kubenswrapper[7476]: I0320 08:35:39.626061 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/9983fdac-91cb-4f06-b39d-9306adef4071-config\") pod \"9983fdac-91cb-4f06-b39d-9306adef4071\" (UID: \"9983fdac-91cb-4f06-b39d-9306adef4071\") " Mar 20 08:35:39.626281 master-0 kubenswrapper[7476]: I0320 08:35:39.626137 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9983fdac-91cb-4f06-b39d-9306adef4071-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "9983fdac-91cb-4f06-b39d-9306adef4071" (UID: "9983fdac-91cb-4f06-b39d-9306adef4071"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:35:39.627435 master-0 kubenswrapper[7476]: I0320 08:35:39.626573 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9983fdac-91cb-4f06-b39d-9306adef4071-encryption-config\") pod \"9983fdac-91cb-4f06-b39d-9306adef4071\" (UID: \"9983fdac-91cb-4f06-b39d-9306adef4071\") " Mar 20 08:35:39.627435 master-0 kubenswrapper[7476]: I0320 08:35:39.626611 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9983fdac-91cb-4f06-b39d-9306adef4071-etcd-serving-ca\") pod \"9983fdac-91cb-4f06-b39d-9306adef4071\" (UID: \"9983fdac-91cb-4f06-b39d-9306adef4071\") " Mar 20 08:35:39.627435 master-0 kubenswrapper[7476]: I0320 08:35:39.626650 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9983fdac-91cb-4f06-b39d-9306adef4071-audit-dir\") pod \"9983fdac-91cb-4f06-b39d-9306adef4071\" (UID: \"9983fdac-91cb-4f06-b39d-9306adef4071\") " Mar 20 08:35:39.627435 master-0 kubenswrapper[7476]: I0320 08:35:39.626647 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9983fdac-91cb-4f06-b39d-9306adef4071-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod 
"9983fdac-91cb-4f06-b39d-9306adef4071" (UID: "9983fdac-91cb-4f06-b39d-9306adef4071"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:35:39.627435 master-0 kubenswrapper[7476]: I0320 08:35:39.626677 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8vg9\" (UniqueName: \"kubernetes.io/projected/9983fdac-91cb-4f06-b39d-9306adef4071-kube-api-access-v8vg9\") pod \"9983fdac-91cb-4f06-b39d-9306adef4071\" (UID: \"9983fdac-91cb-4f06-b39d-9306adef4071\") " Mar 20 08:35:39.627435 master-0 kubenswrapper[7476]: I0320 08:35:39.627008 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9983fdac-91cb-4f06-b39d-9306adef4071-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "9983fdac-91cb-4f06-b39d-9306adef4071" (UID: "9983fdac-91cb-4f06-b39d-9306adef4071"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:35:39.627435 master-0 kubenswrapper[7476]: I0320 08:35:39.627037 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9983fdac-91cb-4f06-b39d-9306adef4071-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "9983fdac-91cb-4f06-b39d-9306adef4071" (UID: "9983fdac-91cb-4f06-b39d-9306adef4071"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:35:39.627435 master-0 kubenswrapper[7476]: I0320 08:35:39.627057 7476 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/9983fdac-91cb-4f06-b39d-9306adef4071-image-import-ca\") on node \"master-0\" DevicePath \"\"" Mar 20 08:35:39.627435 master-0 kubenswrapper[7476]: I0320 08:35:39.627071 7476 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9983fdac-91cb-4f06-b39d-9306adef4071-node-pullsecrets\") on node \"master-0\" DevicePath \"\"" Mar 20 08:35:39.627435 master-0 kubenswrapper[7476]: I0320 08:35:39.627081 7476 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9983fdac-91cb-4f06-b39d-9306adef4071-etcd-serving-ca\") on node \"master-0\" DevicePath \"\"" Mar 20 08:35:39.627787 master-0 kubenswrapper[7476]: I0320 08:35:39.626505 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9983fdac-91cb-4f06-b39d-9306adef4071-config" (OuterVolumeSpecName: "config") pod "9983fdac-91cb-4f06-b39d-9306adef4071" (UID: "9983fdac-91cb-4f06-b39d-9306adef4071"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:35:39.627787 master-0 kubenswrapper[7476]: I0320 08:35:39.627690 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9983fdac-91cb-4f06-b39d-9306adef4071-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "9983fdac-91cb-4f06-b39d-9306adef4071" (UID: "9983fdac-91cb-4f06-b39d-9306adef4071"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:35:39.632309 master-0 kubenswrapper[7476]: I0320 08:35:39.632243 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9983fdac-91cb-4f06-b39d-9306adef4071-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "9983fdac-91cb-4f06-b39d-9306adef4071" (UID: "9983fdac-91cb-4f06-b39d-9306adef4071"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:35:39.632436 master-0 kubenswrapper[7476]: I0320 08:35:39.632321 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9983fdac-91cb-4f06-b39d-9306adef4071-kube-api-access-v8vg9" (OuterVolumeSpecName: "kube-api-access-v8vg9") pod "9983fdac-91cb-4f06-b39d-9306adef4071" (UID: "9983fdac-91cb-4f06-b39d-9306adef4071"). InnerVolumeSpecName "kube-api-access-v8vg9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:35:39.633582 master-0 kubenswrapper[7476]: I0320 08:35:39.633508 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9983fdac-91cb-4f06-b39d-9306adef4071-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "9983fdac-91cb-4f06-b39d-9306adef4071" (UID: "9983fdac-91cb-4f06-b39d-9306adef4071"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:35:39.728635 master-0 kubenswrapper[7476]: I0320 08:35:39.728478 7476 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9983fdac-91cb-4f06-b39d-9306adef4071-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 20 08:35:39.728635 master-0 kubenswrapper[7476]: I0320 08:35:39.728513 7476 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9983fdac-91cb-4f06-b39d-9306adef4071-etcd-client\") on node \"master-0\" DevicePath \"\"" Mar 20 08:35:39.728635 master-0 kubenswrapper[7476]: I0320 08:35:39.728524 7476 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9983fdac-91cb-4f06-b39d-9306adef4071-config\") on node \"master-0\" DevicePath \"\"" Mar 20 08:35:39.728635 master-0 kubenswrapper[7476]: I0320 08:35:39.728533 7476 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9983fdac-91cb-4f06-b39d-9306adef4071-encryption-config\") on node \"master-0\" DevicePath \"\"" Mar 20 08:35:39.728635 master-0 kubenswrapper[7476]: I0320 08:35:39.728543 7476 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9983fdac-91cb-4f06-b39d-9306adef4071-audit-dir\") on node \"master-0\" DevicePath \"\"" Mar 20 08:35:39.728635 master-0 kubenswrapper[7476]: I0320 08:35:39.728553 7476 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8vg9\" (UniqueName: \"kubernetes.io/projected/9983fdac-91cb-4f06-b39d-9306adef4071-kube-api-access-v8vg9\") on node \"master-0\" DevicePath \"\"" Mar 20 08:35:39.933095 master-0 kubenswrapper[7476]: I0320 08:35:39.933025 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/44d1bab1-22a1-45f4-b722-afef91f56a31-serving-cert\") pod \"route-controller-manager-6b966fd84d-tvkk6\" (UID: \"44d1bab1-22a1-45f4-b722-afef91f56a31\") " pod="openshift-route-controller-manager/route-controller-manager-6b966fd84d-tvkk6" Mar 20 08:35:39.933095 master-0 kubenswrapper[7476]: I0320 08:35:39.933101 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/44d1bab1-22a1-45f4-b722-afef91f56a31-client-ca\") pod \"route-controller-manager-6b966fd84d-tvkk6\" (UID: \"44d1bab1-22a1-45f4-b722-afef91f56a31\") " pod="openshift-route-controller-manager/route-controller-manager-6b966fd84d-tvkk6" Mar 20 08:35:39.933346 master-0 kubenswrapper[7476]: E0320 08:35:39.933222 7476 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 20 08:35:39.933346 master-0 kubenswrapper[7476]: E0320 08:35:39.933294 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/44d1bab1-22a1-45f4-b722-afef91f56a31-client-ca podName:44d1bab1-22a1-45f4-b722-afef91f56a31 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:47.933278634 +0000 UTC m=+28.902047160 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/44d1bab1-22a1-45f4-b722-afef91f56a31-client-ca") pod "route-controller-manager-6b966fd84d-tvkk6" (UID: "44d1bab1-22a1-45f4-b722-afef91f56a31") : configmap "client-ca" not found Mar 20 08:35:39.933677 master-0 kubenswrapper[7476]: E0320 08:35:39.933559 7476 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 20 08:35:39.933677 master-0 kubenswrapper[7476]: E0320 08:35:39.933651 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44d1bab1-22a1-45f4-b722-afef91f56a31-serving-cert podName:44d1bab1-22a1-45f4-b722-afef91f56a31 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:47.933624643 +0000 UTC m=+28.902393169 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/44d1bab1-22a1-45f4-b722-afef91f56a31-serving-cert") pod "route-controller-manager-6b966fd84d-tvkk6" (UID: "44d1bab1-22a1-45f4-b722-afef91f56a31") : secret "serving-cert" not found Mar 20 08:35:40.526836 master-0 kubenswrapper[7476]: I0320 08:35:40.526751 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-555b9794f6-68k4f" Mar 20 08:35:40.570447 master-0 kubenswrapper[7476]: I0320 08:35:40.570380 7476 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-64b65cddf5-gx7h7"] Mar 20 08:35:40.571528 master-0 kubenswrapper[7476]: I0320 08:35:40.571157 7476 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-555b9794f6-68k4f"] Mar 20 08:35:40.571528 master-0 kubenswrapper[7476]: I0320 08:35:40.571248 7476 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-64b65cddf5-gx7h7" Mar 20 08:35:40.581336 master-0 kubenswrapper[7476]: I0320 08:35:40.580360 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Mar 20 08:35:40.581336 master-0 kubenswrapper[7476]: I0320 08:35:40.580642 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 20 08:35:40.583795 master-0 kubenswrapper[7476]: I0320 08:35:40.583756 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Mar 20 08:35:40.585782 master-0 kubenswrapper[7476]: I0320 08:35:40.585333 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 20 08:35:40.585782 master-0 kubenswrapper[7476]: I0320 08:35:40.585725 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Mar 20 08:35:40.585948 master-0 kubenswrapper[7476]: I0320 08:35:40.585841 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 20 08:35:40.586099 master-0 kubenswrapper[7476]: I0320 08:35:40.585738 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Mar 20 08:35:40.586475 master-0 kubenswrapper[7476]: I0320 08:35:40.586453 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Mar 20 08:35:40.586691 master-0 kubenswrapper[7476]: I0320 08:35:40.586673 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 20 08:35:40.588623 master-0 kubenswrapper[7476]: I0320 08:35:40.588596 7476 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-apiserver/apiserver-555b9794f6-68k4f"] Mar 20 08:35:40.599667 master-0 kubenswrapper[7476]: I0320 08:35:40.596705 7476 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-64b65cddf5-gx7h7"] Mar 20 08:35:40.610476 master-0 kubenswrapper[7476]: I0320 08:35:40.608305 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 20 08:35:40.641435 master-0 kubenswrapper[7476]: I0320 08:35:40.641380 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ca56e37d-80ea-432b-a6d9-f4e904a40e10-etcd-serving-ca\") pod \"apiserver-64b65cddf5-gx7h7\" (UID: \"ca56e37d-80ea-432b-a6d9-f4e904a40e10\") " pod="openshift-apiserver/apiserver-64b65cddf5-gx7h7" Mar 20 08:35:40.641435 master-0 kubenswrapper[7476]: I0320 08:35:40.641421 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ca56e37d-80ea-432b-a6d9-f4e904a40e10-trusted-ca-bundle\") pod \"apiserver-64b65cddf5-gx7h7\" (UID: \"ca56e37d-80ea-432b-a6d9-f4e904a40e10\") " pod="openshift-apiserver/apiserver-64b65cddf5-gx7h7" Mar 20 08:35:40.641435 master-0 kubenswrapper[7476]: I0320 08:35:40.641456 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ca56e37d-80ea-432b-a6d9-f4e904a40e10-encryption-config\") pod \"apiserver-64b65cddf5-gx7h7\" (UID: \"ca56e37d-80ea-432b-a6d9-f4e904a40e10\") " pod="openshift-apiserver/apiserver-64b65cddf5-gx7h7" Mar 20 08:35:40.641840 master-0 kubenswrapper[7476]: I0320 08:35:40.641551 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ca56e37d-80ea-432b-a6d9-f4e904a40e10-audit-dir\") pod \"apiserver-64b65cddf5-gx7h7\" (UID: \"ca56e37d-80ea-432b-a6d9-f4e904a40e10\") " pod="openshift-apiserver/apiserver-64b65cddf5-gx7h7" Mar 
20 08:35:40.641840 master-0 kubenswrapper[7476]: I0320 08:35:40.641576 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ca56e37d-80ea-432b-a6d9-f4e904a40e10-node-pullsecrets\") pod \"apiserver-64b65cddf5-gx7h7\" (UID: \"ca56e37d-80ea-432b-a6d9-f4e904a40e10\") " pod="openshift-apiserver/apiserver-64b65cddf5-gx7h7" Mar 20 08:35:40.641840 master-0 kubenswrapper[7476]: I0320 08:35:40.641642 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ca56e37d-80ea-432b-a6d9-f4e904a40e10-etcd-client\") pod \"apiserver-64b65cddf5-gx7h7\" (UID: \"ca56e37d-80ea-432b-a6d9-f4e904a40e10\") " pod="openshift-apiserver/apiserver-64b65cddf5-gx7h7" Mar 20 08:35:40.641840 master-0 kubenswrapper[7476]: I0320 08:35:40.641668 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ca56e37d-80ea-432b-a6d9-f4e904a40e10-audit\") pod \"apiserver-64b65cddf5-gx7h7\" (UID: \"ca56e37d-80ea-432b-a6d9-f4e904a40e10\") " pod="openshift-apiserver/apiserver-64b65cddf5-gx7h7" Mar 20 08:35:40.641840 master-0 kubenswrapper[7476]: I0320 08:35:40.641690 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca56e37d-80ea-432b-a6d9-f4e904a40e10-config\") pod \"apiserver-64b65cddf5-gx7h7\" (UID: \"ca56e37d-80ea-432b-a6d9-f4e904a40e10\") " pod="openshift-apiserver/apiserver-64b65cddf5-gx7h7" Mar 20 08:35:40.641840 master-0 kubenswrapper[7476]: I0320 08:35:40.641715 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ca56e37d-80ea-432b-a6d9-f4e904a40e10-image-import-ca\") pod \"apiserver-64b65cddf5-gx7h7\" (UID: 
\"ca56e37d-80ea-432b-a6d9-f4e904a40e10\") " pod="openshift-apiserver/apiserver-64b65cddf5-gx7h7" Mar 20 08:35:40.641840 master-0 kubenswrapper[7476]: I0320 08:35:40.641748 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxqp4\" (UniqueName: \"kubernetes.io/projected/ca56e37d-80ea-432b-a6d9-f4e904a40e10-kube-api-access-jxqp4\") pod \"apiserver-64b65cddf5-gx7h7\" (UID: \"ca56e37d-80ea-432b-a6d9-f4e904a40e10\") " pod="openshift-apiserver/apiserver-64b65cddf5-gx7h7" Mar 20 08:35:40.641840 master-0 kubenswrapper[7476]: I0320 08:35:40.641771 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca56e37d-80ea-432b-a6d9-f4e904a40e10-serving-cert\") pod \"apiserver-64b65cddf5-gx7h7\" (UID: \"ca56e37d-80ea-432b-a6d9-f4e904a40e10\") " pod="openshift-apiserver/apiserver-64b65cddf5-gx7h7" Mar 20 08:35:40.641840 master-0 kubenswrapper[7476]: I0320 08:35:40.641822 7476 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9983fdac-91cb-4f06-b39d-9306adef4071-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 20 08:35:40.641840 master-0 kubenswrapper[7476]: I0320 08:35:40.641838 7476 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/9983fdac-91cb-4f06-b39d-9306adef4071-audit\") on node \"master-0\" DevicePath \"\"" Mar 20 08:35:40.742916 master-0 kubenswrapper[7476]: I0320 08:35:40.742835 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ca56e37d-80ea-432b-a6d9-f4e904a40e10-audit\") pod \"apiserver-64b65cddf5-gx7h7\" (UID: \"ca56e37d-80ea-432b-a6d9-f4e904a40e10\") " pod="openshift-apiserver/apiserver-64b65cddf5-gx7h7" Mar 20 08:35:40.742916 master-0 kubenswrapper[7476]: I0320 08:35:40.742896 7476 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca56e37d-80ea-432b-a6d9-f4e904a40e10-config\") pod \"apiserver-64b65cddf5-gx7h7\" (UID: \"ca56e37d-80ea-432b-a6d9-f4e904a40e10\") " pod="openshift-apiserver/apiserver-64b65cddf5-gx7h7" Mar 20 08:35:40.742916 master-0 kubenswrapper[7476]: I0320 08:35:40.742920 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ca56e37d-80ea-432b-a6d9-f4e904a40e10-image-import-ca\") pod \"apiserver-64b65cddf5-gx7h7\" (UID: \"ca56e37d-80ea-432b-a6d9-f4e904a40e10\") " pod="openshift-apiserver/apiserver-64b65cddf5-gx7h7" Mar 20 08:35:40.743590 master-0 kubenswrapper[7476]: I0320 08:35:40.742961 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxqp4\" (UniqueName: \"kubernetes.io/projected/ca56e37d-80ea-432b-a6d9-f4e904a40e10-kube-api-access-jxqp4\") pod \"apiserver-64b65cddf5-gx7h7\" (UID: \"ca56e37d-80ea-432b-a6d9-f4e904a40e10\") " pod="openshift-apiserver/apiserver-64b65cddf5-gx7h7" Mar 20 08:35:40.743590 master-0 kubenswrapper[7476]: I0320 08:35:40.742984 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca56e37d-80ea-432b-a6d9-f4e904a40e10-serving-cert\") pod \"apiserver-64b65cddf5-gx7h7\" (UID: \"ca56e37d-80ea-432b-a6d9-f4e904a40e10\") " pod="openshift-apiserver/apiserver-64b65cddf5-gx7h7" Mar 20 08:35:40.743590 master-0 kubenswrapper[7476]: I0320 08:35:40.743017 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ca56e37d-80ea-432b-a6d9-f4e904a40e10-etcd-serving-ca\") pod \"apiserver-64b65cddf5-gx7h7\" (UID: \"ca56e37d-80ea-432b-a6d9-f4e904a40e10\") " pod="openshift-apiserver/apiserver-64b65cddf5-gx7h7" Mar 20 08:35:40.743590 master-0 kubenswrapper[7476]: I0320 08:35:40.743033 
7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ca56e37d-80ea-432b-a6d9-f4e904a40e10-trusted-ca-bundle\") pod \"apiserver-64b65cddf5-gx7h7\" (UID: \"ca56e37d-80ea-432b-a6d9-f4e904a40e10\") " pod="openshift-apiserver/apiserver-64b65cddf5-gx7h7" Mar 20 08:35:40.743590 master-0 kubenswrapper[7476]: I0320 08:35:40.743050 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ca56e37d-80ea-432b-a6d9-f4e904a40e10-encryption-config\") pod \"apiserver-64b65cddf5-gx7h7\" (UID: \"ca56e37d-80ea-432b-a6d9-f4e904a40e10\") " pod="openshift-apiserver/apiserver-64b65cddf5-gx7h7" Mar 20 08:35:40.743590 master-0 kubenswrapper[7476]: I0320 08:35:40.743115 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ca56e37d-80ea-432b-a6d9-f4e904a40e10-audit-dir\") pod \"apiserver-64b65cddf5-gx7h7\" (UID: \"ca56e37d-80ea-432b-a6d9-f4e904a40e10\") " pod="openshift-apiserver/apiserver-64b65cddf5-gx7h7" Mar 20 08:35:40.743590 master-0 kubenswrapper[7476]: I0320 08:35:40.743134 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ca56e37d-80ea-432b-a6d9-f4e904a40e10-node-pullsecrets\") pod \"apiserver-64b65cddf5-gx7h7\" (UID: \"ca56e37d-80ea-432b-a6d9-f4e904a40e10\") " pod="openshift-apiserver/apiserver-64b65cddf5-gx7h7" Mar 20 08:35:40.743590 master-0 kubenswrapper[7476]: I0320 08:35:40.743185 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ca56e37d-80ea-432b-a6d9-f4e904a40e10-etcd-client\") pod \"apiserver-64b65cddf5-gx7h7\" (UID: \"ca56e37d-80ea-432b-a6d9-f4e904a40e10\") " pod="openshift-apiserver/apiserver-64b65cddf5-gx7h7" Mar 20 08:35:40.744288 master-0 
kubenswrapper[7476]: I0320 08:35:40.744211 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ca56e37d-80ea-432b-a6d9-f4e904a40e10-image-import-ca\") pod \"apiserver-64b65cddf5-gx7h7\" (UID: \"ca56e37d-80ea-432b-a6d9-f4e904a40e10\") " pod="openshift-apiserver/apiserver-64b65cddf5-gx7h7" Mar 20 08:35:40.744892 master-0 kubenswrapper[7476]: I0320 08:35:40.744458 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca56e37d-80ea-432b-a6d9-f4e904a40e10-config\") pod \"apiserver-64b65cddf5-gx7h7\" (UID: \"ca56e37d-80ea-432b-a6d9-f4e904a40e10\") " pod="openshift-apiserver/apiserver-64b65cddf5-gx7h7" Mar 20 08:35:40.744892 master-0 kubenswrapper[7476]: I0320 08:35:40.744496 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ca56e37d-80ea-432b-a6d9-f4e904a40e10-etcd-serving-ca\") pod \"apiserver-64b65cddf5-gx7h7\" (UID: \"ca56e37d-80ea-432b-a6d9-f4e904a40e10\") " pod="openshift-apiserver/apiserver-64b65cddf5-gx7h7" Mar 20 08:35:40.744892 master-0 kubenswrapper[7476]: E0320 08:35:40.744519 7476 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found Mar 20 08:35:40.744892 master-0 kubenswrapper[7476]: I0320 08:35:40.744559 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ca56e37d-80ea-432b-a6d9-f4e904a40e10-audit-dir\") pod \"apiserver-64b65cddf5-gx7h7\" (UID: \"ca56e37d-80ea-432b-a6d9-f4e904a40e10\") " pod="openshift-apiserver/apiserver-64b65cddf5-gx7h7" Mar 20 08:35:40.744892 master-0 kubenswrapper[7476]: I0320 08:35:40.744566 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ca56e37d-80ea-432b-a6d9-f4e904a40e10-node-pullsecrets\") pod 
\"apiserver-64b65cddf5-gx7h7\" (UID: \"ca56e37d-80ea-432b-a6d9-f4e904a40e10\") " pod="openshift-apiserver/apiserver-64b65cddf5-gx7h7" Mar 20 08:35:40.744892 master-0 kubenswrapper[7476]: E0320 08:35:40.744609 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ca56e37d-80ea-432b-a6d9-f4e904a40e10-serving-cert podName:ca56e37d-80ea-432b-a6d9-f4e904a40e10 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:41.244585264 +0000 UTC m=+22.213353790 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ca56e37d-80ea-432b-a6d9-f4e904a40e10-serving-cert") pod "apiserver-64b65cddf5-gx7h7" (UID: "ca56e37d-80ea-432b-a6d9-f4e904a40e10") : secret "serving-cert" not found Mar 20 08:35:40.744892 master-0 kubenswrapper[7476]: I0320 08:35:40.744751 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ca56e37d-80ea-432b-a6d9-f4e904a40e10-audit\") pod \"apiserver-64b65cddf5-gx7h7\" (UID: \"ca56e37d-80ea-432b-a6d9-f4e904a40e10\") " pod="openshift-apiserver/apiserver-64b65cddf5-gx7h7" Mar 20 08:35:40.745371 master-0 kubenswrapper[7476]: I0320 08:35:40.745323 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ca56e37d-80ea-432b-a6d9-f4e904a40e10-trusted-ca-bundle\") pod \"apiserver-64b65cddf5-gx7h7\" (UID: \"ca56e37d-80ea-432b-a6d9-f4e904a40e10\") " pod="openshift-apiserver/apiserver-64b65cddf5-gx7h7" Mar 20 08:35:40.749580 master-0 kubenswrapper[7476]: I0320 08:35:40.749446 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ca56e37d-80ea-432b-a6d9-f4e904a40e10-etcd-client\") pod \"apiserver-64b65cddf5-gx7h7\" (UID: \"ca56e37d-80ea-432b-a6d9-f4e904a40e10\") " pod="openshift-apiserver/apiserver-64b65cddf5-gx7h7" Mar 20 08:35:40.750022 master-0 
kubenswrapper[7476]: I0320 08:35:40.749967 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ca56e37d-80ea-432b-a6d9-f4e904a40e10-encryption-config\") pod \"apiserver-64b65cddf5-gx7h7\" (UID: \"ca56e37d-80ea-432b-a6d9-f4e904a40e10\") " pod="openshift-apiserver/apiserver-64b65cddf5-gx7h7" Mar 20 08:35:40.819653 master-0 kubenswrapper[7476]: I0320 08:35:40.819483 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxqp4\" (UniqueName: \"kubernetes.io/projected/ca56e37d-80ea-432b-a6d9-f4e904a40e10-kube-api-access-jxqp4\") pod \"apiserver-64b65cddf5-gx7h7\" (UID: \"ca56e37d-80ea-432b-a6d9-f4e904a40e10\") " pod="openshift-apiserver/apiserver-64b65cddf5-gx7h7" Mar 20 08:35:41.243613 master-0 kubenswrapper[7476]: I0320 08:35:41.243532 7476 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9983fdac-91cb-4f06-b39d-9306adef4071" path="/var/lib/kubelet/pods/9983fdac-91cb-4f06-b39d-9306adef4071/volumes" Mar 20 08:35:41.252786 master-0 kubenswrapper[7476]: I0320 08:35:41.252742 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca56e37d-80ea-432b-a6d9-f4e904a40e10-serving-cert\") pod \"apiserver-64b65cddf5-gx7h7\" (UID: \"ca56e37d-80ea-432b-a6d9-f4e904a40e10\") " pod="openshift-apiserver/apiserver-64b65cddf5-gx7h7" Mar 20 08:35:41.253659 master-0 kubenswrapper[7476]: E0320 08:35:41.253026 7476 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found Mar 20 08:35:41.253659 master-0 kubenswrapper[7476]: E0320 08:35:41.253129 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ca56e37d-80ea-432b-a6d9-f4e904a40e10-serving-cert podName:ca56e37d-80ea-432b-a6d9-f4e904a40e10 nodeName:}" failed. 
No retries permitted until 2026-03-20 08:35:42.253103916 +0000 UTC m=+23.221872442 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ca56e37d-80ea-432b-a6d9-f4e904a40e10-serving-cert") pod "apiserver-64b65cddf5-gx7h7" (UID: "ca56e37d-80ea-432b-a6d9-f4e904a40e10") : secret "serving-cert" not found Mar 20 08:35:41.531391 master-0 kubenswrapper[7476]: I0320 08:35:41.531167 7476 generic.go:334] "Generic (PLEG): container finished" podID="e9425526-9f51-4302-a19d-a8107f56c582" containerID="ede2ef38ba8d0fe732989d57db50e82ff2ef33b1e7f1869b8d140d9c93969650" exitCode=0 Mar 20 08:35:41.531391 master-0 kubenswrapper[7476]: I0320 08:35:41.531212 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-gwtrl" event={"ID":"e9425526-9f51-4302-a19d-a8107f56c582","Type":"ContainerDied","Data":"ede2ef38ba8d0fe732989d57db50e82ff2ef33b1e7f1869b8d140d9c93969650"} Mar 20 08:35:41.532393 master-0 kubenswrapper[7476]: I0320 08:35:41.531594 7476 scope.go:117] "RemoveContainer" containerID="ede2ef38ba8d0fe732989d57db50e82ff2ef33b1e7f1869b8d140d9c93969650" Mar 20 08:35:42.273660 master-0 kubenswrapper[7476]: I0320 08:35:42.273587 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca56e37d-80ea-432b-a6d9-f4e904a40e10-serving-cert\") pod \"apiserver-64b65cddf5-gx7h7\" (UID: \"ca56e37d-80ea-432b-a6d9-f4e904a40e10\") " pod="openshift-apiserver/apiserver-64b65cddf5-gx7h7" Mar 20 08:35:42.273867 master-0 kubenswrapper[7476]: E0320 08:35:42.273776 7476 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found Mar 20 08:35:42.273867 master-0 kubenswrapper[7476]: E0320 08:35:42.273851 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ca56e37d-80ea-432b-a6d9-f4e904a40e10-serving-cert 
podName:ca56e37d-80ea-432b-a6d9-f4e904a40e10 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:44.273831777 +0000 UTC m=+25.242600303 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ca56e37d-80ea-432b-a6d9-f4e904a40e10-serving-cert") pod "apiserver-64b65cddf5-gx7h7" (UID: "ca56e37d-80ea-432b-a6d9-f4e904a40e10") : secret "serving-cert" not found Mar 20 08:35:43.232606 master-0 kubenswrapper[7476]: I0320 08:35:43.232521 7476 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-1-master-0"] Mar 20 08:35:43.234111 master-0 kubenswrapper[7476]: I0320 08:35:43.233342 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0" Mar 20 08:35:43.240623 master-0 kubenswrapper[7476]: I0320 08:35:43.236229 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd"/"kube-root-ca.crt" Mar 20 08:35:43.255107 master-0 kubenswrapper[7476]: I0320 08:35:43.249564 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-1-master-0"] Mar 20 08:35:43.297719 master-0 kubenswrapper[7476]: I0320 08:35:43.292053 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/169353ee-c927-4483-8976-b9ca08b0a6d1-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"169353ee-c927-4483-8976-b9ca08b0a6d1\") " pod="openshift-etcd/installer-1-master-0" Mar 20 08:35:43.297719 master-0 kubenswrapper[7476]: I0320 08:35:43.292122 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/169353ee-c927-4483-8976-b9ca08b0a6d1-kube-api-access\") pod \"installer-1-master-0\" (UID: \"169353ee-c927-4483-8976-b9ca08b0a6d1\") " pod="openshift-etcd/installer-1-master-0" Mar 20 08:35:43.297719 master-0 
kubenswrapper[7476]: I0320 08:35:43.292151 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/169353ee-c927-4483-8976-b9ca08b0a6d1-var-lock\") pod \"installer-1-master-0\" (UID: \"169353ee-c927-4483-8976-b9ca08b0a6d1\") " pod="openshift-etcd/installer-1-master-0" Mar 20 08:35:43.393492 master-0 kubenswrapper[7476]: I0320 08:35:43.393303 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/169353ee-c927-4483-8976-b9ca08b0a6d1-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"169353ee-c927-4483-8976-b9ca08b0a6d1\") " pod="openshift-etcd/installer-1-master-0" Mar 20 08:35:43.393492 master-0 kubenswrapper[7476]: I0320 08:35:43.393369 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/169353ee-c927-4483-8976-b9ca08b0a6d1-kube-api-access\") pod \"installer-1-master-0\" (UID: \"169353ee-c927-4483-8976-b9ca08b0a6d1\") " pod="openshift-etcd/installer-1-master-0" Mar 20 08:35:43.393492 master-0 kubenswrapper[7476]: I0320 08:35:43.393397 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/169353ee-c927-4483-8976-b9ca08b0a6d1-var-lock\") pod \"installer-1-master-0\" (UID: \"169353ee-c927-4483-8976-b9ca08b0a6d1\") " pod="openshift-etcd/installer-1-master-0" Mar 20 08:35:43.393779 master-0 kubenswrapper[7476]: I0320 08:35:43.393576 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/169353ee-c927-4483-8976-b9ca08b0a6d1-var-lock\") pod \"installer-1-master-0\" (UID: \"169353ee-c927-4483-8976-b9ca08b0a6d1\") " pod="openshift-etcd/installer-1-master-0" Mar 20 08:35:43.393779 master-0 kubenswrapper[7476]: I0320 08:35:43.393624 7476 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/169353ee-c927-4483-8976-b9ca08b0a6d1-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"169353ee-c927-4483-8976-b9ca08b0a6d1\") " pod="openshift-etcd/installer-1-master-0" Mar 20 08:35:43.429961 master-0 kubenswrapper[7476]: I0320 08:35:43.429884 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/169353ee-c927-4483-8976-b9ca08b0a6d1-kube-api-access\") pod \"installer-1-master-0\" (UID: \"169353ee-c927-4483-8976-b9ca08b0a6d1\") " pod="openshift-etcd/installer-1-master-0" Mar 20 08:35:43.555050 master-0 kubenswrapper[7476]: I0320 08:35:43.554927 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0" Mar 20 08:35:43.629854 master-0 kubenswrapper[7476]: I0320 08:35:43.629765 7476 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-c9f7c448d-k8dq9"] Mar 20 08:35:43.630406 master-0 kubenswrapper[7476]: E0320 08:35:43.630323 7476 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-c9f7c448d-k8dq9" podUID="76e0afae-e3d2-4eb1-825a-a8e5498e1d5c" Mar 20 08:35:43.643715 master-0 kubenswrapper[7476]: I0320 08:35:43.643620 7476 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b966fd84d-tvkk6"] Mar 20 08:35:43.644075 master-0 kubenswrapper[7476]: E0320 08:35:43.644036 7476 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-route-controller-manager/route-controller-manager-6b966fd84d-tvkk6" podUID="44d1bab1-22a1-45f4-b722-afef91f56a31" Mar 
20 08:35:44.305649 master-0 kubenswrapper[7476]: I0320 08:35:44.305242 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca56e37d-80ea-432b-a6d9-f4e904a40e10-serving-cert\") pod \"apiserver-64b65cddf5-gx7h7\" (UID: \"ca56e37d-80ea-432b-a6d9-f4e904a40e10\") " pod="openshift-apiserver/apiserver-64b65cddf5-gx7h7" Mar 20 08:35:44.310167 master-0 kubenswrapper[7476]: I0320 08:35:44.310118 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca56e37d-80ea-432b-a6d9-f4e904a40e10-serving-cert\") pod \"apiserver-64b65cddf5-gx7h7\" (UID: \"ca56e37d-80ea-432b-a6d9-f4e904a40e10\") " pod="openshift-apiserver/apiserver-64b65cddf5-gx7h7" Mar 20 08:35:44.505044 master-0 kubenswrapper[7476]: I0320 08:35:44.504958 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-64b65cddf5-gx7h7" Mar 20 08:35:44.542311 master-0 kubenswrapper[7476]: I0320 08:35:44.542137 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b966fd84d-tvkk6" Mar 20 08:35:44.542311 master-0 kubenswrapper[7476]: I0320 08:35:44.542217 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-c9f7c448d-k8dq9" Mar 20 08:35:44.554464 master-0 kubenswrapper[7476]: I0320 08:35:44.554300 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b966fd84d-tvkk6" Mar 20 08:35:44.560956 master-0 kubenswrapper[7476]: I0320 08:35:44.560797 7476 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-c9f7c448d-k8dq9" Mar 20 08:35:44.609489 master-0 kubenswrapper[7476]: I0320 08:35:44.609434 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmsql\" (UniqueName: \"kubernetes.io/projected/44d1bab1-22a1-45f4-b722-afef91f56a31-kube-api-access-rmsql\") pod \"44d1bab1-22a1-45f4-b722-afef91f56a31\" (UID: \"44d1bab1-22a1-45f4-b722-afef91f56a31\") " Mar 20 08:35:44.609697 master-0 kubenswrapper[7476]: I0320 08:35:44.609509 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44d1bab1-22a1-45f4-b722-afef91f56a31-config\") pod \"44d1bab1-22a1-45f4-b722-afef91f56a31\" (UID: \"44d1bab1-22a1-45f4-b722-afef91f56a31\") " Mar 20 08:35:44.610520 master-0 kubenswrapper[7476]: I0320 08:35:44.610466 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44d1bab1-22a1-45f4-b722-afef91f56a31-config" (OuterVolumeSpecName: "config") pod "44d1bab1-22a1-45f4-b722-afef91f56a31" (UID: "44d1bab1-22a1-45f4-b722-afef91f56a31"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:35:44.614437 master-0 kubenswrapper[7476]: I0320 08:35:44.614383 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44d1bab1-22a1-45f4-b722-afef91f56a31-kube-api-access-rmsql" (OuterVolumeSpecName: "kube-api-access-rmsql") pod "44d1bab1-22a1-45f4-b722-afef91f56a31" (UID: "44d1bab1-22a1-45f4-b722-afef91f56a31"). InnerVolumeSpecName "kube-api-access-rmsql". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:35:44.711196 master-0 kubenswrapper[7476]: I0320 08:35:44.711138 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76e0afae-e3d2-4eb1-825a-a8e5498e1d5c-config\") pod \"76e0afae-e3d2-4eb1-825a-a8e5498e1d5c\" (UID: \"76e0afae-e3d2-4eb1-825a-a8e5498e1d5c\") " Mar 20 08:35:44.711473 master-0 kubenswrapper[7476]: I0320 08:35:44.711278 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxz2v\" (UniqueName: \"kubernetes.io/projected/76e0afae-e3d2-4eb1-825a-a8e5498e1d5c-kube-api-access-bxz2v\") pod \"76e0afae-e3d2-4eb1-825a-a8e5498e1d5c\" (UID: \"76e0afae-e3d2-4eb1-825a-a8e5498e1d5c\") " Mar 20 08:35:44.711473 master-0 kubenswrapper[7476]: I0320 08:35:44.711313 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76e0afae-e3d2-4eb1-825a-a8e5498e1d5c-serving-cert\") pod \"76e0afae-e3d2-4eb1-825a-a8e5498e1d5c\" (UID: \"76e0afae-e3d2-4eb1-825a-a8e5498e1d5c\") " Mar 20 08:35:44.711473 master-0 kubenswrapper[7476]: I0320 08:35:44.711341 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/76e0afae-e3d2-4eb1-825a-a8e5498e1d5c-proxy-ca-bundles\") pod \"76e0afae-e3d2-4eb1-825a-a8e5498e1d5c\" (UID: \"76e0afae-e3d2-4eb1-825a-a8e5498e1d5c\") " Mar 20 08:35:44.711682 master-0 kubenswrapper[7476]: I0320 08:35:44.711661 7476 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmsql\" (UniqueName: \"kubernetes.io/projected/44d1bab1-22a1-45f4-b722-afef91f56a31-kube-api-access-rmsql\") on node \"master-0\" DevicePath \"\"" Mar 20 08:35:44.711682 master-0 kubenswrapper[7476]: I0320 08:35:44.711680 7476 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/44d1bab1-22a1-45f4-b722-afef91f56a31-config\") on node \"master-0\" DevicePath \"\"" Mar 20 08:35:44.711814 master-0 kubenswrapper[7476]: I0320 08:35:44.711769 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76e0afae-e3d2-4eb1-825a-a8e5498e1d5c-config" (OuterVolumeSpecName: "config") pod "76e0afae-e3d2-4eb1-825a-a8e5498e1d5c" (UID: "76e0afae-e3d2-4eb1-825a-a8e5498e1d5c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:35:44.712423 master-0 kubenswrapper[7476]: I0320 08:35:44.712370 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76e0afae-e3d2-4eb1-825a-a8e5498e1d5c-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "76e0afae-e3d2-4eb1-825a-a8e5498e1d5c" (UID: "76e0afae-e3d2-4eb1-825a-a8e5498e1d5c"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:35:44.715224 master-0 kubenswrapper[7476]: I0320 08:35:44.715195 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76e0afae-e3d2-4eb1-825a-a8e5498e1d5c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "76e0afae-e3d2-4eb1-825a-a8e5498e1d5c" (UID: "76e0afae-e3d2-4eb1-825a-a8e5498e1d5c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:35:44.715504 master-0 kubenswrapper[7476]: I0320 08:35:44.715439 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76e0afae-e3d2-4eb1-825a-a8e5498e1d5c-kube-api-access-bxz2v" (OuterVolumeSpecName: "kube-api-access-bxz2v") pod "76e0afae-e3d2-4eb1-825a-a8e5498e1d5c" (UID: "76e0afae-e3d2-4eb1-825a-a8e5498e1d5c"). InnerVolumeSpecName "kube-api-access-bxz2v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:35:44.812729 master-0 kubenswrapper[7476]: I0320 08:35:44.812551 7476 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bxz2v\" (UniqueName: \"kubernetes.io/projected/76e0afae-e3d2-4eb1-825a-a8e5498e1d5c-kube-api-access-bxz2v\") on node \"master-0\" DevicePath \"\"" Mar 20 08:35:44.812729 master-0 kubenswrapper[7476]: I0320 08:35:44.812587 7476 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76e0afae-e3d2-4eb1-825a-a8e5498e1d5c-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 20 08:35:44.812729 master-0 kubenswrapper[7476]: I0320 08:35:44.812595 7476 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/76e0afae-e3d2-4eb1-825a-a8e5498e1d5c-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Mar 20 08:35:44.812729 master-0 kubenswrapper[7476]: I0320 08:35:44.812604 7476 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76e0afae-e3d2-4eb1-825a-a8e5498e1d5c-config\") on node \"master-0\" DevicePath \"\"" Mar 20 08:35:45.546820 master-0 kubenswrapper[7476]: I0320 08:35:45.546573 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-c9f7c448d-k8dq9" Mar 20 08:35:45.546820 master-0 kubenswrapper[7476]: I0320 08:35:45.546598 7476 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b966fd84d-tvkk6" Mar 20 08:35:46.036570 master-0 kubenswrapper[7476]: I0320 08:35:46.036475 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/76e0afae-e3d2-4eb1-825a-a8e5498e1d5c-client-ca\") pod \"controller-manager-c9f7c448d-k8dq9\" (UID: \"76e0afae-e3d2-4eb1-825a-a8e5498e1d5c\") " pod="openshift-controller-manager/controller-manager-c9f7c448d-k8dq9" Mar 20 08:35:46.036939 master-0 kubenswrapper[7476]: E0320 08:35:46.036716 7476 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered Mar 20 08:35:46.036939 master-0 kubenswrapper[7476]: E0320 08:35:46.036775 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/76e0afae-e3d2-4eb1-825a-a8e5498e1d5c-client-ca podName:76e0afae-e3d2-4eb1-825a-a8e5498e1d5c nodeName:}" failed. No retries permitted until 2026-03-20 08:36:02.036757277 +0000 UTC m=+43.005525823 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/76e0afae-e3d2-4eb1-825a-a8e5498e1d5c-client-ca") pod "controller-manager-c9f7c448d-k8dq9" (UID: "76e0afae-e3d2-4eb1-825a-a8e5498e1d5c") : object "openshift-controller-manager"/"client-ca" not registered Mar 20 08:35:47.499295 master-0 kubenswrapper[7476]: I0320 08:35:47.496086 7476 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5d9c65fcf4-x9fg6"] Mar 20 08:35:47.499295 master-0 kubenswrapper[7476]: I0320 08:35:47.498846 7476 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5d9c65fcf4-x9fg6" Mar 20 08:35:47.501051 master-0 kubenswrapper[7476]: I0320 08:35:47.500772 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 20 08:35:47.506543 master-0 kubenswrapper[7476]: I0320 08:35:47.506501 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 20 08:35:47.506726 master-0 kubenswrapper[7476]: I0320 08:35:47.506566 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 20 08:35:47.508104 master-0 kubenswrapper[7476]: I0320 08:35:47.506950 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 20 08:35:47.508104 master-0 kubenswrapper[7476]: I0320 08:35:47.507044 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 20 08:35:47.510132 master-0 kubenswrapper[7476]: I0320 08:35:47.510012 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 20 08:35:47.580019 master-0 kubenswrapper[7476]: I0320 08:35:47.579945 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzt54\" (UniqueName: \"kubernetes.io/projected/0636bf2d-3ba2-4f2b-9f8d-da11f4507985-kube-api-access-vzt54\") pod \"controller-manager-5d9c65fcf4-x9fg6\" (UID: \"0636bf2d-3ba2-4f2b-9f8d-da11f4507985\") " pod="openshift-controller-manager/controller-manager-5d9c65fcf4-x9fg6" Mar 20 08:35:47.580297 master-0 kubenswrapper[7476]: I0320 08:35:47.580182 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/0636bf2d-3ba2-4f2b-9f8d-da11f4507985-serving-cert\") pod \"controller-manager-5d9c65fcf4-x9fg6\" (UID: \"0636bf2d-3ba2-4f2b-9f8d-da11f4507985\") " pod="openshift-controller-manager/controller-manager-5d9c65fcf4-x9fg6" Mar 20 08:35:47.580297 master-0 kubenswrapper[7476]: I0320 08:35:47.580248 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0636bf2d-3ba2-4f2b-9f8d-da11f4507985-proxy-ca-bundles\") pod \"controller-manager-5d9c65fcf4-x9fg6\" (UID: \"0636bf2d-3ba2-4f2b-9f8d-da11f4507985\") " pod="openshift-controller-manager/controller-manager-5d9c65fcf4-x9fg6" Mar 20 08:35:47.580591 master-0 kubenswrapper[7476]: I0320 08:35:47.580499 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0636bf2d-3ba2-4f2b-9f8d-da11f4507985-config\") pod \"controller-manager-5d9c65fcf4-x9fg6\" (UID: \"0636bf2d-3ba2-4f2b-9f8d-da11f4507985\") " pod="openshift-controller-manager/controller-manager-5d9c65fcf4-x9fg6" Mar 20 08:35:47.580725 master-0 kubenswrapper[7476]: I0320 08:35:47.580639 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0636bf2d-3ba2-4f2b-9f8d-da11f4507985-client-ca\") pod \"controller-manager-5d9c65fcf4-x9fg6\" (UID: \"0636bf2d-3ba2-4f2b-9f8d-da11f4507985\") " pod="openshift-controller-manager/controller-manager-5d9c65fcf4-x9fg6" Mar 20 08:35:47.681828 master-0 kubenswrapper[7476]: I0320 08:35:47.681711 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzt54\" (UniqueName: \"kubernetes.io/projected/0636bf2d-3ba2-4f2b-9f8d-da11f4507985-kube-api-access-vzt54\") pod \"controller-manager-5d9c65fcf4-x9fg6\" (UID: \"0636bf2d-3ba2-4f2b-9f8d-da11f4507985\") " 
pod="openshift-controller-manager/controller-manager-5d9c65fcf4-x9fg6" Mar 20 08:35:47.682331 master-0 kubenswrapper[7476]: I0320 08:35:47.682248 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0636bf2d-3ba2-4f2b-9f8d-da11f4507985-serving-cert\") pod \"controller-manager-5d9c65fcf4-x9fg6\" (UID: \"0636bf2d-3ba2-4f2b-9f8d-da11f4507985\") " pod="openshift-controller-manager/controller-manager-5d9c65fcf4-x9fg6" Mar 20 08:35:47.682331 master-0 kubenswrapper[7476]: I0320 08:35:47.682319 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0636bf2d-3ba2-4f2b-9f8d-da11f4507985-proxy-ca-bundles\") pod \"controller-manager-5d9c65fcf4-x9fg6\" (UID: \"0636bf2d-3ba2-4f2b-9f8d-da11f4507985\") " pod="openshift-controller-manager/controller-manager-5d9c65fcf4-x9fg6" Mar 20 08:35:47.682504 master-0 kubenswrapper[7476]: I0320 08:35:47.682367 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0636bf2d-3ba2-4f2b-9f8d-da11f4507985-config\") pod \"controller-manager-5d9c65fcf4-x9fg6\" (UID: \"0636bf2d-3ba2-4f2b-9f8d-da11f4507985\") " pod="openshift-controller-manager/controller-manager-5d9c65fcf4-x9fg6" Mar 20 08:35:47.682504 master-0 kubenswrapper[7476]: I0320 08:35:47.682461 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0636bf2d-3ba2-4f2b-9f8d-da11f4507985-client-ca\") pod \"controller-manager-5d9c65fcf4-x9fg6\" (UID: \"0636bf2d-3ba2-4f2b-9f8d-da11f4507985\") " pod="openshift-controller-manager/controller-manager-5d9c65fcf4-x9fg6" Mar 20 08:35:47.683766 master-0 kubenswrapper[7476]: I0320 08:35:47.683724 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/0636bf2d-3ba2-4f2b-9f8d-da11f4507985-client-ca\") pod \"controller-manager-5d9c65fcf4-x9fg6\" (UID: \"0636bf2d-3ba2-4f2b-9f8d-da11f4507985\") " pod="openshift-controller-manager/controller-manager-5d9c65fcf4-x9fg6" Mar 20 08:35:47.684156 master-0 kubenswrapper[7476]: I0320 08:35:47.684093 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0636bf2d-3ba2-4f2b-9f8d-da11f4507985-config\") pod \"controller-manager-5d9c65fcf4-x9fg6\" (UID: \"0636bf2d-3ba2-4f2b-9f8d-da11f4507985\") " pod="openshift-controller-manager/controller-manager-5d9c65fcf4-x9fg6" Mar 20 08:35:47.684776 master-0 kubenswrapper[7476]: I0320 08:35:47.684722 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0636bf2d-3ba2-4f2b-9f8d-da11f4507985-proxy-ca-bundles\") pod \"controller-manager-5d9c65fcf4-x9fg6\" (UID: \"0636bf2d-3ba2-4f2b-9f8d-da11f4507985\") " pod="openshift-controller-manager/controller-manager-5d9c65fcf4-x9fg6" Mar 20 08:35:47.686002 master-0 kubenswrapper[7476]: I0320 08:35:47.685923 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0636bf2d-3ba2-4f2b-9f8d-da11f4507985-serving-cert\") pod \"controller-manager-5d9c65fcf4-x9fg6\" (UID: \"0636bf2d-3ba2-4f2b-9f8d-da11f4507985\") " pod="openshift-controller-manager/controller-manager-5d9c65fcf4-x9fg6" Mar 20 08:35:47.987741 master-0 kubenswrapper[7476]: I0320 08:35:47.987533 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44d1bab1-22a1-45f4-b722-afef91f56a31-serving-cert\") pod \"route-controller-manager-6b966fd84d-tvkk6\" (UID: \"44d1bab1-22a1-45f4-b722-afef91f56a31\") " pod="openshift-route-controller-manager/route-controller-manager-6b966fd84d-tvkk6" Mar 20 08:35:47.988046 master-0 kubenswrapper[7476]: 
I0320 08:35:47.987976 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/44d1bab1-22a1-45f4-b722-afef91f56a31-client-ca\") pod \"route-controller-manager-6b966fd84d-tvkk6\" (UID: \"44d1bab1-22a1-45f4-b722-afef91f56a31\") " pod="openshift-route-controller-manager/route-controller-manager-6b966fd84d-tvkk6" Mar 20 08:35:47.988338 master-0 kubenswrapper[7476]: E0320 08:35:47.988226 7476 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: object "openshift-route-controller-manager"/"client-ca" not registered Mar 20 08:35:47.988471 master-0 kubenswrapper[7476]: E0320 08:35:47.988393 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/44d1bab1-22a1-45f4-b722-afef91f56a31-client-ca podName:44d1bab1-22a1-45f4-b722-afef91f56a31 nodeName:}" failed. No retries permitted until 2026-03-20 08:36:03.988350952 +0000 UTC m=+44.957119518 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/44d1bab1-22a1-45f4-b722-afef91f56a31-client-ca") pod "route-controller-manager-6b966fd84d-tvkk6" (UID: "44d1bab1-22a1-45f4-b722-afef91f56a31") : object "openshift-route-controller-manager"/"client-ca" not registered Mar 20 08:35:47.988611 master-0 kubenswrapper[7476]: E0320 08:35:47.988542 7476 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: object "openshift-route-controller-manager"/"serving-cert" not registered Mar 20 08:35:47.988704 master-0 kubenswrapper[7476]: E0320 08:35:47.988687 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44d1bab1-22a1-45f4-b722-afef91f56a31-serving-cert podName:44d1bab1-22a1-45f4-b722-afef91f56a31 nodeName:}" failed. No retries permitted until 2026-03-20 08:36:03.98865658 +0000 UTC m=+44.957425276 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/44d1bab1-22a1-45f4-b722-afef91f56a31-serving-cert") pod "route-controller-manager-6b966fd84d-tvkk6" (UID: "44d1bab1-22a1-45f4-b722-afef91f56a31") : object "openshift-route-controller-manager"/"serving-cert" not registered Mar 20 08:35:48.847942 master-0 kubenswrapper[7476]: I0320 08:35:48.847841 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5d9c65fcf4-x9fg6"] Mar 20 08:35:48.851101 master-0 kubenswrapper[7476]: I0320 08:35:48.851030 7476 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-c9f7c448d-k8dq9"] Mar 20 08:35:49.488922 master-0 kubenswrapper[7476]: I0320 08:35:49.488731 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzt54\" (UniqueName: \"kubernetes.io/projected/0636bf2d-3ba2-4f2b-9f8d-da11f4507985-kube-api-access-vzt54\") pod \"controller-manager-5d9c65fcf4-x9fg6\" (UID: \"0636bf2d-3ba2-4f2b-9f8d-da11f4507985\") " pod="openshift-controller-manager/controller-manager-5d9c65fcf4-x9fg6" Mar 20 08:35:49.507176 master-0 kubenswrapper[7476]: I0320 08:35:49.507099 7476 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-c9f7c448d-k8dq9"] Mar 20 08:35:49.623061 master-0 kubenswrapper[7476]: I0320 08:35:49.622983 7476 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/76e0afae-e3d2-4eb1-825a-a8e5498e1d5c-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 20 08:35:49.633127 master-0 kubenswrapper[7476]: I0320 08:35:49.633081 7476 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5d9c65fcf4-x9fg6" Mar 20 08:35:50.211312 master-0 kubenswrapper[7476]: I0320 08:35:50.209544 7476 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b6dd4d5b8-87rv2"] Mar 20 08:35:50.211312 master-0 kubenswrapper[7476]: I0320 08:35:50.210412 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b6dd4d5b8-87rv2" Mar 20 08:35:50.219722 master-0 kubenswrapper[7476]: I0320 08:35:50.219336 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 20 08:35:50.219722 master-0 kubenswrapper[7476]: I0320 08:35:50.219715 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 20 08:35:50.220078 master-0 kubenswrapper[7476]: I0320 08:35:50.219973 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 20 08:35:50.220334 master-0 kubenswrapper[7476]: I0320 08:35:50.220304 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 20 08:35:50.226609 master-0 kubenswrapper[7476]: I0320 08:35:50.226561 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 20 08:35:50.335252 master-0 kubenswrapper[7476]: I0320 08:35:50.335066 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/59a2bbd7-3b08-45cf-9a7c-542effc09ec2-client-ca\") pod \"route-controller-manager-7b6dd4d5b8-87rv2\" (UID: \"59a2bbd7-3b08-45cf-9a7c-542effc09ec2\") " pod="openshift-route-controller-manager/route-controller-manager-7b6dd4d5b8-87rv2" Mar 20 
08:35:50.335556 master-0 kubenswrapper[7476]: I0320 08:35:50.335299 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59a2bbd7-3b08-45cf-9a7c-542effc09ec2-config\") pod \"route-controller-manager-7b6dd4d5b8-87rv2\" (UID: \"59a2bbd7-3b08-45cf-9a7c-542effc09ec2\") " pod="openshift-route-controller-manager/route-controller-manager-7b6dd4d5b8-87rv2" Mar 20 08:35:50.335556 master-0 kubenswrapper[7476]: I0320 08:35:50.335426 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r687z\" (UniqueName: \"kubernetes.io/projected/59a2bbd7-3b08-45cf-9a7c-542effc09ec2-kube-api-access-r687z\") pod \"route-controller-manager-7b6dd4d5b8-87rv2\" (UID: \"59a2bbd7-3b08-45cf-9a7c-542effc09ec2\") " pod="openshift-route-controller-manager/route-controller-manager-7b6dd4d5b8-87rv2" Mar 20 08:35:50.335556 master-0 kubenswrapper[7476]: I0320 08:35:50.335521 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59a2bbd7-3b08-45cf-9a7c-542effc09ec2-serving-cert\") pod \"route-controller-manager-7b6dd4d5b8-87rv2\" (UID: \"59a2bbd7-3b08-45cf-9a7c-542effc09ec2\") " pod="openshift-route-controller-manager/route-controller-manager-7b6dd4d5b8-87rv2" Mar 20 08:35:50.437402 master-0 kubenswrapper[7476]: I0320 08:35:50.437255 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/59a2bbd7-3b08-45cf-9a7c-542effc09ec2-client-ca\") pod \"route-controller-manager-7b6dd4d5b8-87rv2\" (UID: \"59a2bbd7-3b08-45cf-9a7c-542effc09ec2\") " pod="openshift-route-controller-manager/route-controller-manager-7b6dd4d5b8-87rv2" Mar 20 08:35:50.437402 master-0 kubenswrapper[7476]: I0320 08:35:50.437417 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/59a2bbd7-3b08-45cf-9a7c-542effc09ec2-config\") pod \"route-controller-manager-7b6dd4d5b8-87rv2\" (UID: \"59a2bbd7-3b08-45cf-9a7c-542effc09ec2\") " pod="openshift-route-controller-manager/route-controller-manager-7b6dd4d5b8-87rv2" Mar 20 08:35:50.437853 master-0 kubenswrapper[7476]: I0320 08:35:50.437474 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r687z\" (UniqueName: \"kubernetes.io/projected/59a2bbd7-3b08-45cf-9a7c-542effc09ec2-kube-api-access-r687z\") pod \"route-controller-manager-7b6dd4d5b8-87rv2\" (UID: \"59a2bbd7-3b08-45cf-9a7c-542effc09ec2\") " pod="openshift-route-controller-manager/route-controller-manager-7b6dd4d5b8-87rv2" Mar 20 08:35:50.437853 master-0 kubenswrapper[7476]: I0320 08:35:50.437711 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59a2bbd7-3b08-45cf-9a7c-542effc09ec2-serving-cert\") pod \"route-controller-manager-7b6dd4d5b8-87rv2\" (UID: \"59a2bbd7-3b08-45cf-9a7c-542effc09ec2\") " pod="openshift-route-controller-manager/route-controller-manager-7b6dd4d5b8-87rv2" Mar 20 08:35:50.438407 master-0 kubenswrapper[7476]: I0320 08:35:50.438353 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/59a2bbd7-3b08-45cf-9a7c-542effc09ec2-client-ca\") pod \"route-controller-manager-7b6dd4d5b8-87rv2\" (UID: \"59a2bbd7-3b08-45cf-9a7c-542effc09ec2\") " pod="openshift-route-controller-manager/route-controller-manager-7b6dd4d5b8-87rv2" Mar 20 08:35:50.438639 master-0 kubenswrapper[7476]: I0320 08:35:50.438570 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59a2bbd7-3b08-45cf-9a7c-542effc09ec2-config\") pod \"route-controller-manager-7b6dd4d5b8-87rv2\" (UID: \"59a2bbd7-3b08-45cf-9a7c-542effc09ec2\") " 
pod="openshift-route-controller-manager/route-controller-manager-7b6dd4d5b8-87rv2" Mar 20 08:35:50.445444 master-0 kubenswrapper[7476]: I0320 08:35:50.445380 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59a2bbd7-3b08-45cf-9a7c-542effc09ec2-serving-cert\") pod \"route-controller-manager-7b6dd4d5b8-87rv2\" (UID: \"59a2bbd7-3b08-45cf-9a7c-542effc09ec2\") " pod="openshift-route-controller-manager/route-controller-manager-7b6dd4d5b8-87rv2" Mar 20 08:35:50.491400 master-0 kubenswrapper[7476]: I0320 08:35:50.491305 7476 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b966fd84d-tvkk6"] Mar 20 08:35:50.495173 master-0 kubenswrapper[7476]: I0320 08:35:50.495099 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b6dd4d5b8-87rv2"] Mar 20 08:35:50.696866 master-0 kubenswrapper[7476]: I0320 08:35:50.696802 7476 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b966fd84d-tvkk6"] Mar 20 08:35:50.703767 master-0 kubenswrapper[7476]: I0320 08:35:50.703698 7476 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-5595498c49-hrfrr"] Mar 20 08:35:50.704652 master-0 kubenswrapper[7476]: I0320 08:35:50.704578 7476 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-5595498c49-hrfrr" Mar 20 08:35:50.710454 master-0 kubenswrapper[7476]: I0320 08:35:50.708427 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Mar 20 08:35:50.710454 master-0 kubenswrapper[7476]: I0320 08:35:50.708801 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Mar 20 08:35:50.710454 master-0 kubenswrapper[7476]: I0320 08:35:50.709051 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Mar 20 08:35:50.710454 master-0 kubenswrapper[7476]: I0320 08:35:50.709349 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Mar 20 08:35:50.713672 master-0 kubenswrapper[7476]: I0320 08:35:50.713170 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Mar 20 08:35:50.713672 master-0 kubenswrapper[7476]: I0320 08:35:50.713451 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Mar 20 08:35:50.713672 master-0 kubenswrapper[7476]: I0320 08:35:50.713657 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Mar 20 08:35:50.713907 master-0 kubenswrapper[7476]: I0320 08:35:50.713681 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 20 08:35:50.737307 master-0 kubenswrapper[7476]: I0320 08:35:50.737176 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r687z\" (UniqueName: \"kubernetes.io/projected/59a2bbd7-3b08-45cf-9a7c-542effc09ec2-kube-api-access-r687z\") pod \"route-controller-manager-7b6dd4d5b8-87rv2\" (UID: \"59a2bbd7-3b08-45cf-9a7c-542effc09ec2\") " 
pod="openshift-route-controller-manager/route-controller-manager-7b6dd4d5b8-87rv2" Mar 20 08:35:50.765053 master-0 kubenswrapper[7476]: I0320 08:35:50.751185 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-5595498c49-hrfrr"] Mar 20 08:35:50.856032 master-0 kubenswrapper[7476]: I0320 08:35:50.854877 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6a6a187d-5b25-4d63-939e-c04e07369371-audit-dir\") pod \"apiserver-5595498c49-hrfrr\" (UID: \"6a6a187d-5b25-4d63-939e-c04e07369371\") " pod="openshift-oauth-apiserver/apiserver-5595498c49-hrfrr" Mar 20 08:35:50.856032 master-0 kubenswrapper[7476]: I0320 08:35:50.855376 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a6a187d-5b25-4d63-939e-c04e07369371-trusted-ca-bundle\") pod \"apiserver-5595498c49-hrfrr\" (UID: \"6a6a187d-5b25-4d63-939e-c04e07369371\") " pod="openshift-oauth-apiserver/apiserver-5595498c49-hrfrr" Mar 20 08:35:50.856032 master-0 kubenswrapper[7476]: I0320 08:35:50.855442 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6a6a187d-5b25-4d63-939e-c04e07369371-encryption-config\") pod \"apiserver-5595498c49-hrfrr\" (UID: \"6a6a187d-5b25-4d63-939e-c04e07369371\") " pod="openshift-oauth-apiserver/apiserver-5595498c49-hrfrr" Mar 20 08:35:50.856032 master-0 kubenswrapper[7476]: I0320 08:35:50.855482 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6a6a187d-5b25-4d63-939e-c04e07369371-etcd-serving-ca\") pod \"apiserver-5595498c49-hrfrr\" (UID: \"6a6a187d-5b25-4d63-939e-c04e07369371\") " 
pod="openshift-oauth-apiserver/apiserver-5595498c49-hrfrr" Mar 20 08:35:50.856032 master-0 kubenswrapper[7476]: I0320 08:35:50.855520 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-br4bc\" (UniqueName: \"kubernetes.io/projected/6a6a187d-5b25-4d63-939e-c04e07369371-kube-api-access-br4bc\") pod \"apiserver-5595498c49-hrfrr\" (UID: \"6a6a187d-5b25-4d63-939e-c04e07369371\") " pod="openshift-oauth-apiserver/apiserver-5595498c49-hrfrr" Mar 20 08:35:50.856032 master-0 kubenswrapper[7476]: I0320 08:35:50.855579 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a6a187d-5b25-4d63-939e-c04e07369371-serving-cert\") pod \"apiserver-5595498c49-hrfrr\" (UID: \"6a6a187d-5b25-4d63-939e-c04e07369371\") " pod="openshift-oauth-apiserver/apiserver-5595498c49-hrfrr" Mar 20 08:35:50.856032 master-0 kubenswrapper[7476]: I0320 08:35:50.855643 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6a6a187d-5b25-4d63-939e-c04e07369371-audit-policies\") pod \"apiserver-5595498c49-hrfrr\" (UID: \"6a6a187d-5b25-4d63-939e-c04e07369371\") " pod="openshift-oauth-apiserver/apiserver-5595498c49-hrfrr" Mar 20 08:35:50.856032 master-0 kubenswrapper[7476]: I0320 08:35:50.855667 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6a6a187d-5b25-4d63-939e-c04e07369371-etcd-client\") pod \"apiserver-5595498c49-hrfrr\" (UID: \"6a6a187d-5b25-4d63-939e-c04e07369371\") " pod="openshift-oauth-apiserver/apiserver-5595498c49-hrfrr" Mar 20 08:35:50.856032 master-0 kubenswrapper[7476]: I0320 08:35:50.855709 7476 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/44d1bab1-22a1-45f4-b722-afef91f56a31-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 20 08:35:50.856032 master-0 kubenswrapper[7476]: I0320 08:35:50.855726 7476 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/44d1bab1-22a1-45f4-b722-afef91f56a31-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 20 08:35:50.873394 master-0 kubenswrapper[7476]: I0320 08:35:50.860368 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b6dd4d5b8-87rv2" Mar 20 08:35:50.962612 master-0 kubenswrapper[7476]: I0320 08:35:50.956719 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6a6a187d-5b25-4d63-939e-c04e07369371-encryption-config\") pod \"apiserver-5595498c49-hrfrr\" (UID: \"6a6a187d-5b25-4d63-939e-c04e07369371\") " pod="openshift-oauth-apiserver/apiserver-5595498c49-hrfrr" Mar 20 08:35:50.962612 master-0 kubenswrapper[7476]: I0320 08:35:50.960170 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6a6a187d-5b25-4d63-939e-c04e07369371-etcd-serving-ca\") pod \"apiserver-5595498c49-hrfrr\" (UID: \"6a6a187d-5b25-4d63-939e-c04e07369371\") " pod="openshift-oauth-apiserver/apiserver-5595498c49-hrfrr" Mar 20 08:35:50.962612 master-0 kubenswrapper[7476]: I0320 08:35:50.960257 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-br4bc\" (UniqueName: \"kubernetes.io/projected/6a6a187d-5b25-4d63-939e-c04e07369371-kube-api-access-br4bc\") pod \"apiserver-5595498c49-hrfrr\" (UID: \"6a6a187d-5b25-4d63-939e-c04e07369371\") " pod="openshift-oauth-apiserver/apiserver-5595498c49-hrfrr" Mar 20 08:35:50.962612 master-0 kubenswrapper[7476]: I0320 08:35:50.960383 7476 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a6a187d-5b25-4d63-939e-c04e07369371-serving-cert\") pod \"apiserver-5595498c49-hrfrr\" (UID: \"6a6a187d-5b25-4d63-939e-c04e07369371\") " pod="openshift-oauth-apiserver/apiserver-5595498c49-hrfrr" Mar 20 08:35:50.962612 master-0 kubenswrapper[7476]: I0320 08:35:50.960460 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6a6a187d-5b25-4d63-939e-c04e07369371-etcd-client\") pod \"apiserver-5595498c49-hrfrr\" (UID: \"6a6a187d-5b25-4d63-939e-c04e07369371\") " pod="openshift-oauth-apiserver/apiserver-5595498c49-hrfrr" Mar 20 08:35:50.962612 master-0 kubenswrapper[7476]: I0320 08:35:50.960487 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6a6a187d-5b25-4d63-939e-c04e07369371-audit-policies\") pod \"apiserver-5595498c49-hrfrr\" (UID: \"6a6a187d-5b25-4d63-939e-c04e07369371\") " pod="openshift-oauth-apiserver/apiserver-5595498c49-hrfrr" Mar 20 08:35:50.962612 master-0 kubenswrapper[7476]: I0320 08:35:50.960538 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6a6a187d-5b25-4d63-939e-c04e07369371-audit-dir\") pod \"apiserver-5595498c49-hrfrr\" (UID: \"6a6a187d-5b25-4d63-939e-c04e07369371\") " pod="openshift-oauth-apiserver/apiserver-5595498c49-hrfrr" Mar 20 08:35:50.962612 master-0 kubenswrapper[7476]: I0320 08:35:50.960567 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a6a187d-5b25-4d63-939e-c04e07369371-trusted-ca-bundle\") pod \"apiserver-5595498c49-hrfrr\" (UID: \"6a6a187d-5b25-4d63-939e-c04e07369371\") " pod="openshift-oauth-apiserver/apiserver-5595498c49-hrfrr" Mar 20 08:35:50.962612 master-0 kubenswrapper[7476]: I0320 
08:35:50.961835 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6a6a187d-5b25-4d63-939e-c04e07369371-audit-dir\") pod \"apiserver-5595498c49-hrfrr\" (UID: \"6a6a187d-5b25-4d63-939e-c04e07369371\") " pod="openshift-oauth-apiserver/apiserver-5595498c49-hrfrr" Mar 20 08:35:50.962612 master-0 kubenswrapper[7476]: I0320 08:35:50.961999 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6a6a187d-5b25-4d63-939e-c04e07369371-audit-policies\") pod \"apiserver-5595498c49-hrfrr\" (UID: \"6a6a187d-5b25-4d63-939e-c04e07369371\") " pod="openshift-oauth-apiserver/apiserver-5595498c49-hrfrr" Mar 20 08:35:50.962612 master-0 kubenswrapper[7476]: I0320 08:35:50.962305 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6a6a187d-5b25-4d63-939e-c04e07369371-etcd-serving-ca\") pod \"apiserver-5595498c49-hrfrr\" (UID: \"6a6a187d-5b25-4d63-939e-c04e07369371\") " pod="openshift-oauth-apiserver/apiserver-5595498c49-hrfrr" Mar 20 08:35:50.962612 master-0 kubenswrapper[7476]: I0320 08:35:50.962599 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a6a187d-5b25-4d63-939e-c04e07369371-trusted-ca-bundle\") pod \"apiserver-5595498c49-hrfrr\" (UID: \"6a6a187d-5b25-4d63-939e-c04e07369371\") " pod="openshift-oauth-apiserver/apiserver-5595498c49-hrfrr" Mar 20 08:35:50.980509 master-0 kubenswrapper[7476]: I0320 08:35:50.963911 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6a6a187d-5b25-4d63-939e-c04e07369371-etcd-client\") pod \"apiserver-5595498c49-hrfrr\" (UID: \"6a6a187d-5b25-4d63-939e-c04e07369371\") " pod="openshift-oauth-apiserver/apiserver-5595498c49-hrfrr" Mar 20 08:35:50.980509 master-0 
kubenswrapper[7476]: I0320 08:35:50.971381 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6a6a187d-5b25-4d63-939e-c04e07369371-encryption-config\") pod \"apiserver-5595498c49-hrfrr\" (UID: \"6a6a187d-5b25-4d63-939e-c04e07369371\") " pod="openshift-oauth-apiserver/apiserver-5595498c49-hrfrr" Mar 20 08:35:50.980509 master-0 kubenswrapper[7476]: I0320 08:35:50.977359 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a6a187d-5b25-4d63-939e-c04e07369371-serving-cert\") pod \"apiserver-5595498c49-hrfrr\" (UID: \"6a6a187d-5b25-4d63-939e-c04e07369371\") " pod="openshift-oauth-apiserver/apiserver-5595498c49-hrfrr" Mar 20 08:35:50.991343 master-0 kubenswrapper[7476]: I0320 08:35:50.991219 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-br4bc\" (UniqueName: \"kubernetes.io/projected/6a6a187d-5b25-4d63-939e-c04e07369371-kube-api-access-br4bc\") pod \"apiserver-5595498c49-hrfrr\" (UID: \"6a6a187d-5b25-4d63-939e-c04e07369371\") " pod="openshift-oauth-apiserver/apiserver-5595498c49-hrfrr" Mar 20 08:35:51.007254 master-0 kubenswrapper[7476]: I0320 08:35:51.007211 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-64b65cddf5-gx7h7"] Mar 20 08:35:51.098997 master-0 kubenswrapper[7476]: I0320 08:35:51.098932 7476 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-5595498c49-hrfrr" Mar 20 08:35:51.105667 master-0 kubenswrapper[7476]: I0320 08:35:51.104855 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5d9c65fcf4-x9fg6"] Mar 20 08:35:51.136252 master-0 kubenswrapper[7476]: I0320 08:35:51.126115 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-1-master-0"] Mar 20 08:35:51.243677 master-0 kubenswrapper[7476]: I0320 08:35:51.243381 7476 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44d1bab1-22a1-45f4-b722-afef91f56a31" path="/var/lib/kubelet/pods/44d1bab1-22a1-45f4-b722-afef91f56a31/volumes" Mar 20 08:35:51.244811 master-0 kubenswrapper[7476]: I0320 08:35:51.244010 7476 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76e0afae-e3d2-4eb1-825a-a8e5498e1d5c" path="/var/lib/kubelet/pods/76e0afae-e3d2-4eb1-825a-a8e5498e1d5c/volumes" Mar 20 08:35:51.298141 master-0 kubenswrapper[7476]: I0320 08:35:51.297917 7476 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-node-tuning-operator/tuned-zgm52"] Mar 20 08:35:51.299569 master-0 kubenswrapper[7476]: I0320 08:35:51.298705 7476 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-zgm52" Mar 20 08:35:51.331156 master-0 kubenswrapper[7476]: I0320 08:35:51.329026 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b6dd4d5b8-87rv2"] Mar 20 08:35:51.402296 master-0 kubenswrapper[7476]: I0320 08:35:51.401172 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-5595498c49-hrfrr"] Mar 20 08:35:51.501748 master-0 kubenswrapper[7476]: I0320 08:35:51.501154 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777-var-lib-kubelet\") pod \"tuned-zgm52\" (UID: \"97ad1db7-0bf9-4faf-9fa5-0f3df7dab777\") " pod="openshift-cluster-node-tuning-operator/tuned-zgm52" Mar 20 08:35:51.501748 master-0 kubenswrapper[7476]: I0320 08:35:51.501230 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777-etc-systemd\") pod \"tuned-zgm52\" (UID: \"97ad1db7-0bf9-4faf-9fa5-0f3df7dab777\") " pod="openshift-cluster-node-tuning-operator/tuned-zgm52" Mar 20 08:35:51.501748 master-0 kubenswrapper[7476]: I0320 08:35:51.501345 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5wnd\" (UniqueName: \"kubernetes.io/projected/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777-kube-api-access-w5wnd\") pod \"tuned-zgm52\" (UID: \"97ad1db7-0bf9-4faf-9fa5-0f3df7dab777\") " pod="openshift-cluster-node-tuning-operator/tuned-zgm52" Mar 20 08:35:51.501748 master-0 kubenswrapper[7476]: I0320 08:35:51.501369 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: 
\"kubernetes.io/host-path/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777-etc-sysconfig\") pod \"tuned-zgm52\" (UID: \"97ad1db7-0bf9-4faf-9fa5-0f3df7dab777\") " pod="openshift-cluster-node-tuning-operator/tuned-zgm52" Mar 20 08:35:51.501748 master-0 kubenswrapper[7476]: I0320 08:35:51.501391 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777-run\") pod \"tuned-zgm52\" (UID: \"97ad1db7-0bf9-4faf-9fa5-0f3df7dab777\") " pod="openshift-cluster-node-tuning-operator/tuned-zgm52" Mar 20 08:35:51.501748 master-0 kubenswrapper[7476]: I0320 08:35:51.501441 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777-etc-modprobe-d\") pod \"tuned-zgm52\" (UID: \"97ad1db7-0bf9-4faf-9fa5-0f3df7dab777\") " pod="openshift-cluster-node-tuning-operator/tuned-zgm52" Mar 20 08:35:51.501748 master-0 kubenswrapper[7476]: I0320 08:35:51.501458 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777-lib-modules\") pod \"tuned-zgm52\" (UID: \"97ad1db7-0bf9-4faf-9fa5-0f3df7dab777\") " pod="openshift-cluster-node-tuning-operator/tuned-zgm52" Mar 20 08:35:51.501748 master-0 kubenswrapper[7476]: I0320 08:35:51.501487 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777-host\") pod \"tuned-zgm52\" (UID: \"97ad1db7-0bf9-4faf-9fa5-0f3df7dab777\") " pod="openshift-cluster-node-tuning-operator/tuned-zgm52" Mar 20 08:35:51.501748 master-0 kubenswrapper[7476]: I0320 08:35:51.501510 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777-etc-tuned\") pod \"tuned-zgm52\" (UID: \"97ad1db7-0bf9-4faf-9fa5-0f3df7dab777\") " pod="openshift-cluster-node-tuning-operator/tuned-zgm52" Mar 20 08:35:51.501748 master-0 kubenswrapper[7476]: I0320 08:35:51.501531 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777-etc-sysctl-d\") pod \"tuned-zgm52\" (UID: \"97ad1db7-0bf9-4faf-9fa5-0f3df7dab777\") " pod="openshift-cluster-node-tuning-operator/tuned-zgm52" Mar 20 08:35:51.501748 master-0 kubenswrapper[7476]: I0320 08:35:51.501554 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777-sys\") pod \"tuned-zgm52\" (UID: \"97ad1db7-0bf9-4faf-9fa5-0f3df7dab777\") " pod="openshift-cluster-node-tuning-operator/tuned-zgm52" Mar 20 08:35:51.501748 master-0 kubenswrapper[7476]: I0320 08:35:51.501579 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777-tmp\") pod \"tuned-zgm52\" (UID: \"97ad1db7-0bf9-4faf-9fa5-0f3df7dab777\") " pod="openshift-cluster-node-tuning-operator/tuned-zgm52" Mar 20 08:35:51.501748 master-0 kubenswrapper[7476]: I0320 08:35:51.501603 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777-etc-kubernetes\") pod \"tuned-zgm52\" (UID: \"97ad1db7-0bf9-4faf-9fa5-0f3df7dab777\") " pod="openshift-cluster-node-tuning-operator/tuned-zgm52" Mar 20 08:35:51.501748 master-0 kubenswrapper[7476]: I0320 08:35:51.501626 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777-etc-sysctl-conf\") pod \"tuned-zgm52\" (UID: \"97ad1db7-0bf9-4faf-9fa5-0f3df7dab777\") " pod="openshift-cluster-node-tuning-operator/tuned-zgm52" Mar 20 08:35:51.602223 master-0 kubenswrapper[7476]: I0320 08:35:51.602159 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777-var-lib-kubelet\") pod \"tuned-zgm52\" (UID: \"97ad1db7-0bf9-4faf-9fa5-0f3df7dab777\") " pod="openshift-cluster-node-tuning-operator/tuned-zgm52" Mar 20 08:35:51.602223 master-0 kubenswrapper[7476]: I0320 08:35:51.602228 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777-etc-systemd\") pod \"tuned-zgm52\" (UID: \"97ad1db7-0bf9-4faf-9fa5-0f3df7dab777\") " pod="openshift-cluster-node-tuning-operator/tuned-zgm52" Mar 20 08:35:51.602501 master-0 kubenswrapper[7476]: I0320 08:35:51.602287 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5wnd\" (UniqueName: \"kubernetes.io/projected/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777-kube-api-access-w5wnd\") pod \"tuned-zgm52\" (UID: \"97ad1db7-0bf9-4faf-9fa5-0f3df7dab777\") " pod="openshift-cluster-node-tuning-operator/tuned-zgm52" Mar 20 08:35:51.602501 master-0 kubenswrapper[7476]: I0320 08:35:51.602304 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777-run\") pod \"tuned-zgm52\" (UID: \"97ad1db7-0bf9-4faf-9fa5-0f3df7dab777\") " pod="openshift-cluster-node-tuning-operator/tuned-zgm52" Mar 20 08:35:51.602501 master-0 kubenswrapper[7476]: I0320 08:35:51.602323 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777-etc-sysconfig\") pod \"tuned-zgm52\" (UID: \"97ad1db7-0bf9-4faf-9fa5-0f3df7dab777\") " pod="openshift-cluster-node-tuning-operator/tuned-zgm52" Mar 20 08:35:51.602501 master-0 kubenswrapper[7476]: I0320 08:35:51.602366 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777-etc-modprobe-d\") pod \"tuned-zgm52\" (UID: \"97ad1db7-0bf9-4faf-9fa5-0f3df7dab777\") " pod="openshift-cluster-node-tuning-operator/tuned-zgm52" Mar 20 08:35:51.602501 master-0 kubenswrapper[7476]: I0320 08:35:51.602381 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777-lib-modules\") pod \"tuned-zgm52\" (UID: \"97ad1db7-0bf9-4faf-9fa5-0f3df7dab777\") " pod="openshift-cluster-node-tuning-operator/tuned-zgm52" Mar 20 08:35:51.602501 master-0 kubenswrapper[7476]: I0320 08:35:51.602410 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777-etc-sysctl-d\") pod \"tuned-zgm52\" (UID: \"97ad1db7-0bf9-4faf-9fa5-0f3df7dab777\") " pod="openshift-cluster-node-tuning-operator/tuned-zgm52" Mar 20 08:35:51.602501 master-0 kubenswrapper[7476]: I0320 08:35:51.602425 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777-host\") pod \"tuned-zgm52\" (UID: \"97ad1db7-0bf9-4faf-9fa5-0f3df7dab777\") " pod="openshift-cluster-node-tuning-operator/tuned-zgm52" Mar 20 08:35:51.602501 master-0 kubenswrapper[7476]: I0320 08:35:51.602439 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: 
\"kubernetes.io/empty-dir/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777-etc-tuned\") pod \"tuned-zgm52\" (UID: \"97ad1db7-0bf9-4faf-9fa5-0f3df7dab777\") " pod="openshift-cluster-node-tuning-operator/tuned-zgm52" Mar 20 08:35:51.602501 master-0 kubenswrapper[7476]: I0320 08:35:51.602457 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777-sys\") pod \"tuned-zgm52\" (UID: \"97ad1db7-0bf9-4faf-9fa5-0f3df7dab777\") " pod="openshift-cluster-node-tuning-operator/tuned-zgm52" Mar 20 08:35:51.602501 master-0 kubenswrapper[7476]: I0320 08:35:51.602471 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777-tmp\") pod \"tuned-zgm52\" (UID: \"97ad1db7-0bf9-4faf-9fa5-0f3df7dab777\") " pod="openshift-cluster-node-tuning-operator/tuned-zgm52" Mar 20 08:35:51.602501 master-0 kubenswrapper[7476]: I0320 08:35:51.602489 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777-etc-kubernetes\") pod \"tuned-zgm52\" (UID: \"97ad1db7-0bf9-4faf-9fa5-0f3df7dab777\") " pod="openshift-cluster-node-tuning-operator/tuned-zgm52" Mar 20 08:35:51.602501 master-0 kubenswrapper[7476]: I0320 08:35:51.602502 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777-etc-sysctl-conf\") pod \"tuned-zgm52\" (UID: \"97ad1db7-0bf9-4faf-9fa5-0f3df7dab777\") " pod="openshift-cluster-node-tuning-operator/tuned-zgm52" Mar 20 08:35:51.602835 master-0 kubenswrapper[7476]: I0320 08:35:51.602722 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: 
\"kubernetes.io/host-path/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777-etc-sysctl-conf\") pod \"tuned-zgm52\" (UID: \"97ad1db7-0bf9-4faf-9fa5-0f3df7dab777\") " pod="openshift-cluster-node-tuning-operator/tuned-zgm52" Mar 20 08:35:51.603983 master-0 kubenswrapper[7476]: I0320 08:35:51.603099 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777-var-lib-kubelet\") pod \"tuned-zgm52\" (UID: \"97ad1db7-0bf9-4faf-9fa5-0f3df7dab777\") " pod="openshift-cluster-node-tuning-operator/tuned-zgm52" Mar 20 08:35:51.603983 master-0 kubenswrapper[7476]: I0320 08:35:51.603460 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777-sys\") pod \"tuned-zgm52\" (UID: \"97ad1db7-0bf9-4faf-9fa5-0f3df7dab777\") " pod="openshift-cluster-node-tuning-operator/tuned-zgm52" Mar 20 08:35:51.603983 master-0 kubenswrapper[7476]: I0320 08:35:51.603514 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777-etc-systemd\") pod \"tuned-zgm52\" (UID: \"97ad1db7-0bf9-4faf-9fa5-0f3df7dab777\") " pod="openshift-cluster-node-tuning-operator/tuned-zgm52" Mar 20 08:35:51.603983 master-0 kubenswrapper[7476]: I0320 08:35:51.603562 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777-run\") pod \"tuned-zgm52\" (UID: \"97ad1db7-0bf9-4faf-9fa5-0f3df7dab777\") " pod="openshift-cluster-node-tuning-operator/tuned-zgm52" Mar 20 08:35:51.603983 master-0 kubenswrapper[7476]: I0320 08:35:51.603509 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777-etc-kubernetes\") pod \"tuned-zgm52\" (UID: 
\"97ad1db7-0bf9-4faf-9fa5-0f3df7dab777\") " pod="openshift-cluster-node-tuning-operator/tuned-zgm52" Mar 20 08:35:51.603983 master-0 kubenswrapper[7476]: I0320 08:35:51.603475 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777-lib-modules\") pod \"tuned-zgm52\" (UID: \"97ad1db7-0bf9-4faf-9fa5-0f3df7dab777\") " pod="openshift-cluster-node-tuning-operator/tuned-zgm52" Mar 20 08:35:51.603983 master-0 kubenswrapper[7476]: I0320 08:35:51.603521 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777-host\") pod \"tuned-zgm52\" (UID: \"97ad1db7-0bf9-4faf-9fa5-0f3df7dab777\") " pod="openshift-cluster-node-tuning-operator/tuned-zgm52" Mar 20 08:35:51.603983 master-0 kubenswrapper[7476]: I0320 08:35:51.603467 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777-etc-sysctl-d\") pod \"tuned-zgm52\" (UID: \"97ad1db7-0bf9-4faf-9fa5-0f3df7dab777\") " pod="openshift-cluster-node-tuning-operator/tuned-zgm52" Mar 20 08:35:51.603983 master-0 kubenswrapper[7476]: I0320 08:35:51.603640 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777-etc-sysconfig\") pod \"tuned-zgm52\" (UID: \"97ad1db7-0bf9-4faf-9fa5-0f3df7dab777\") " pod="openshift-cluster-node-tuning-operator/tuned-zgm52" Mar 20 08:35:51.603983 master-0 kubenswrapper[7476]: I0320 08:35:51.603675 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777-etc-modprobe-d\") pod \"tuned-zgm52\" (UID: \"97ad1db7-0bf9-4faf-9fa5-0f3df7dab777\") " 
pod="openshift-cluster-node-tuning-operator/tuned-zgm52" Mar 20 08:35:51.606778 master-0 kubenswrapper[7476]: I0320 08:35:51.606726 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-5595498c49-hrfrr" event={"ID":"6a6a187d-5b25-4d63-939e-c04e07369371","Type":"ContainerStarted","Data":"ef964aa716088965516a6b12f87facd648776f7eece032982375b00853e3a703"} Mar 20 08:35:51.611415 master-0 kubenswrapper[7476]: I0320 08:35:51.610221 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-dknxr" event={"ID":"22f85e98-eb36-46b2-ab5d-7c21e060cba5","Type":"ContainerStarted","Data":"bb58137a9975e369b5d22af63557b19c9ebf89c5b57408a1ff77493bf0b71c97"} Mar 20 08:35:51.611415 master-0 kubenswrapper[7476]: I0320 08:35:51.610336 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-dknxr" event={"ID":"22f85e98-eb36-46b2-ab5d-7c21e060cba5","Type":"ContainerStarted","Data":"e11ba7fccf3e3a03d9b7498dc0eb1bc10a9a5dcbb92c598146672eeafb4b1b79"} Mar 20 08:35:51.612131 master-0 kubenswrapper[7476]: I0320 08:35:51.611780 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777-etc-tuned\") pod \"tuned-zgm52\" (UID: \"97ad1db7-0bf9-4faf-9fa5-0f3df7dab777\") " pod="openshift-cluster-node-tuning-operator/tuned-zgm52" Mar 20 08:35:51.612517 master-0 kubenswrapper[7476]: I0320 08:35:51.612481 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777-tmp\") pod \"tuned-zgm52\" (UID: \"97ad1db7-0bf9-4faf-9fa5-0f3df7dab777\") " pod="openshift-cluster-node-tuning-operator/tuned-zgm52" Mar 20 08:35:51.615754 master-0 kubenswrapper[7476]: I0320 08:35:51.615292 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-cg8qr" event={"ID":"57189f7c-5987-457d-a299-0a6b9bcb3e24","Type":"ContainerStarted","Data":"e48d6d8f20331461db8cc13ac230338d41f64ea61a17d13477ff37631d86ccc0"}
Mar 20 08:35:51.622068 master-0 kubenswrapper[7476]: I0320 08:35:51.622012 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5wnd\" (UniqueName: \"kubernetes.io/projected/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777-kube-api-access-w5wnd\") pod \"tuned-zgm52\" (UID: \"97ad1db7-0bf9-4faf-9fa5-0f3df7dab777\") " pod="openshift-cluster-node-tuning-operator/tuned-zgm52"
Mar 20 08:35:51.626702 master-0 kubenswrapper[7476]: I0320 08:35:51.623720 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-gwtrl" event={"ID":"e9425526-9f51-4302-a19d-a8107f56c582","Type":"ContainerStarted","Data":"b03cf792be8c09113845ece36250dd906916c30b14e37c0df43505b61e6139fa"}
Mar 20 08:35:51.626702 master-0 kubenswrapper[7476]: I0320 08:35:51.624304 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-zgm52"
Mar 20 08:35:51.635745 master-0 kubenswrapper[7476]: I0320 08:35:51.635430 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-9c5679d8f-xfns6" event={"ID":"ff2dfe9d-2834-43cb-b093-0831b2b87131","Type":"ContainerStarted","Data":"916ee61bba3dc6046a4302aa344d164e5c62669611430d505dfe331ff6648b85"}
Mar 20 08:35:51.635745 master-0 kubenswrapper[7476]: I0320 08:35:51.635494 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-9c5679d8f-xfns6" event={"ID":"ff2dfe9d-2834-43cb-b093-0831b2b87131","Type":"ContainerStarted","Data":"34116c989e6200289799cf6a068e33d84cdd4a6aebaa76c424e05c0548acfce2"}
Mar 20 08:35:51.654246 master-0 kubenswrapper[7476]: I0320 08:35:51.654156 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-64b65cddf5-gx7h7" event={"ID":"ca56e37d-80ea-432b-a6d9-f4e904a40e10","Type":"ContainerStarted","Data":"39979795a082384fa347e48c6bcdc4249850e6dc951d407d07457e2b43d36f11"}
Mar 20 08:35:51.693633 master-0 kubenswrapper[7476]: I0320 08:35:51.690503 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d9c65fcf4-x9fg6" event={"ID":"0636bf2d-3ba2-4f2b-9f8d-da11f4507985","Type":"ContainerStarted","Data":"b8cef6d632932f41894e048ea12376ac52c931e5410c8eae315d28796ea0835f"}
Mar 20 08:35:51.705743 master-0 kubenswrapper[7476]: I0320 08:35:51.705699 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b6dd4d5b8-87rv2" event={"ID":"59a2bbd7-3b08-45cf-9a7c-542effc09ec2","Type":"ContainerStarted","Data":"896c3c181e0c584b66757229666574be4f2b845846f02e7d0e9eb6d3a1c7d8f2"}
Mar 20 08:35:51.714825 master-0 kubenswrapper[7476]: I0320 08:35:51.714781 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"169353ee-c927-4483-8976-b9ca08b0a6d1","Type":"ContainerStarted","Data":"c35a5738f2f9a6fb340b75e09b70d5c9961a967d646e1417a2634fd74ebeb167"}
Mar 20 08:35:51.714825 master-0 kubenswrapper[7476]: I0320 08:35:51.714828 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"169353ee-c927-4483-8976-b9ca08b0a6d1","Type":"ContainerStarted","Data":"eb298360c7626b678f9c8cf233db291ec09731cb94cf6c1ae69432ca7d42b080"}
Mar 20 08:35:51.716441 master-0 kubenswrapper[7476]: I0320 08:35:51.716415 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-56d8475767-jtqd4" event={"ID":"3776fdb6-25a1-4e3d-bdd1-437c69af3a55","Type":"ContainerStarted","Data":"cf5b5ccbf7753484bbfdedddf72e688b03e3f051328fd147e73fa02f3c9ead8d"}
Mar 20 08:35:51.718168 master-0 kubenswrapper[7476]: I0320 08:35:51.718134 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zxgdk" event={"ID":"6d26f719-43b9-4c1c-9a54-ff800177db68","Type":"ContainerStarted","Data":"1016c20b30300a724092253f38d19d884841e5634e7a9695b858976d92da0845"}
Mar 20 08:35:52.066043 master-0 kubenswrapper[7476]: I0320 08:35:52.063959 7476 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/installer-1-master-0" podStartSLOduration=9.063938464 podStartE2EDuration="9.063938464s" podCreationTimestamp="2026-03-20 08:35:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:35:51.881559183 +0000 UTC m=+32.850327729" watchObservedRunningTime="2026-03-20 08:35:52.063938464 +0000 UTC m=+33.032706990"
Mar 20 08:35:52.066043 master-0 kubenswrapper[7476]: I0320 08:35:52.065329 7476 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-gskz6"]
Mar 20 08:35:52.066043 master-0 kubenswrapper[7476]: I0320 08:35:52.066043 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gskz6"
Mar 20 08:35:52.070504 master-0 kubenswrapper[7476]: I0320 08:35:52.068441 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Mar 20 08:35:52.070504 master-0 kubenswrapper[7476]: I0320 08:35:52.069623 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Mar 20 08:35:52.074450 master-0 kubenswrapper[7476]: I0320 08:35:52.072598 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Mar 20 08:35:52.074450 master-0 kubenswrapper[7476]: I0320 08:35:52.074322 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Mar 20 08:35:52.090292 master-0 kubenswrapper[7476]: I0320 08:35:52.084730 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-gskz6"]
Mar 20 08:35:52.118043 master-0 kubenswrapper[7476]: I0320 08:35:52.113741 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/0e79950f-50a5-46ec-b836-7a35dcce2851-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-cgc9q\" (UID: \"0e79950f-50a5-46ec-b836-7a35dcce2851\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-cgc9q"
Mar 20 08:35:52.118043 master-0 kubenswrapper[7476]: I0320 08:35:52.113803 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/23003a2f-2053-47cc-8133-23eb886d4da0-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-j84r8\" (UID: \"23003a2f-2053-47cc-8133-23eb886d4da0\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-j84r8"
Mar 20 08:35:52.118043 master-0 kubenswrapper[7476]: I0320 08:35:52.113854 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/00350ac7-b40a-4459-b94c-a37d7b613645-metrics-certs\") pod \"network-metrics-daemon-nfrth\" (UID: \"00350ac7-b40a-4459-b94c-a37d7b613645\") " pod="openshift-multus/network-metrics-daemon-nfrth"
Mar 20 08:35:52.118043 master-0 kubenswrapper[7476]: I0320 08:35:52.113888 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7ab32efc-7cc5-4e36-9c1c-05efb19914e2-srv-cert\") pod \"olm-operator-5c9796789-t926t\" (UID: \"7ab32efc-7cc5-4e36-9c1c-05efb19914e2\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-t926t"
Mar 20 08:35:52.118043 master-0 kubenswrapper[7476]: I0320 08:35:52.113939 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/74bebf0b-6727-4959-8239-a9389e630524-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-vhrdf\" (UID: \"74bebf0b-6727-4959-8239-a9389e630524\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-vhrdf"
Mar 20 08:35:52.118043 master-0 kubenswrapper[7476]: I0320 08:35:52.113984 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f202273a-b111-46ce-b404-7e481d2c7ff9-cert\") pod \"cluster-baremetal-operator-6f69995874-b25f2\" (UID: \"f202273a-b111-46ce-b404-7e481d2c7ff9\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-b25f2"
Mar 20 08:35:52.118043 master-0 kubenswrapper[7476]: I0320 08:35:52.114016 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/5707066a-bd66-41bc-8cea-cff1630ab5ee-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-6vgt6\" (UID: \"5707066a-bd66-41bc-8cea-cff1630ab5ee\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-6vgt6"
Mar 20 08:35:52.118043 master-0 kubenswrapper[7476]: I0320 08:35:52.114042 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9ce482dc-d0ac-40bc-9058-a1cfdc81575e-srv-cert\") pod \"catalog-operator-68f85b4d6c-hdw98\" (UID: \"9ce482dc-d0ac-40bc-9058-a1cfdc81575e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hdw98"
Mar 20 08:35:52.122241 master-0 kubenswrapper[7476]: I0320 08:35:52.120547 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/23003a2f-2053-47cc-8133-23eb886d4da0-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-j84r8\" (UID: \"23003a2f-2053-47cc-8133-23eb886d4da0\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-j84r8"
Mar 20 08:35:52.125701 master-0 kubenswrapper[7476]: I0320 08:35:52.125333 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/74bebf0b-6727-4959-8239-a9389e630524-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-vhrdf\" (UID: \"74bebf0b-6727-4959-8239-a9389e630524\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-vhrdf"
Mar 20 08:35:52.125701 master-0 kubenswrapper[7476]: I0320 08:35:52.125632 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f202273a-b111-46ce-b404-7e481d2c7ff9-cert\") pod \"cluster-baremetal-operator-6f69995874-b25f2\" (UID: \"f202273a-b111-46ce-b404-7e481d2c7ff9\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-b25f2"
Mar 20 08:35:52.125986 master-0 kubenswrapper[7476]: I0320 08:35:52.125942 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9ce482dc-d0ac-40bc-9058-a1cfdc81575e-srv-cert\") pod \"catalog-operator-68f85b4d6c-hdw98\" (UID: \"9ce482dc-d0ac-40bc-9058-a1cfdc81575e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hdw98"
Mar 20 08:35:52.127159 master-0 kubenswrapper[7476]: I0320 08:35:52.127115 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/0e79950f-50a5-46ec-b836-7a35dcce2851-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-cgc9q\" (UID: \"0e79950f-50a5-46ec-b836-7a35dcce2851\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-cgc9q"
Mar 20 08:35:52.127771 master-0 kubenswrapper[7476]: I0320 08:35:52.127724 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/00350ac7-b40a-4459-b94c-a37d7b613645-metrics-certs\") pod \"network-metrics-daemon-nfrth\" (UID: \"00350ac7-b40a-4459-b94c-a37d7b613645\") " pod="openshift-multus/network-metrics-daemon-nfrth"
Mar 20 08:35:52.135041 master-0 kubenswrapper[7476]: I0320 08:35:52.134570 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/5707066a-bd66-41bc-8cea-cff1630ab5ee-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-6vgt6\" (UID: \"5707066a-bd66-41bc-8cea-cff1630ab5ee\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-6vgt6"
Mar 20 08:35:52.135817 master-0 kubenswrapper[7476]: I0320 08:35:52.135628 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7ab32efc-7cc5-4e36-9c1c-05efb19914e2-srv-cert\") pod \"olm-operator-5c9796789-t926t\" (UID: \"7ab32efc-7cc5-4e36-9c1c-05efb19914e2\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-t926t"
Mar 20 08:35:52.215425 master-0 kubenswrapper[7476]: I0320 08:35:52.214884 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/41253bde-5d09-4ff0-8e7c-4a21fe2b7106-metrics-tls\") pod \"dns-default-gskz6\" (UID: \"41253bde-5d09-4ff0-8e7c-4a21fe2b7106\") " pod="openshift-dns/dns-default-gskz6"
Mar 20 08:35:52.215425 master-0 kubenswrapper[7476]: I0320 08:35:52.214995 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbtnq\" (UniqueName: \"kubernetes.io/projected/41253bde-5d09-4ff0-8e7c-4a21fe2b7106-kube-api-access-dbtnq\") pod \"dns-default-gskz6\" (UID: \"41253bde-5d09-4ff0-8e7c-4a21fe2b7106\") " pod="openshift-dns/dns-default-gskz6"
Mar 20 08:35:52.215674 master-0 kubenswrapper[7476]: I0320 08:35:52.215439 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/41253bde-5d09-4ff0-8e7c-4a21fe2b7106-config-volume\") pod \"dns-default-gskz6\" (UID: \"41253bde-5d09-4ff0-8e7c-4a21fe2b7106\") " pod="openshift-dns/dns-default-gskz6"
Mar 20 08:35:52.276409 master-0 kubenswrapper[7476]: I0320 08:35:52.275520 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-cgc9q"
Mar 20 08:35:52.276409 master-0 kubenswrapper[7476]: I0320 08:35:52.275759 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-b25f2"
Mar 20 08:35:52.276409 master-0 kubenswrapper[7476]: I0320 08:35:52.276236 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-89ccd998f-j84r8"
Mar 20 08:35:52.277489 master-0 kubenswrapper[7476]: I0320 08:35:52.276479 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-t926t"
Mar 20 08:35:52.279439 master-0 kubenswrapper[7476]: I0320 08:35:52.279212 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-6vgt6"
Mar 20 08:35:52.289825 master-0 kubenswrapper[7476]: I0320 08:35:52.289763 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hdw98"
Mar 20 08:35:52.296344 master-0 kubenswrapper[7476]: I0320 08:35:52.293688 7476 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"]
Mar 20 08:35:52.296344 master-0 kubenswrapper[7476]: I0320 08:35:52.294430 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0"
Mar 20 08:35:52.296344 master-0 kubenswrapper[7476]: I0320 08:35:52.296329 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-nfrth"
Mar 20 08:35:52.298383 master-0 kubenswrapper[7476]: I0320 08:35:52.297003 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-vhrdf"
Mar 20 08:35:52.299899 master-0 kubenswrapper[7476]: I0320 08:35:52.299860 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt"
Mar 20 08:35:52.310600 master-0 kubenswrapper[7476]: I0320 08:35:52.310275 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"]
Mar 20 08:35:52.317442 master-0 kubenswrapper[7476]: I0320 08:35:52.317156 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbtnq\" (UniqueName: \"kubernetes.io/projected/41253bde-5d09-4ff0-8e7c-4a21fe2b7106-kube-api-access-dbtnq\") pod \"dns-default-gskz6\" (UID: \"41253bde-5d09-4ff0-8e7c-4a21fe2b7106\") " pod="openshift-dns/dns-default-gskz6"
Mar 20 08:35:52.317442 master-0 kubenswrapper[7476]: I0320 08:35:52.317226 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/41253bde-5d09-4ff0-8e7c-4a21fe2b7106-config-volume\") pod \"dns-default-gskz6\" (UID: \"41253bde-5d09-4ff0-8e7c-4a21fe2b7106\") " pod="openshift-dns/dns-default-gskz6"
Mar 20 08:35:52.317442 master-0 kubenswrapper[7476]: I0320 08:35:52.317285 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/41253bde-5d09-4ff0-8e7c-4a21fe2b7106-metrics-tls\") pod \"dns-default-gskz6\" (UID: \"41253bde-5d09-4ff0-8e7c-4a21fe2b7106\") " pod="openshift-dns/dns-default-gskz6"
Mar 20 08:35:52.317442 master-0 kubenswrapper[7476]: E0320 08:35:52.317381 7476 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found
Mar 20 08:35:52.317442 master-0 kubenswrapper[7476]: E0320 08:35:52.317435 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41253bde-5d09-4ff0-8e7c-4a21fe2b7106-metrics-tls podName:41253bde-5d09-4ff0-8e7c-4a21fe2b7106 nodeName:}" failed. No retries permitted until 2026-03-20 08:35:52.817416146 +0000 UTC m=+33.786184672 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/41253bde-5d09-4ff0-8e7c-4a21fe2b7106-metrics-tls") pod "dns-default-gskz6" (UID: "41253bde-5d09-4ff0-8e7c-4a21fe2b7106") : secret "dns-default-metrics-tls" not found
Mar 20 08:35:52.320726 master-0 kubenswrapper[7476]: I0320 08:35:52.318778 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/41253bde-5d09-4ff0-8e7c-4a21fe2b7106-config-volume\") pod \"dns-default-gskz6\" (UID: \"41253bde-5d09-4ff0-8e7c-4a21fe2b7106\") " pod="openshift-dns/dns-default-gskz6"
Mar 20 08:35:52.348259 master-0 kubenswrapper[7476]: I0320 08:35:52.345142 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbtnq\" (UniqueName: \"kubernetes.io/projected/41253bde-5d09-4ff0-8e7c-4a21fe2b7106-kube-api-access-dbtnq\") pod \"dns-default-gskz6\" (UID: \"41253bde-5d09-4ff0-8e7c-4a21fe2b7106\") " pod="openshift-dns/dns-default-gskz6"
Mar 20 08:35:52.418842 master-0 kubenswrapper[7476]: I0320 08:35:52.418781 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c80fea3f-9ac4-4060-bb90-19f9de724299-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"c80fea3f-9ac4-4060-bb90-19f9de724299\") " pod="openshift-kube-scheduler/installer-1-master-0"
Mar 20 08:35:52.418842 master-0 kubenswrapper[7476]: I0320 08:35:52.418831 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c80fea3f-9ac4-4060-bb90-19f9de724299-var-lock\") pod \"installer-1-master-0\" (UID: \"c80fea3f-9ac4-4060-bb90-19f9de724299\") " pod="openshift-kube-scheduler/installer-1-master-0"
Mar 20 08:35:52.418842 master-0 kubenswrapper[7476]: I0320 08:35:52.418859 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c80fea3f-9ac4-4060-bb90-19f9de724299-kube-api-access\") pod \"installer-1-master-0\" (UID: \"c80fea3f-9ac4-4060-bb90-19f9de724299\") " pod="openshift-kube-scheduler/installer-1-master-0"
Mar 20 08:35:52.464914 master-0 kubenswrapper[7476]: I0320 08:35:52.463481 7476 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-j7ngf"]
Mar 20 08:35:52.464914 master-0 kubenswrapper[7476]: I0320 08:35:52.464183 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-j7ngf"
Mar 20 08:35:52.527394 master-0 kubenswrapper[7476]: I0320 08:35:52.522187 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c80fea3f-9ac4-4060-bb90-19f9de724299-kube-api-access\") pod \"installer-1-master-0\" (UID: \"c80fea3f-9ac4-4060-bb90-19f9de724299\") " pod="openshift-kube-scheduler/installer-1-master-0"
Mar 20 08:35:52.527394 master-0 kubenswrapper[7476]: I0320 08:35:52.522317 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvxjl\" (UniqueName: \"kubernetes.io/projected/1ae4d0d7-67e6-4e0c-9265-8e48ac2d4cbf-kube-api-access-fvxjl\") pod \"node-resolver-j7ngf\" (UID: \"1ae4d0d7-67e6-4e0c-9265-8e48ac2d4cbf\") " pod="openshift-dns/node-resolver-j7ngf"
Mar 20 08:35:52.527394 master-0 kubenswrapper[7476]: I0320 08:35:52.522350 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c80fea3f-9ac4-4060-bb90-19f9de724299-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"c80fea3f-9ac4-4060-bb90-19f9de724299\") " pod="openshift-kube-scheduler/installer-1-master-0"
Mar 20 08:35:52.527394 master-0 kubenswrapper[7476]: I0320 08:35:52.522367 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/1ae4d0d7-67e6-4e0c-9265-8e48ac2d4cbf-hosts-file\") pod \"node-resolver-j7ngf\" (UID: \"1ae4d0d7-67e6-4e0c-9265-8e48ac2d4cbf\") " pod="openshift-dns/node-resolver-j7ngf"
Mar 20 08:35:52.527394 master-0 kubenswrapper[7476]: I0320 08:35:52.522389 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c80fea3f-9ac4-4060-bb90-19f9de724299-var-lock\") pod \"installer-1-master-0\" (UID: \"c80fea3f-9ac4-4060-bb90-19f9de724299\") " pod="openshift-kube-scheduler/installer-1-master-0"
Mar 20 08:35:52.527394 master-0 kubenswrapper[7476]: I0320 08:35:52.522458 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c80fea3f-9ac4-4060-bb90-19f9de724299-var-lock\") pod \"installer-1-master-0\" (UID: \"c80fea3f-9ac4-4060-bb90-19f9de724299\") " pod="openshift-kube-scheduler/installer-1-master-0"
Mar 20 08:35:52.527394 master-0 kubenswrapper[7476]: I0320 08:35:52.522758 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c80fea3f-9ac4-4060-bb90-19f9de724299-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"c80fea3f-9ac4-4060-bb90-19f9de724299\") " pod="openshift-kube-scheduler/installer-1-master-0"
Mar 20 08:35:52.575203 master-0 kubenswrapper[7476]: I0320 08:35:52.571469 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c80fea3f-9ac4-4060-bb90-19f9de724299-kube-api-access\") pod \"installer-1-master-0\" (UID: \"c80fea3f-9ac4-4060-bb90-19f9de724299\") " pod="openshift-kube-scheduler/installer-1-master-0"
Mar 20 08:35:52.633231 master-0 kubenswrapper[7476]: I0320 08:35:52.630561 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0"
Mar 20 08:35:52.633231 master-0 kubenswrapper[7476]: I0320 08:35:52.631480 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-cgc9q"]
Mar 20 08:35:52.633231 master-0 kubenswrapper[7476]: I0320 08:35:52.631804 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvxjl\" (UniqueName: \"kubernetes.io/projected/1ae4d0d7-67e6-4e0c-9265-8e48ac2d4cbf-kube-api-access-fvxjl\") pod \"node-resolver-j7ngf\" (UID: \"1ae4d0d7-67e6-4e0c-9265-8e48ac2d4cbf\") " pod="openshift-dns/node-resolver-j7ngf"
Mar 20 08:35:52.633231 master-0 kubenswrapper[7476]: I0320 08:35:52.631839 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/1ae4d0d7-67e6-4e0c-9265-8e48ac2d4cbf-hosts-file\") pod \"node-resolver-j7ngf\" (UID: \"1ae4d0d7-67e6-4e0c-9265-8e48ac2d4cbf\") " pod="openshift-dns/node-resolver-j7ngf"
Mar 20 08:35:52.633231 master-0 kubenswrapper[7476]: I0320 08:35:52.632056 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/1ae4d0d7-67e6-4e0c-9265-8e48ac2d4cbf-hosts-file\") pod \"node-resolver-j7ngf\" (UID: \"1ae4d0d7-67e6-4e0c-9265-8e48ac2d4cbf\") " pod="openshift-dns/node-resolver-j7ngf"
Mar 20 08:35:52.678702 master-0 kubenswrapper[7476]: I0320 08:35:52.678408 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvxjl\" (UniqueName: \"kubernetes.io/projected/1ae4d0d7-67e6-4e0c-9265-8e48ac2d4cbf-kube-api-access-fvxjl\") pod \"node-resolver-j7ngf\" (UID: \"1ae4d0d7-67e6-4e0c-9265-8e48ac2d4cbf\") " pod="openshift-dns/node-resolver-j7ngf"
Mar 20 08:35:52.729877 master-0 kubenswrapper[7476]: I0320 08:35:52.729679 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-zgm52" event={"ID":"97ad1db7-0bf9-4faf-9fa5-0f3df7dab777","Type":"ContainerStarted","Data":"01e8cf94507b9386c5036a989e6960cf6155ad61352527634f11a8530a65c542"}
Mar 20 08:35:52.729877 master-0 kubenswrapper[7476]: I0320 08:35:52.729767 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-zgm52" event={"ID":"97ad1db7-0bf9-4faf-9fa5-0f3df7dab777","Type":"ContainerStarted","Data":"12a5bcfb40c6199c579bd08c62b8bf6bb5bfdc6c365125f24ebc7113f94fcd35"}
Mar 20 08:35:52.754643 master-0 kubenswrapper[7476]: I0320 08:35:52.754565 7476 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-node-tuning-operator/tuned-zgm52" podStartSLOduration=1.7545388100000001 podStartE2EDuration="1.75453881s" podCreationTimestamp="2026-03-20 08:35:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:35:52.750066554 +0000 UTC m=+33.718835080" watchObservedRunningTime="2026-03-20 08:35:52.75453881 +0000 UTC m=+33.723307326"
Mar 20 08:35:52.825331 master-0 kubenswrapper[7476]: I0320 08:35:52.824626 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-j7ngf"
Mar 20 08:35:52.844130 master-0 kubenswrapper[7476]: I0320 08:35:52.844063 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/41253bde-5d09-4ff0-8e7c-4a21fe2b7106-metrics-tls\") pod \"dns-default-gskz6\" (UID: \"41253bde-5d09-4ff0-8e7c-4a21fe2b7106\") " pod="openshift-dns/dns-default-gskz6"
Mar 20 08:35:52.861743 master-0 kubenswrapper[7476]: I0320 08:35:52.858978 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/41253bde-5d09-4ff0-8e7c-4a21fe2b7106-metrics-tls\") pod \"dns-default-gskz6\" (UID: \"41253bde-5d09-4ff0-8e7c-4a21fe2b7106\") " pod="openshift-dns/dns-default-gskz6"
Mar 20 08:35:52.918693 master-0 kubenswrapper[7476]: W0320 08:35:52.909550 7476 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ae4d0d7_67e6_4e0c_9265_8e48ac2d4cbf.slice/crio-7b1754d73309bcf271978edbc6de885b4c5c9259799d13505300a0b3d8fb40d5 WatchSource:0}: Error finding container 7b1754d73309bcf271978edbc6de885b4c5c9259799d13505300a0b3d8fb40d5: Status 404 returned error can't find the container with id 7b1754d73309bcf271978edbc6de885b4c5c9259799d13505300a0b3d8fb40d5
Mar 20 08:35:52.967414 master-0 kubenswrapper[7476]: I0320 08:35:52.967356 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-5dbbb8b86f-vhrdf"]
Mar 20 08:35:53.016978 master-0 kubenswrapper[7476]: I0320 08:35:53.016815 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-gskz6"
Mar 20 08:35:53.022364 master-0 kubenswrapper[7476]: I0320 08:35:53.022165 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-nfrth"]
Mar 20 08:35:53.060432 master-0 kubenswrapper[7476]: I0320 08:35:53.060338 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hdw98"]
Mar 20 08:35:53.076232 master-0 kubenswrapper[7476]: I0320 08:35:53.076184 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"]
Mar 20 08:35:53.086255 master-0 kubenswrapper[7476]: W0320 08:35:53.085300 7476 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ce482dc_d0ac_40bc_9058_a1cfdc81575e.slice/crio-c6de2a3b0d9d7c8a3099b56864fbf63ad49e41112ad7b49c1d03c4402aae817a WatchSource:0}: Error finding container c6de2a3b0d9d7c8a3099b56864fbf63ad49e41112ad7b49c1d03c4402aae817a: Status 404 returned error can't find the container with id c6de2a3b0d9d7c8a3099b56864fbf63ad49e41112ad7b49c1d03c4402aae817a
Mar 20 08:35:53.091186 master-0 kubenswrapper[7476]: I0320 08:35:53.090730 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5c9796789-t926t"]
Mar 20 08:35:53.097509 master-0 kubenswrapper[7476]: W0320 08:35:53.097435 7476 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ab32efc_7cc5_4e36_9c1c_05efb19914e2.slice/crio-a48f9dbca67b195cdbe5106389856adfd54422aed83fc92bc09057a87eaa2faf WatchSource:0}: Error finding container a48f9dbca67b195cdbe5106389856adfd54422aed83fc92bc09057a87eaa2faf: Status 404 returned error can't find the container with id a48f9dbca67b195cdbe5106389856adfd54422aed83fc92bc09057a87eaa2faf
Mar 20 08:35:53.109623 master-0 kubenswrapper[7476]: I0320 08:35:53.109558 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-58845fbb57-6vgt6"]
Mar 20 08:35:53.121936 master-0 kubenswrapper[7476]: W0320 08:35:53.121868 7476 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5707066a_bd66_41bc_8cea_cff1630ab5ee.slice/crio-33917a2945cfdd96fc5917acad69a7843047715ba145c81978cea2bef30f460e WatchSource:0}: Error finding container 33917a2945cfdd96fc5917acad69a7843047715ba145c81978cea2bef30f460e: Status 404 returned error can't find the container with id 33917a2945cfdd96fc5917acad69a7843047715ba145c81978cea2bef30f460e
Mar 20 08:35:53.229567 master-0 kubenswrapper[7476]: I0320 08:35:53.228132 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-6f69995874-b25f2"]
Mar 20 08:35:53.250819 master-0 kubenswrapper[7476]: I0320 08:35:53.250769 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-89ccd998f-j84r8"]
Mar 20 08:35:53.252199 master-0 kubenswrapper[7476]: W0320 08:35:53.252143 7476 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf202273a_b111_46ce_b404_7e481d2c7ff9.slice/crio-60a1d6091214ba0d82b66a2af63314a1cb99c1cda6a15d65a6539891ce5e3510 WatchSource:0}: Error finding container 60a1d6091214ba0d82b66a2af63314a1cb99c1cda6a15d65a6539891ce5e3510: Status 404 returned error can't find the container with id 60a1d6091214ba0d82b66a2af63314a1cb99c1cda6a15d65a6539891ce5e3510
Mar 20 08:35:53.284966 master-0 kubenswrapper[7476]: W0320 08:35:53.282146 7476 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod23003a2f_2053_47cc_8133_23eb886d4da0.slice/crio-4aa19d8b0c30c05ccc496b8ab2be76d947099982ef9343125f8b5117bc386c97 WatchSource:0}: Error finding container 4aa19d8b0c30c05ccc496b8ab2be76d947099982ef9343125f8b5117bc386c97: Status 404 returned error can't find the container with id 4aa19d8b0c30c05ccc496b8ab2be76d947099982ef9343125f8b5117bc386c97
Mar 20 08:35:53.314923 master-0 kubenswrapper[7476]: I0320 08:35:53.310011 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-gskz6"]
Mar 20 08:35:53.319917 master-0 kubenswrapper[7476]: W0320 08:35:53.319865 7476 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41253bde_5d09_4ff0_8e7c_4a21fe2b7106.slice/crio-a7182dd72430d58b49f5e018c12acac4da1770843a5e54cf2decb77fe298b875 WatchSource:0}: Error finding container a7182dd72430d58b49f5e018c12acac4da1770843a5e54cf2decb77fe298b875: Status 404 returned error can't find the container with id a7182dd72430d58b49f5e018c12acac4da1770843a5e54cf2decb77fe298b875
Mar 20 08:35:53.744874 master-0 kubenswrapper[7476]: I0320 08:35:53.744811 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-vhrdf" event={"ID":"74bebf0b-6727-4959-8239-a9389e630524","Type":"ContainerStarted","Data":"b1a6bfe0069db4370471806f444b8cbb38ac33f0aab60a3239aafba8901aaf7e"}
Mar 20 08:35:53.750164 master-0 kubenswrapper[7476]: I0320 08:35:53.749797 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-j7ngf" event={"ID":"1ae4d0d7-67e6-4e0c-9265-8e48ac2d4cbf","Type":"ContainerStarted","Data":"8411d2dd0c86e582653139cf6127982eee541685df953056b9260c08a6ac30e6"}
Mar 20 08:35:53.750164 master-0 kubenswrapper[7476]: I0320 08:35:53.749868 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-j7ngf" event={"ID":"1ae4d0d7-67e6-4e0c-9265-8e48ac2d4cbf","Type":"ContainerStarted","Data":"7b1754d73309bcf271978edbc6de885b4c5c9259799d13505300a0b3d8fb40d5"}
Mar 20 08:35:53.754253 master-0 kubenswrapper[7476]: I0320 08:35:53.754084 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-t926t" event={"ID":"7ab32efc-7cc5-4e36-9c1c-05efb19914e2","Type":"ContainerStarted","Data":"a48f9dbca67b195cdbe5106389856adfd54422aed83fc92bc09057a87eaa2faf"}
Mar 20 08:35:53.763937 master-0 kubenswrapper[7476]: I0320 08:35:53.763772 7476 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-j7ngf" podStartSLOduration=1.763757322 podStartE2EDuration="1.763757322s" podCreationTimestamp="2026-03-20 08:35:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:35:53.763026534 +0000 UTC m=+34.731795080" watchObservedRunningTime="2026-03-20 08:35:53.763757322 +0000 UTC m=+34.732525848"
Mar 20 08:35:53.766317 master-0 kubenswrapper[7476]: I0320 08:35:53.766246 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-b25f2" event={"ID":"f202273a-b111-46ce-b404-7e481d2c7ff9","Type":"ContainerStarted","Data":"60a1d6091214ba0d82b66a2af63314a1cb99c1cda6a15d65a6539891ce5e3510"}
Mar 20 08:35:53.770196 master-0 kubenswrapper[7476]: I0320 08:35:53.769715 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-cgc9q" event={"ID":"0e79950f-50a5-46ec-b836-7a35dcce2851","Type":"ContainerStarted","Data":"8c017564ccaf2dfeb210955c0086b315e3a3b5eaf1252770e0ae8f1b0562ed57"}
Mar 20 08:35:53.770196 master-0 kubenswrapper[7476]: I0320 08:35:53.769782 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-cgc9q" event={"ID":"0e79950f-50a5-46ec-b836-7a35dcce2851","Type":"ContainerStarted","Data":"f8f3e1fa6ad1dbd5474f44502cbcf37e1e64719e20d78c379498d77edb6fab10"}
Mar 20 08:35:53.772143 master-0 kubenswrapper[7476]: I0320 08:35:53.772099 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-gskz6" event={"ID":"41253bde-5d09-4ff0-8e7c-4a21fe2b7106","Type":"ContainerStarted","Data":"a7182dd72430d58b49f5e018c12acac4da1770843a5e54cf2decb77fe298b875"}
Mar 20 08:35:53.783423 master-0 kubenswrapper[7476]: I0320 08:35:53.783211 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-6vgt6" event={"ID":"5707066a-bd66-41bc-8cea-cff1630ab5ee","Type":"ContainerStarted","Data":"33917a2945cfdd96fc5917acad69a7843047715ba145c81978cea2bef30f460e"}
Mar 20 08:35:53.785440 master-0 kubenswrapper[7476]: I0320 08:35:53.785045 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hdw98" event={"ID":"9ce482dc-d0ac-40bc-9058-a1cfdc81575e","Type":"ContainerStarted","Data":"c6de2a3b0d9d7c8a3099b56864fbf63ad49e41112ad7b49c1d03c4402aae817a"}
Mar 20 08:35:53.790841 master-0 kubenswrapper[7476]: I0320 08:35:53.790778 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-89ccd998f-j84r8" event={"ID":"23003a2f-2053-47cc-8133-23eb886d4da0","Type":"ContainerStarted","Data":"4aa19d8b0c30c05ccc496b8ab2be76d947099982ef9343125f8b5117bc386c97"}
Mar 20 08:35:53.794984 master-0 kubenswrapper[7476]: I0320 08:35:53.794935 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-nfrth" event={"ID":"00350ac7-b40a-4459-b94c-a37d7b613645","Type":"ContainerStarted","Data":"aab851b1602b7dcc6e5620b34b9265b9ec9a6fe42b3748c9be972ac30f7ef4fd"}
Mar 20 08:35:53.800779 master-0 kubenswrapper[7476]: I0320 08:35:53.800729 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"c80fea3f-9ac4-4060-bb90-19f9de724299","Type":"ContainerStarted","Data":"17798884b9a2e50bed959f2a24ce2f0fe9b1568f4973ec8270b3b7b5e3bb3b54"}
Mar 20 
08:35:53.800862 master-0 kubenswrapper[7476]: I0320 08:35:53.800786 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"c80fea3f-9ac4-4060-bb90-19f9de724299","Type":"ContainerStarted","Data":"3ea9cee0c994dbe6e138849417de7bc9a547b0875e9ae330a4c86ebfb3de2653"} Mar 20 08:35:53.820016 master-0 kubenswrapper[7476]: I0320 08:35:53.819901 7476 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-1-master-0" podStartSLOduration=1.819839414 podStartE2EDuration="1.819839414s" podCreationTimestamp="2026-03-20 08:35:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:35:53.817523314 +0000 UTC m=+34.786291840" watchObservedRunningTime="2026-03-20 08:35:53.819839414 +0000 UTC m=+34.788607940" Mar 20 08:35:58.602125 master-0 kubenswrapper[7476]: I0320 08:35:58.602060 7476 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"] Mar 20 08:35:58.603165 master-0 kubenswrapper[7476]: I0320 08:35:58.603138 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"] Mar 20 08:35:58.603275 master-0 kubenswrapper[7476]: I0320 08:35:58.603243 7476 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Mar 20 08:35:58.605699 master-0 kubenswrapper[7476]: I0320 08:35:58.605407 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 20 08:35:58.754097 master-0 kubenswrapper[7476]: I0320 08:35:58.752666 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cce21ae1-63de-49be-a027-084a101e650b-kube-api-access\") pod \"installer-1-master-0\" (UID: \"cce21ae1-63de-49be-a027-084a101e650b\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 20 08:35:58.754097 master-0 kubenswrapper[7476]: I0320 08:35:58.752758 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cce21ae1-63de-49be-a027-084a101e650b-var-lock\") pod \"installer-1-master-0\" (UID: \"cce21ae1-63de-49be-a027-084a101e650b\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 20 08:35:58.754097 master-0 kubenswrapper[7476]: I0320 08:35:58.752787 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cce21ae1-63de-49be-a027-084a101e650b-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"cce21ae1-63de-49be-a027-084a101e650b\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 20 08:35:58.854547 master-0 kubenswrapper[7476]: I0320 08:35:58.854413 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cce21ae1-63de-49be-a027-084a101e650b-var-lock\") pod \"installer-1-master-0\" (UID: \"cce21ae1-63de-49be-a027-084a101e650b\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 20 08:35:58.854547 master-0 kubenswrapper[7476]: I0320 08:35:58.854474 7476 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cce21ae1-63de-49be-a027-084a101e650b-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"cce21ae1-63de-49be-a027-084a101e650b\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 20 08:35:58.854547 master-0 kubenswrapper[7476]: I0320 08:35:58.854505 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cce21ae1-63de-49be-a027-084a101e650b-kube-api-access\") pod \"installer-1-master-0\" (UID: \"cce21ae1-63de-49be-a027-084a101e650b\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 20 08:35:58.854918 master-0 kubenswrapper[7476]: I0320 08:35:58.854887 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cce21ae1-63de-49be-a027-084a101e650b-var-lock\") pod \"installer-1-master-0\" (UID: \"cce21ae1-63de-49be-a027-084a101e650b\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 20 08:35:58.854973 master-0 kubenswrapper[7476]: I0320 08:35:58.854938 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cce21ae1-63de-49be-a027-084a101e650b-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"cce21ae1-63de-49be-a027-084a101e650b\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 20 08:35:58.877547 master-0 kubenswrapper[7476]: I0320 08:35:58.877490 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cce21ae1-63de-49be-a027-084a101e650b-kube-api-access\") pod \"installer-1-master-0\" (UID: \"cce21ae1-63de-49be-a027-084a101e650b\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 20 08:35:58.933303 master-0 kubenswrapper[7476]: I0320 08:35:58.933044 7476 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Mar 20 08:35:59.878445 master-0 kubenswrapper[7476]: I0320 08:35:59.878382 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-j9jjm" Mar 20 08:36:00.511586 master-0 kubenswrapper[7476]: I0320 08:36:00.511484 7476 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Mar 20 08:36:00.512491 master-0 kubenswrapper[7476]: I0320 08:36:00.512070 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Mar 20 08:36:00.513851 master-0 kubenswrapper[7476]: I0320 08:36:00.513790 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Mar 20 08:36:00.574372 master-0 kubenswrapper[7476]: I0320 08:36:00.574243 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5aaa6cb1-ce0d-4cd5-a6dc-b27e9f6b13ad-var-lock\") pod \"installer-1-master-0\" (UID: \"5aaa6cb1-ce0d-4cd5-a6dc-b27e9f6b13ad\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 20 08:36:00.574676 master-0 kubenswrapper[7476]: I0320 08:36:00.574406 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5aaa6cb1-ce0d-4cd5-a6dc-b27e9f6b13ad-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"5aaa6cb1-ce0d-4cd5-a6dc-b27e9f6b13ad\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 20 08:36:00.574676 master-0 kubenswrapper[7476]: I0320 08:36:00.574439 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/5aaa6cb1-ce0d-4cd5-a6dc-b27e9f6b13ad-kube-api-access\") pod \"installer-1-master-0\" (UID: \"5aaa6cb1-ce0d-4cd5-a6dc-b27e9f6b13ad\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 20 08:36:00.582435 master-0 kubenswrapper[7476]: I0320 08:36:00.582383 7476 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Mar 20 08:36:00.582629 master-0 kubenswrapper[7476]: I0320 08:36:00.582595 7476 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/installer-1-master-0" podUID="c80fea3f-9ac4-4060-bb90-19f9de724299" containerName="installer" containerID="cri-o://17798884b9a2e50bed959f2a24ce2f0fe9b1568f4973ec8270b3b7b5e3bb3b54" gracePeriod=30 Mar 20 08:36:00.634777 master-0 kubenswrapper[7476]: I0320 08:36:00.634246 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Mar 20 08:36:00.675977 master-0 kubenswrapper[7476]: I0320 08:36:00.675883 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5aaa6cb1-ce0d-4cd5-a6dc-b27e9f6b13ad-var-lock\") pod \"installer-1-master-0\" (UID: \"5aaa6cb1-ce0d-4cd5-a6dc-b27e9f6b13ad\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 20 08:36:00.676158 master-0 kubenswrapper[7476]: I0320 08:36:00.676030 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5aaa6cb1-ce0d-4cd5-a6dc-b27e9f6b13ad-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"5aaa6cb1-ce0d-4cd5-a6dc-b27e9f6b13ad\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 20 08:36:00.676158 master-0 kubenswrapper[7476]: I0320 08:36:00.676065 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/5aaa6cb1-ce0d-4cd5-a6dc-b27e9f6b13ad-kube-api-access\") pod \"installer-1-master-0\" (UID: \"5aaa6cb1-ce0d-4cd5-a6dc-b27e9f6b13ad\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 20 08:36:00.676456 master-0 kubenswrapper[7476]: I0320 08:36:00.676316 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5aaa6cb1-ce0d-4cd5-a6dc-b27e9f6b13ad-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"5aaa6cb1-ce0d-4cd5-a6dc-b27e9f6b13ad\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 20 08:36:00.676605 master-0 kubenswrapper[7476]: I0320 08:36:00.676518 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5aaa6cb1-ce0d-4cd5-a6dc-b27e9f6b13ad-var-lock\") pod \"installer-1-master-0\" (UID: \"5aaa6cb1-ce0d-4cd5-a6dc-b27e9f6b13ad\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 20 08:36:00.714719 master-0 kubenswrapper[7476]: I0320 08:36:00.714680 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5aaa6cb1-ce0d-4cd5-a6dc-b27e9f6b13ad-kube-api-access\") pod \"installer-1-master-0\" (UID: \"5aaa6cb1-ce0d-4cd5-a6dc-b27e9f6b13ad\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 20 08:36:00.827373 master-0 kubenswrapper[7476]: I0320 08:36:00.827234 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Mar 20 08:36:02.651316 master-0 kubenswrapper[7476]: I0320 08:36:02.650442 7476 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-catalogd/catalogd-controller-manager-6864dc98f7-tf2gj"] Mar 20 08:36:02.658700 master-0 kubenswrapper[7476]: I0320 08:36:02.653107 7476 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-tf2gj" Mar 20 08:36:02.658700 master-0 kubenswrapper[7476]: I0320 08:36:02.658581 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt" Mar 20 08:36:02.658700 master-0 kubenswrapper[7476]: I0320 08:36:02.658652 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt" Mar 20 08:36:02.659159 master-0 kubenswrapper[7476]: I0320 08:36:02.658711 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert" Mar 20 08:36:02.673360 master-0 kubenswrapper[7476]: I0320 08:36:02.672475 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle" Mar 20 08:36:02.711313 master-0 kubenswrapper[7476]: I0320 08:36:02.711161 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/08d9196b-b68f-421b-8754-bfbaa4020a97-catalogserver-certs\") pod \"catalogd-controller-manager-6864dc98f7-tf2gj\" (UID: \"08d9196b-b68f-421b-8754-bfbaa4020a97\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-tf2gj" Mar 20 08:36:02.711581 master-0 kubenswrapper[7476]: I0320 08:36:02.711396 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/08d9196b-b68f-421b-8754-bfbaa4020a97-cache\") pod \"catalogd-controller-manager-6864dc98f7-tf2gj\" (UID: \"08d9196b-b68f-421b-8754-bfbaa4020a97\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-tf2gj" Mar 20 08:36:02.711581 master-0 kubenswrapper[7476]: I0320 08:36:02.711474 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvqv5\" (UniqueName: 
\"kubernetes.io/projected/08d9196b-b68f-421b-8754-bfbaa4020a97-kube-api-access-tvqv5\") pod \"catalogd-controller-manager-6864dc98f7-tf2gj\" (UID: \"08d9196b-b68f-421b-8754-bfbaa4020a97\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-tf2gj" Mar 20 08:36:02.711857 master-0 kubenswrapper[7476]: I0320 08:36:02.711664 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/08d9196b-b68f-421b-8754-bfbaa4020a97-etc-containers\") pod \"catalogd-controller-manager-6864dc98f7-tf2gj\" (UID: \"08d9196b-b68f-421b-8754-bfbaa4020a97\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-tf2gj" Mar 20 08:36:02.711857 master-0 kubenswrapper[7476]: I0320 08:36:02.711746 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/08d9196b-b68f-421b-8754-bfbaa4020a97-ca-certs\") pod \"catalogd-controller-manager-6864dc98f7-tf2gj\" (UID: \"08d9196b-b68f-421b-8754-bfbaa4020a97\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-tf2gj" Mar 20 08:36:02.711857 master-0 kubenswrapper[7476]: I0320 08:36:02.711809 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/08d9196b-b68f-421b-8754-bfbaa4020a97-etc-docker\") pod \"catalogd-controller-manager-6864dc98f7-tf2gj\" (UID: \"08d9196b-b68f-421b-8754-bfbaa4020a97\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-tf2gj" Mar 20 08:36:02.814309 master-0 kubenswrapper[7476]: I0320 08:36:02.813654 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/08d9196b-b68f-421b-8754-bfbaa4020a97-ca-certs\") pod \"catalogd-controller-manager-6864dc98f7-tf2gj\" (UID: \"08d9196b-b68f-421b-8754-bfbaa4020a97\") " 
pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-tf2gj" Mar 20 08:36:02.814309 master-0 kubenswrapper[7476]: I0320 08:36:02.813706 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/08d9196b-b68f-421b-8754-bfbaa4020a97-etc-docker\") pod \"catalogd-controller-manager-6864dc98f7-tf2gj\" (UID: \"08d9196b-b68f-421b-8754-bfbaa4020a97\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-tf2gj" Mar 20 08:36:02.814309 master-0 kubenswrapper[7476]: I0320 08:36:02.813733 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/08d9196b-b68f-421b-8754-bfbaa4020a97-catalogserver-certs\") pod \"catalogd-controller-manager-6864dc98f7-tf2gj\" (UID: \"08d9196b-b68f-421b-8754-bfbaa4020a97\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-tf2gj" Mar 20 08:36:02.814309 master-0 kubenswrapper[7476]: I0320 08:36:02.813771 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/08d9196b-b68f-421b-8754-bfbaa4020a97-cache\") pod \"catalogd-controller-manager-6864dc98f7-tf2gj\" (UID: \"08d9196b-b68f-421b-8754-bfbaa4020a97\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-tf2gj" Mar 20 08:36:02.814309 master-0 kubenswrapper[7476]: I0320 08:36:02.813797 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvqv5\" (UniqueName: \"kubernetes.io/projected/08d9196b-b68f-421b-8754-bfbaa4020a97-kube-api-access-tvqv5\") pod \"catalogd-controller-manager-6864dc98f7-tf2gj\" (UID: \"08d9196b-b68f-421b-8754-bfbaa4020a97\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-tf2gj" Mar 20 08:36:02.814309 master-0 kubenswrapper[7476]: I0320 08:36:02.813849 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-containers\" (UniqueName: \"kubernetes.io/host-path/08d9196b-b68f-421b-8754-bfbaa4020a97-etc-containers\") pod \"catalogd-controller-manager-6864dc98f7-tf2gj\" (UID: \"08d9196b-b68f-421b-8754-bfbaa4020a97\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-tf2gj" Mar 20 08:36:02.814309 master-0 kubenswrapper[7476]: I0320 08:36:02.813964 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/08d9196b-b68f-421b-8754-bfbaa4020a97-etc-containers\") pod \"catalogd-controller-manager-6864dc98f7-tf2gj\" (UID: \"08d9196b-b68f-421b-8754-bfbaa4020a97\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-tf2gj" Mar 20 08:36:02.814309 master-0 kubenswrapper[7476]: I0320 08:36:02.814067 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/08d9196b-b68f-421b-8754-bfbaa4020a97-etc-docker\") pod \"catalogd-controller-manager-6864dc98f7-tf2gj\" (UID: \"08d9196b-b68f-421b-8754-bfbaa4020a97\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-tf2gj" Mar 20 08:36:02.815220 master-0 kubenswrapper[7476]: I0320 08:36:02.815175 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/08d9196b-b68f-421b-8754-bfbaa4020a97-cache\") pod \"catalogd-controller-manager-6864dc98f7-tf2gj\" (UID: \"08d9196b-b68f-421b-8754-bfbaa4020a97\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-tf2gj" Mar 20 08:36:02.817700 master-0 kubenswrapper[7476]: I0320 08:36:02.817613 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/08d9196b-b68f-421b-8754-bfbaa4020a97-catalogserver-certs\") pod \"catalogd-controller-manager-6864dc98f7-tf2gj\" (UID: \"08d9196b-b68f-421b-8754-bfbaa4020a97\") " 
pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-tf2gj" Mar 20 08:36:02.820071 master-0 kubenswrapper[7476]: I0320 08:36:02.820034 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/08d9196b-b68f-421b-8754-bfbaa4020a97-ca-certs\") pod \"catalogd-controller-manager-6864dc98f7-tf2gj\" (UID: \"08d9196b-b68f-421b-8754-bfbaa4020a97\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-tf2gj" Mar 20 08:36:02.969809 master-0 kubenswrapper[7476]: I0320 08:36:02.969688 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-catalogd/catalogd-controller-manager-6864dc98f7-tf2gj"] Mar 20 08:36:02.971903 master-0 kubenswrapper[7476]: I0320 08:36:02.971865 7476 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Mar 20 08:36:02.973090 master-0 kubenswrapper[7476]: I0320 08:36:02.973065 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Mar 20 08:36:02.976169 master-0 kubenswrapper[7476]: I0320 08:36:02.976103 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvqv5\" (UniqueName: \"kubernetes.io/projected/08d9196b-b68f-421b-8754-bfbaa4020a97-kube-api-access-tvqv5\") pod \"catalogd-controller-manager-6864dc98f7-tf2gj\" (UID: \"08d9196b-b68f-421b-8754-bfbaa4020a97\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-tf2gj" Mar 20 08:36:02.989772 master-0 kubenswrapper[7476]: I0320 08:36:02.989725 7476 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-tf2gj" Mar 20 08:36:03.017515 master-0 kubenswrapper[7476]: I0320 08:36:03.017478 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3f797a39-3da6-49d7-8275-672dedbfb3cb-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"3f797a39-3da6-49d7-8275-672dedbfb3cb\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 20 08:36:03.017636 master-0 kubenswrapper[7476]: I0320 08:36:03.017544 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3f797a39-3da6-49d7-8275-672dedbfb3cb-kube-api-access\") pod \"installer-2-master-0\" (UID: \"3f797a39-3da6-49d7-8275-672dedbfb3cb\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 20 08:36:03.017636 master-0 kubenswrapper[7476]: I0320 08:36:03.017593 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3f797a39-3da6-49d7-8275-672dedbfb3cb-var-lock\") pod \"installer-2-master-0\" (UID: \"3f797a39-3da6-49d7-8275-672dedbfb3cb\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 20 08:36:03.072037 master-0 kubenswrapper[7476]: I0320 08:36:03.071962 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Mar 20 08:36:03.125796 master-0 kubenswrapper[7476]: I0320 08:36:03.124741 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3f797a39-3da6-49d7-8275-672dedbfb3cb-kube-api-access\") pod \"installer-2-master-0\" (UID: \"3f797a39-3da6-49d7-8275-672dedbfb3cb\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 20 08:36:03.125796 master-0 kubenswrapper[7476]: I0320 08:36:03.124826 
7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3f797a39-3da6-49d7-8275-672dedbfb3cb-var-lock\") pod \"installer-2-master-0\" (UID: \"3f797a39-3da6-49d7-8275-672dedbfb3cb\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 20 08:36:03.125796 master-0 kubenswrapper[7476]: I0320 08:36:03.124882 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3f797a39-3da6-49d7-8275-672dedbfb3cb-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"3f797a39-3da6-49d7-8275-672dedbfb3cb\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 20 08:36:03.126281 master-0 kubenswrapper[7476]: I0320 08:36:03.126043 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3f797a39-3da6-49d7-8275-672dedbfb3cb-var-lock\") pod \"installer-2-master-0\" (UID: \"3f797a39-3da6-49d7-8275-672dedbfb3cb\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 20 08:36:03.126281 master-0 kubenswrapper[7476]: I0320 08:36:03.126049 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3f797a39-3da6-49d7-8275-672dedbfb3cb-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"3f797a39-3da6-49d7-8275-672dedbfb3cb\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 20 08:36:03.150328 master-0 kubenswrapper[7476]: I0320 08:36:03.149952 7476 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-57777556ff-rnnfz"] Mar 20 08:36:03.150883 master-0 kubenswrapper[7476]: I0320 08:36:03.150851 7476 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-rnnfz" Mar 20 08:36:03.153328 master-0 kubenswrapper[7476]: I0320 08:36:03.153074 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt" Mar 20 08:36:03.153610 master-0 kubenswrapper[7476]: I0320 08:36:03.153571 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt" Mar 20 08:36:03.163565 master-0 kubenswrapper[7476]: I0320 08:36:03.163506 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle" Mar 20 08:36:03.170057 master-0 kubenswrapper[7476]: I0320 08:36:03.168569 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3f797a39-3da6-49d7-8275-672dedbfb3cb-kube-api-access\") pod \"installer-2-master-0\" (UID: \"3f797a39-3da6-49d7-8275-672dedbfb3cb\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 20 08:36:03.179292 master-0 kubenswrapper[7476]: I0320 08:36:03.176720 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-57777556ff-rnnfz"] Mar 20 08:36:03.233521 master-0 kubenswrapper[7476]: I0320 08:36:03.232808 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4w7k\" (UniqueName: \"kubernetes.io/projected/bb7b640f-22be-41a9-8ab2-e7ae817e2eb0-kube-api-access-l4w7k\") pod \"operator-controller-controller-manager-57777556ff-rnnfz\" (UID: \"bb7b640f-22be-41a9-8ab2-e7ae817e2eb0\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-rnnfz" Mar 20 08:36:03.233521 master-0 kubenswrapper[7476]: I0320 08:36:03.232901 7476 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/bb7b640f-22be-41a9-8ab2-e7ae817e2eb0-ca-certs\") pod \"operator-controller-controller-manager-57777556ff-rnnfz\" (UID: \"bb7b640f-22be-41a9-8ab2-e7ae817e2eb0\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-rnnfz" Mar 20 08:36:03.233521 master-0 kubenswrapper[7476]: I0320 08:36:03.233051 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/bb7b640f-22be-41a9-8ab2-e7ae817e2eb0-etc-containers\") pod \"operator-controller-controller-manager-57777556ff-rnnfz\" (UID: \"bb7b640f-22be-41a9-8ab2-e7ae817e2eb0\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-rnnfz" Mar 20 08:36:03.233521 master-0 kubenswrapper[7476]: I0320 08:36:03.233209 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/bb7b640f-22be-41a9-8ab2-e7ae817e2eb0-etc-docker\") pod \"operator-controller-controller-manager-57777556ff-rnnfz\" (UID: \"bb7b640f-22be-41a9-8ab2-e7ae817e2eb0\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-rnnfz" Mar 20 08:36:03.233521 master-0 kubenswrapper[7476]: I0320 08:36:03.233238 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/bb7b640f-22be-41a9-8ab2-e7ae817e2eb0-cache\") pod \"operator-controller-controller-manager-57777556ff-rnnfz\" (UID: \"bb7b640f-22be-41a9-8ab2-e7ae817e2eb0\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-rnnfz" Mar 20 08:36:03.298311 master-0 kubenswrapper[7476]: I0320 08:36:03.298133 7476 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-controller-manager/controller-manager-5d9c65fcf4-x9fg6"] Mar 20 08:36:03.311227 master-0 kubenswrapper[7476]: I0320 08:36:03.311188 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Mar 20 08:36:03.330510 master-0 kubenswrapper[7476]: I0320 08:36:03.330468 7476 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b6dd4d5b8-87rv2"] Mar 20 08:36:03.334819 master-0 kubenswrapper[7476]: I0320 08:36:03.334782 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/bb7b640f-22be-41a9-8ab2-e7ae817e2eb0-ca-certs\") pod \"operator-controller-controller-manager-57777556ff-rnnfz\" (UID: \"bb7b640f-22be-41a9-8ab2-e7ae817e2eb0\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-rnnfz" Mar 20 08:36:03.335038 master-0 kubenswrapper[7476]: I0320 08:36:03.335020 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/bb7b640f-22be-41a9-8ab2-e7ae817e2eb0-etc-containers\") pod \"operator-controller-controller-manager-57777556ff-rnnfz\" (UID: \"bb7b640f-22be-41a9-8ab2-e7ae817e2eb0\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-rnnfz" Mar 20 08:36:03.335130 master-0 kubenswrapper[7476]: I0320 08:36:03.335116 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/bb7b640f-22be-41a9-8ab2-e7ae817e2eb0-etc-docker\") pod \"operator-controller-controller-manager-57777556ff-rnnfz\" (UID: \"bb7b640f-22be-41a9-8ab2-e7ae817e2eb0\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-rnnfz" Mar 20 08:36:03.335211 master-0 kubenswrapper[7476]: I0320 08:36:03.335187 7476 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/bb7b640f-22be-41a9-8ab2-e7ae817e2eb0-cache\") pod \"operator-controller-controller-manager-57777556ff-rnnfz\" (UID: \"bb7b640f-22be-41a9-8ab2-e7ae817e2eb0\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-rnnfz" Mar 20 08:36:03.335308 master-0 kubenswrapper[7476]: I0320 08:36:03.335295 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4w7k\" (UniqueName: \"kubernetes.io/projected/bb7b640f-22be-41a9-8ab2-e7ae817e2eb0-kube-api-access-l4w7k\") pod \"operator-controller-controller-manager-57777556ff-rnnfz\" (UID: \"bb7b640f-22be-41a9-8ab2-e7ae817e2eb0\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-rnnfz" Mar 20 08:36:03.337078 master-0 kubenswrapper[7476]: I0320 08:36:03.337060 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/bb7b640f-22be-41a9-8ab2-e7ae817e2eb0-etc-docker\") pod \"operator-controller-controller-manager-57777556ff-rnnfz\" (UID: \"bb7b640f-22be-41a9-8ab2-e7ae817e2eb0\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-rnnfz" Mar 20 08:36:03.337203 master-0 kubenswrapper[7476]: I0320 08:36:03.337189 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/bb7b640f-22be-41a9-8ab2-e7ae817e2eb0-etc-containers\") pod \"operator-controller-controller-manager-57777556ff-rnnfz\" (UID: \"bb7b640f-22be-41a9-8ab2-e7ae817e2eb0\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-rnnfz" Mar 20 08:36:03.337608 master-0 kubenswrapper[7476]: I0320 08:36:03.337593 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: 
\"kubernetes.io/empty-dir/bb7b640f-22be-41a9-8ab2-e7ae817e2eb0-cache\") pod \"operator-controller-controller-manager-57777556ff-rnnfz\" (UID: \"bb7b640f-22be-41a9-8ab2-e7ae817e2eb0\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-rnnfz" Mar 20 08:36:03.341871 master-0 kubenswrapper[7476]: I0320 08:36:03.341845 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/bb7b640f-22be-41a9-8ab2-e7ae817e2eb0-ca-certs\") pod \"operator-controller-controller-manager-57777556ff-rnnfz\" (UID: \"bb7b640f-22be-41a9-8ab2-e7ae817e2eb0\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-rnnfz" Mar 20 08:36:03.370284 master-0 kubenswrapper[7476]: I0320 08:36:03.366865 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4w7k\" (UniqueName: \"kubernetes.io/projected/bb7b640f-22be-41a9-8ab2-e7ae817e2eb0-kube-api-access-l4w7k\") pod \"operator-controller-controller-manager-57777556ff-rnnfz\" (UID: \"bb7b640f-22be-41a9-8ab2-e7ae817e2eb0\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-rnnfz" Mar 20 08:36:03.487378 master-0 kubenswrapper[7476]: I0320 08:36:03.487285 7476 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-rnnfz" Mar 20 08:36:07.051160 master-0 kubenswrapper[7476]: I0320 08:36:07.050341 7476 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-version/cluster-version-operator-56d8475767-jtqd4"] Mar 20 08:36:07.051160 master-0 kubenswrapper[7476]: I0320 08:36:07.050660 7476 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cluster-version/cluster-version-operator-56d8475767-jtqd4" podUID="3776fdb6-25a1-4e3d-bdd1-437c69af3a55" containerName="cluster-version-operator" containerID="cri-o://cf5b5ccbf7753484bbfdedddf72e688b03e3f051328fd147e73fa02f3c9ead8d" gracePeriod=130 Mar 20 08:36:08.195823 master-0 kubenswrapper[7476]: I0320 08:36:08.195772 7476 generic.go:334] "Generic (PLEG): container finished" podID="3776fdb6-25a1-4e3d-bdd1-437c69af3a55" containerID="cf5b5ccbf7753484bbfdedddf72e688b03e3f051328fd147e73fa02f3c9ead8d" exitCode=0 Mar 20 08:36:08.197326 master-0 kubenswrapper[7476]: I0320 08:36:08.195826 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-56d8475767-jtqd4" event={"ID":"3776fdb6-25a1-4e3d-bdd1-437c69af3a55","Type":"ContainerDied","Data":"cf5b5ccbf7753484bbfdedddf72e688b03e3f051328fd147e73fa02f3c9ead8d"} Mar 20 08:36:08.253344 master-0 kubenswrapper[7476]: I0320 08:36:08.251617 7476 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-56d8475767-jtqd4" Mar 20 08:36:08.309926 master-0 kubenswrapper[7476]: I0320 08:36:08.309881 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3776fdb6-25a1-4e3d-bdd1-437c69af3a55-etc-cvo-updatepayloads\") pod \"3776fdb6-25a1-4e3d-bdd1-437c69af3a55\" (UID: \"3776fdb6-25a1-4e3d-bdd1-437c69af3a55\") " Mar 20 08:36:08.310038 master-0 kubenswrapper[7476]: I0320 08:36:08.309964 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3776fdb6-25a1-4e3d-bdd1-437c69af3a55-kube-api-access\") pod \"3776fdb6-25a1-4e3d-bdd1-437c69af3a55\" (UID: \"3776fdb6-25a1-4e3d-bdd1-437c69af3a55\") " Mar 20 08:36:08.310038 master-0 kubenswrapper[7476]: I0320 08:36:08.310003 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3776fdb6-25a1-4e3d-bdd1-437c69af3a55-service-ca\") pod \"3776fdb6-25a1-4e3d-bdd1-437c69af3a55\" (UID: \"3776fdb6-25a1-4e3d-bdd1-437c69af3a55\") " Mar 20 08:36:08.310038 master-0 kubenswrapper[7476]: I0320 08:36:08.310006 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3776fdb6-25a1-4e3d-bdd1-437c69af3a55-etc-cvo-updatepayloads" (OuterVolumeSpecName: "etc-cvo-updatepayloads") pod "3776fdb6-25a1-4e3d-bdd1-437c69af3a55" (UID: "3776fdb6-25a1-4e3d-bdd1-437c69af3a55"). InnerVolumeSpecName "etc-cvo-updatepayloads". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:36:08.310211 master-0 kubenswrapper[7476]: I0320 08:36:08.310033 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3776fdb6-25a1-4e3d-bdd1-437c69af3a55-serving-cert\") pod \"3776fdb6-25a1-4e3d-bdd1-437c69af3a55\" (UID: \"3776fdb6-25a1-4e3d-bdd1-437c69af3a55\") " Mar 20 08:36:08.310211 master-0 kubenswrapper[7476]: I0320 08:36:08.310199 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3776fdb6-25a1-4e3d-bdd1-437c69af3a55-etc-ssl-certs\") pod \"3776fdb6-25a1-4e3d-bdd1-437c69af3a55\" (UID: \"3776fdb6-25a1-4e3d-bdd1-437c69af3a55\") " Mar 20 08:36:08.310705 master-0 kubenswrapper[7476]: I0320 08:36:08.310674 7476 reconciler_common.go:293] "Volume detached for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/3776fdb6-25a1-4e3d-bdd1-437c69af3a55-etc-cvo-updatepayloads\") on node \"master-0\" DevicePath \"\"" Mar 20 08:36:08.310768 master-0 kubenswrapper[7476]: I0320 08:36:08.310717 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3776fdb6-25a1-4e3d-bdd1-437c69af3a55-etc-ssl-certs" (OuterVolumeSpecName: "etc-ssl-certs") pod "3776fdb6-25a1-4e3d-bdd1-437c69af3a55" (UID: "3776fdb6-25a1-4e3d-bdd1-437c69af3a55"). InnerVolumeSpecName "etc-ssl-certs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:36:08.312601 master-0 kubenswrapper[7476]: I0320 08:36:08.312565 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3776fdb6-25a1-4e3d-bdd1-437c69af3a55-service-ca" (OuterVolumeSpecName: "service-ca") pod "3776fdb6-25a1-4e3d-bdd1-437c69af3a55" (UID: "3776fdb6-25a1-4e3d-bdd1-437c69af3a55"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:36:08.315811 master-0 kubenswrapper[7476]: I0320 08:36:08.314510 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3776fdb6-25a1-4e3d-bdd1-437c69af3a55-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3776fdb6-25a1-4e3d-bdd1-437c69af3a55" (UID: "3776fdb6-25a1-4e3d-bdd1-437c69af3a55"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:36:08.372483 master-0 kubenswrapper[7476]: I0320 08:36:08.372442 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3776fdb6-25a1-4e3d-bdd1-437c69af3a55-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3776fdb6-25a1-4e3d-bdd1-437c69af3a55" (UID: "3776fdb6-25a1-4e3d-bdd1-437c69af3a55"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:36:08.415323 master-0 kubenswrapper[7476]: I0320 08:36:08.412078 7476 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3776fdb6-25a1-4e3d-bdd1-437c69af3a55-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 20 08:36:08.415323 master-0 kubenswrapper[7476]: I0320 08:36:08.412114 7476 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3776fdb6-25a1-4e3d-bdd1-437c69af3a55-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 20 08:36:08.415323 master-0 kubenswrapper[7476]: I0320 08:36:08.412127 7476 reconciler_common.go:293] "Volume detached for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/3776fdb6-25a1-4e3d-bdd1-437c69af3a55-etc-ssl-certs\") on node \"master-0\" DevicePath \"\"" Mar 20 08:36:08.415323 master-0 kubenswrapper[7476]: I0320 08:36:08.412139 7476 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/3776fdb6-25a1-4e3d-bdd1-437c69af3a55-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 20 08:36:08.647007 master-0 kubenswrapper[7476]: I0320 08:36:08.646966 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"] Mar 20 08:36:08.716334 master-0 kubenswrapper[7476]: I0320 08:36:08.715694 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Mar 20 08:36:08.747335 master-0 kubenswrapper[7476]: I0320 08:36:08.747291 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Mar 20 08:36:08.749842 master-0 kubenswrapper[7476]: W0320 08:36:08.748171 7476 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod3f797a39_3da6_49d7_8275_672dedbfb3cb.slice/crio-38d2ed2b0903d82b0cee6026db795c5066d5e62fc2a83e9dd29918947289fc1b WatchSource:0}: Error finding container 38d2ed2b0903d82b0cee6026db795c5066d5e62fc2a83e9dd29918947289fc1b: Status 404 returned error can't find the container with id 38d2ed2b0903d82b0cee6026db795c5066d5e62fc2a83e9dd29918947289fc1b Mar 20 08:36:08.754527 master-0 kubenswrapper[7476]: I0320 08:36:08.753846 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-57777556ff-rnnfz"] Mar 20 08:36:08.758929 master-0 kubenswrapper[7476]: I0320 08:36:08.758897 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-catalogd/catalogd-controller-manager-6864dc98f7-tf2gj"] Mar 20 08:36:09.271245 master-0 kubenswrapper[7476]: I0320 08:36:09.271198 7476 generic.go:334] "Generic (PLEG): container finished" podID="6a6a187d-5b25-4d63-939e-c04e07369371" containerID="8e319f4d734fd58b9a56147a1a2739f2e0bba0c55aaa97507d13bc7de8bfc3f1" exitCode=0 Mar 20 08:36:09.276495 master-0 kubenswrapper[7476]: I0320 08:36:09.271288 7476 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-5595498c49-hrfrr" event={"ID":"6a6a187d-5b25-4d63-939e-c04e07369371","Type":"ContainerDied","Data":"8e319f4d734fd58b9a56147a1a2739f2e0bba0c55aaa97507d13bc7de8bfc3f1"} Mar 20 08:36:09.292371 master-0 kubenswrapper[7476]: I0320 08:36:09.292300 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hdw98" event={"ID":"9ce482dc-d0ac-40bc-9058-a1cfdc81575e","Type":"ContainerStarted","Data":"f73f25708579a25c6b06011558340a049bc18814ec77f148fd1c4ea077840f7e"} Mar 20 08:36:09.297098 master-0 kubenswrapper[7476]: I0320 08:36:09.294478 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hdw98" Mar 20 08:36:09.311962 master-0 kubenswrapper[7476]: I0320 08:36:09.311843 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hdw98" Mar 20 08:36:09.341293 master-0 kubenswrapper[7476]: I0320 08:36:09.333456 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b6dd4d5b8-87rv2" event={"ID":"59a2bbd7-3b08-45cf-9a7c-542effc09ec2","Type":"ContainerStarted","Data":"ba72d67349d2649c0830894ffcff67ac8b25148135759dc6d5501296bab4fd81"} Mar 20 08:36:09.341293 master-0 kubenswrapper[7476]: I0320 08:36:09.333600 7476 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7b6dd4d5b8-87rv2" podUID="59a2bbd7-3b08-45cf-9a7c-542effc09ec2" containerName="route-controller-manager" containerID="cri-o://ba72d67349d2649c0830894ffcff67ac8b25148135759dc6d5501296bab4fd81" gracePeriod=30 Mar 20 08:36:09.341293 master-0 kubenswrapper[7476]: I0320 08:36:09.334045 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-route-controller-manager/route-controller-manager-7b6dd4d5b8-87rv2" Mar 20 08:36:09.341755 master-0 kubenswrapper[7476]: I0320 08:36:09.341527 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7b6dd4d5b8-87rv2" Mar 20 08:36:09.390795 master-0 kubenswrapper[7476]: I0320 08:36:09.390363 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-89ccd998f-j84r8" event={"ID":"23003a2f-2053-47cc-8133-23eb886d4da0","Type":"ContainerStarted","Data":"cc3c2a9c1f06758b9cf8e7a0bffe7eec7cabce777c5e4901ed4f712103ea4ff6"} Mar 20 08:36:09.390981 master-0 kubenswrapper[7476]: I0320 08:36:09.390958 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-89ccd998f-j84r8" Mar 20 08:36:09.394483 master-0 kubenswrapper[7476]: I0320 08:36:09.392316 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-gskz6" event={"ID":"41253bde-5d09-4ff0-8e7c-4a21fe2b7106","Type":"ContainerStarted","Data":"9ea6c645be9e53fcf3d53f94ed4084999970b2edaa109f3c2638c7e834bf375d"} Mar 20 08:36:09.397611 master-0 kubenswrapper[7476]: I0320 08:36:09.396981 7476 patch_prober.go:28] interesting pod/marketplace-operator-89ccd998f-j84r8 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.12:8080/healthz\": dial tcp 10.128.0.12:8080: connect: connection refused" start-of-body= Mar 20 08:36:09.397611 master-0 kubenswrapper[7476]: I0320 08:36:09.397055 7476 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-89ccd998f-j84r8" podUID="23003a2f-2053-47cc-8133-23eb886d4da0" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.12:8080/healthz\": dial tcp 10.128.0.12:8080: connect: connection refused" Mar 20 08:36:09.407103 master-0 
kubenswrapper[7476]: I0320 08:36:09.404119 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-b25f2" event={"ID":"f202273a-b111-46ce-b404-7e481d2c7ff9","Type":"ContainerStarted","Data":"8da49bc1918ac91b4f777d9bf67f42c03551f09c69724ee02ff7ff48ea061fb1"} Mar 20 08:36:09.407103 master-0 kubenswrapper[7476]: I0320 08:36:09.404185 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-b25f2" event={"ID":"f202273a-b111-46ce-b404-7e481d2c7ff9","Type":"ContainerStarted","Data":"8c083804959a88c9c849b428e0b936db72af00ecf148631a285d481d8c54097f"} Mar 20 08:36:09.419032 master-0 kubenswrapper[7476]: I0320 08:36:09.417137 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-cgc9q" event={"ID":"0e79950f-50a5-46ec-b836-7a35dcce2851","Type":"ContainerStarted","Data":"b497bdf3019e13a087cc9efd50638831fd098ff627001f7158b4b8c8dfb030f6"} Mar 20 08:36:09.419032 master-0 kubenswrapper[7476]: I0320 08:36:09.417777 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-cgc9q" Mar 20 08:36:09.430980 master-0 kubenswrapper[7476]: I0320 08:36:09.430889 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"cce21ae1-63de-49be-a027-084a101e650b","Type":"ContainerStarted","Data":"ca41c67c83bd762137f7fd4b62a8f992e4f4eaa7271546ffae17c37b0db5004e"} Mar 20 08:36:09.465905 master-0 kubenswrapper[7476]: I0320 08:36:09.465835 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-6vgt6" event={"ID":"5707066a-bd66-41bc-8cea-cff1630ab5ee","Type":"ContainerStarted","Data":"c35a92d30debfb7629245f7755d88359cda5ae68ac4c29098c6ed3194958cb7d"} Mar 20 08:36:09.472708 master-0 
kubenswrapper[7476]: I0320 08:36:09.472664 7476 generic.go:334] "Generic (PLEG): container finished" podID="ca56e37d-80ea-432b-a6d9-f4e904a40e10" containerID="3d7b06fc76103946132a85d04845bb83f54fb34b66bfd2a1c6aa9a2bee7fdecc" exitCode=0 Mar 20 08:36:09.473164 master-0 kubenswrapper[7476]: I0320 08:36:09.472729 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-64b65cddf5-gx7h7" event={"ID":"ca56e37d-80ea-432b-a6d9-f4e904a40e10","Type":"ContainerDied","Data":"3d7b06fc76103946132a85d04845bb83f54fb34b66bfd2a1c6aa9a2bee7fdecc"} Mar 20 08:36:09.493536 master-0 kubenswrapper[7476]: I0320 08:36:09.493476 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-tf2gj" event={"ID":"08d9196b-b68f-421b-8754-bfbaa4020a97","Type":"ContainerStarted","Data":"5c5ae9bfcc3ce85bdfe3cccc194f20c35db6cc7998e4967e566b59f8729c9691"} Mar 20 08:36:09.499651 master-0 kubenswrapper[7476]: I0320 08:36:09.499594 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-nfrth" event={"ID":"00350ac7-b40a-4459-b94c-a37d7b613645","Type":"ContainerStarted","Data":"b177047353db36f3ff10d6a164d468e06e55f0b60bd6bd6dbb4908d3c99f4892"} Mar 20 08:36:09.512602 master-0 kubenswrapper[7476]: I0320 08:36:09.512555 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-vhrdf" event={"ID":"74bebf0b-6727-4959-8239-a9389e630524","Type":"ContainerStarted","Data":"46a769eaa885d6f2aee7986a052f5cb914f5503a0051214e8b4e113fe0f1651a"} Mar 20 08:36:09.514977 master-0 kubenswrapper[7476]: I0320 08:36:09.514915 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"5aaa6cb1-ce0d-4cd5-a6dc-b27e9f6b13ad","Type":"ContainerStarted","Data":"b0eff1df152bda1c9199f1965c6d884a1ca8857c9ac2c86f41d8e2066ebd225a"} Mar 20 08:36:09.516386 master-0 kubenswrapper[7476]: 
I0320 08:36:09.516364 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-rnnfz" event={"ID":"bb7b640f-22be-41a9-8ab2-e7ae817e2eb0","Type":"ContainerStarted","Data":"88728a20ccc0653acaf97665b53dae69b14ad65649feac36dc7ea652a98e2296"} Mar 20 08:36:09.519850 master-0 kubenswrapper[7476]: I0320 08:36:09.519425 7476 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7b6dd4d5b8-87rv2" podStartSLOduration=9.772335771 podStartE2EDuration="26.519411783s" podCreationTimestamp="2026-03-20 08:35:43 +0000 UTC" firstStartedPulling="2026-03-20 08:35:51.358719631 +0000 UTC m=+32.327488147" lastFinishedPulling="2026-03-20 08:36:08.105795603 +0000 UTC m=+49.074564159" observedRunningTime="2026-03-20 08:36:09.517384011 +0000 UTC m=+50.486152547" watchObservedRunningTime="2026-03-20 08:36:09.519411783 +0000 UTC m=+50.488180309" Mar 20 08:36:09.524721 master-0 kubenswrapper[7476]: I0320 08:36:09.524694 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-t926t" event={"ID":"7ab32efc-7cc5-4e36-9c1c-05efb19914e2","Type":"ContainerStarted","Data":"0d8f00d5770f6ae7f6068bb266931b98fb82f37747584485f97ea270f43d2a15"} Mar 20 08:36:09.530577 master-0 kubenswrapper[7476]: I0320 08:36:09.525985 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-t926t" Mar 20 08:36:09.531640 master-0 kubenswrapper[7476]: I0320 08:36:09.531218 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-56d8475767-jtqd4" event={"ID":"3776fdb6-25a1-4e3d-bdd1-437c69af3a55","Type":"ContainerDied","Data":"aa0e338538aafee4fbc36907bfe4019e2f3a8c90665916ca155d6ae8d2916484"} Mar 20 08:36:09.531640 master-0 kubenswrapper[7476]: I0320 08:36:09.531315 7476 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-56d8475767-jtqd4" Mar 20 08:36:09.533225 master-0 kubenswrapper[7476]: I0320 08:36:09.533192 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-t926t" Mar 20 08:36:09.533279 master-0 kubenswrapper[7476]: I0320 08:36:09.533227 7476 scope.go:117] "RemoveContainer" containerID="cf5b5ccbf7753484bbfdedddf72e688b03e3f051328fd147e73fa02f3c9ead8d" Mar 20 08:36:09.537483 master-0 kubenswrapper[7476]: I0320 08:36:09.534049 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"3f797a39-3da6-49d7-8275-672dedbfb3cb","Type":"ContainerStarted","Data":"38d2ed2b0903d82b0cee6026db795c5066d5e62fc2a83e9dd29918947289fc1b"} Mar 20 08:36:09.553863 master-0 kubenswrapper[7476]: I0320 08:36:09.553674 7476 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5d9c65fcf4-x9fg6" podUID="0636bf2d-3ba2-4f2b-9f8d-da11f4507985" containerName="controller-manager" containerID="cri-o://fec043aebdf1f7518d69dc4f3dd5d093a74c49a727fa7e298ec99c94ad42c126" gracePeriod=30 Mar 20 08:36:09.554082 master-0 kubenswrapper[7476]: I0320 08:36:09.553425 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d9c65fcf4-x9fg6" event={"ID":"0636bf2d-3ba2-4f2b-9f8d-da11f4507985","Type":"ContainerStarted","Data":"fec043aebdf1f7518d69dc4f3dd5d093a74c49a727fa7e298ec99c94ad42c126"} Mar 20 08:36:09.554082 master-0 kubenswrapper[7476]: I0320 08:36:09.554064 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5d9c65fcf4-x9fg6" Mar 20 08:36:09.580637 master-0 kubenswrapper[7476]: I0320 08:36:09.580564 7476 patch_prober.go:28] interesting 
pod/controller-manager-5d9c65fcf4-x9fg6 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.38:8443/healthz\": dial tcp 10.128.0.38:8443: connect: connection reset by peer" start-of-body= Mar 20 08:36:09.580765 master-0 kubenswrapper[7476]: I0320 08:36:09.580712 7476 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-5d9c65fcf4-x9fg6" podUID="0636bf2d-3ba2-4f2b-9f8d-da11f4507985" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.38:8443/healthz\": dial tcp 10.128.0.38:8443: connect: connection reset by peer" Mar 20 08:36:09.637339 master-0 kubenswrapper[7476]: I0320 08:36:09.637285 7476 patch_prober.go:28] interesting pod/controller-manager-5d9c65fcf4-x9fg6 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.38:8443/healthz\": dial tcp 10.128.0.38:8443: connect: connection refused" start-of-body= Mar 20 08:36:09.637539 master-0 kubenswrapper[7476]: I0320 08:36:09.637347 7476 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-5d9c65fcf4-x9fg6" podUID="0636bf2d-3ba2-4f2b-9f8d-da11f4507985" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.38:8443/healthz\": dial tcp 10.128.0.38:8443: connect: connection refused" Mar 20 08:36:09.727335 master-0 kubenswrapper[7476]: I0320 08:36:09.723973 7476 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5d9c65fcf4-x9fg6" podStartSLOduration=9.744623624 podStartE2EDuration="26.723942657s" podCreationTimestamp="2026-03-20 08:35:43 +0000 UTC" firstStartedPulling="2026-03-20 08:35:51.126253514 +0000 UTC m=+32.095022040" lastFinishedPulling="2026-03-20 08:36:08.105572517 +0000 UTC m=+49.074341073" observedRunningTime="2026-03-20 
08:36:09.710366346 +0000 UTC m=+50.679134882" watchObservedRunningTime="2026-03-20 08:36:09.723942657 +0000 UTC m=+50.692711183" Mar 20 08:36:09.748362 master-0 kubenswrapper[7476]: I0320 08:36:09.748306 7476 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-version/cluster-version-operator-56d8475767-jtqd4"] Mar 20 08:36:09.773447 master-0 kubenswrapper[7476]: I0320 08:36:09.773368 7476 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cluster-version/cluster-version-operator-56d8475767-jtqd4"] Mar 20 08:36:09.810589 master-0 kubenswrapper[7476]: I0320 08:36:09.805329 7476 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-7d58488df-bzstx"] Mar 20 08:36:09.810589 master-0 kubenswrapper[7476]: E0320 08:36:09.805594 7476 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3776fdb6-25a1-4e3d-bdd1-437c69af3a55" containerName="cluster-version-operator" Mar 20 08:36:09.810589 master-0 kubenswrapper[7476]: I0320 08:36:09.805608 7476 state_mem.go:107] "Deleted CPUSet assignment" podUID="3776fdb6-25a1-4e3d-bdd1-437c69af3a55" containerName="cluster-version-operator" Mar 20 08:36:09.810589 master-0 kubenswrapper[7476]: I0320 08:36:09.805714 7476 memory_manager.go:354] "RemoveStaleState removing state" podUID="3776fdb6-25a1-4e3d-bdd1-437c69af3a55" containerName="cluster-version-operator" Mar 20 08:36:09.810589 master-0 kubenswrapper[7476]: I0320 08:36:09.806177 7476 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7d58488df-bzstx"
Mar 20 08:36:09.810589 master-0 kubenswrapper[7476]: I0320 08:36:09.809221 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Mar 20 08:36:09.810589 master-0 kubenswrapper[7476]: I0320 08:36:09.809333 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Mar 20 08:36:09.813654 master-0 kubenswrapper[7476]: I0320 08:36:09.813505 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Mar 20 08:36:09.903214 master-0 kubenswrapper[7476]: I0320 08:36:09.902898 7476 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wqxn7"]
Mar 20 08:36:09.905774 master-0 kubenswrapper[7476]: I0320 08:36:09.905732 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wqxn7"
Mar 20 08:36:09.926372 master-0 kubenswrapper[7476]: I0320 08:36:09.923992 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bca4cc7c-839d-4877-b0aa-c07607fea404-serving-cert\") pod \"cluster-version-operator-7d58488df-bzstx\" (UID: \"bca4cc7c-839d-4877-b0aa-c07607fea404\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-bzstx"
Mar 20 08:36:09.926372 master-0 kubenswrapper[7476]: I0320 08:36:09.924148 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/bca4cc7c-839d-4877-b0aa-c07607fea404-etc-ssl-certs\") pod \"cluster-version-operator-7d58488df-bzstx\" (UID: \"bca4cc7c-839d-4877-b0aa-c07607fea404\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-bzstx"
Mar 20 08:36:09.926372 master-0 kubenswrapper[7476]: I0320 08:36:09.924199 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/bca4cc7c-839d-4877-b0aa-c07607fea404-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7d58488df-bzstx\" (UID: \"bca4cc7c-839d-4877-b0aa-c07607fea404\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-bzstx"
Mar 20 08:36:09.926372 master-0 kubenswrapper[7476]: I0320 08:36:09.924224 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bca4cc7c-839d-4877-b0aa-c07607fea404-service-ca\") pod \"cluster-version-operator-7d58488df-bzstx\" (UID: \"bca4cc7c-839d-4877-b0aa-c07607fea404\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-bzstx"
Mar 20 08:36:09.926372 master-0 kubenswrapper[7476]: I0320 08:36:09.924393 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bca4cc7c-839d-4877-b0aa-c07607fea404-kube-api-access\") pod \"cluster-version-operator-7d58488df-bzstx\" (UID: \"bca4cc7c-839d-4877-b0aa-c07607fea404\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-bzstx"
Mar 20 08:36:09.947809 master-0 kubenswrapper[7476]: I0320 08:36:09.947755 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wqxn7"]
Mar 20 08:36:09.949195 master-0 kubenswrapper[7476]: I0320 08:36:09.948627 7476 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b6dd4d5b8-87rv2"
Mar 20 08:36:10.028756 master-0 kubenswrapper[7476]: I0320 08:36:10.025193 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bca4cc7c-839d-4877-b0aa-c07607fea404-service-ca\") pod \"cluster-version-operator-7d58488df-bzstx\" (UID: \"bca4cc7c-839d-4877-b0aa-c07607fea404\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-bzstx"
Mar 20 08:36:10.028756 master-0 kubenswrapper[7476]: I0320 08:36:10.025352 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccb242ff-347a-4b02-8d9e-ba4dd62a5052-utilities\") pod \"redhat-marketplace-wqxn7\" (UID: \"ccb242ff-347a-4b02-8d9e-ba4dd62a5052\") " pod="openshift-marketplace/redhat-marketplace-wqxn7"
Mar 20 08:36:10.028756 master-0 kubenswrapper[7476]: I0320 08:36:10.025375 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccb242ff-347a-4b02-8d9e-ba4dd62a5052-catalog-content\") pod \"redhat-marketplace-wqxn7\" (UID: \"ccb242ff-347a-4b02-8d9e-ba4dd62a5052\") " pod="openshift-marketplace/redhat-marketplace-wqxn7"
Mar 20 08:36:10.028756 master-0 kubenswrapper[7476]: I0320 08:36:10.025394 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bca4cc7c-839d-4877-b0aa-c07607fea404-kube-api-access\") pod \"cluster-version-operator-7d58488df-bzstx\" (UID: \"bca4cc7c-839d-4877-b0aa-c07607fea404\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-bzstx"
Mar 20 08:36:10.028756 master-0 kubenswrapper[7476]: I0320 08:36:10.025429 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bca4cc7c-839d-4877-b0aa-c07607fea404-serving-cert\") pod \"cluster-version-operator-7d58488df-bzstx\" (UID: \"bca4cc7c-839d-4877-b0aa-c07607fea404\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-bzstx"
Mar 20 08:36:10.028756 master-0 kubenswrapper[7476]: I0320 08:36:10.025825 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8cds\" (UniqueName: \"kubernetes.io/projected/ccb242ff-347a-4b02-8d9e-ba4dd62a5052-kube-api-access-m8cds\") pod \"redhat-marketplace-wqxn7\" (UID: \"ccb242ff-347a-4b02-8d9e-ba4dd62a5052\") " pod="openshift-marketplace/redhat-marketplace-wqxn7"
Mar 20 08:36:10.028756 master-0 kubenswrapper[7476]: I0320 08:36:10.025920 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/bca4cc7c-839d-4877-b0aa-c07607fea404-etc-ssl-certs\") pod \"cluster-version-operator-7d58488df-bzstx\" (UID: \"bca4cc7c-839d-4877-b0aa-c07607fea404\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-bzstx"
Mar 20 08:36:10.028756 master-0 kubenswrapper[7476]: I0320 08:36:10.025945 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/bca4cc7c-839d-4877-b0aa-c07607fea404-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7d58488df-bzstx\" (UID: \"bca4cc7c-839d-4877-b0aa-c07607fea404\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-bzstx"
Mar 20 08:36:10.028756 master-0 kubenswrapper[7476]: I0320 08:36:10.026067 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/bca4cc7c-839d-4877-b0aa-c07607fea404-etc-ssl-certs\") pod \"cluster-version-operator-7d58488df-bzstx\" (UID: \"bca4cc7c-839d-4877-b0aa-c07607fea404\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-bzstx"
Mar 20 08:36:10.028756 master-0 kubenswrapper[7476]: I0320 08:36:10.026212 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/bca4cc7c-839d-4877-b0aa-c07607fea404-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7d58488df-bzstx\" (UID: \"bca4cc7c-839d-4877-b0aa-c07607fea404\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-bzstx"
Mar 20 08:36:10.028756 master-0 kubenswrapper[7476]: I0320 08:36:10.026483 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bca4cc7c-839d-4877-b0aa-c07607fea404-service-ca\") pod \"cluster-version-operator-7d58488df-bzstx\" (UID: \"bca4cc7c-839d-4877-b0aa-c07607fea404\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-bzstx"
Mar 20 08:36:10.032764 master-0 kubenswrapper[7476]: I0320 08:36:10.032713 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bca4cc7c-839d-4877-b0aa-c07607fea404-serving-cert\") pod \"cluster-version-operator-7d58488df-bzstx\" (UID: \"bca4cc7c-839d-4877-b0aa-c07607fea404\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-bzstx"
Mar 20 08:36:10.057316 master-0 kubenswrapper[7476]: I0320 08:36:10.055033 7476 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5d9c65fcf4-x9fg6"
Mar 20 08:36:10.057316 master-0 kubenswrapper[7476]: I0320 08:36:10.055313 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bca4cc7c-839d-4877-b0aa-c07607fea404-kube-api-access\") pod \"cluster-version-operator-7d58488df-bzstx\" (UID: \"bca4cc7c-839d-4877-b0aa-c07607fea404\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-bzstx"
Mar 20 08:36:10.129155 master-0 kubenswrapper[7476]: I0320 08:36:10.128347 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59a2bbd7-3b08-45cf-9a7c-542effc09ec2-serving-cert\") pod \"59a2bbd7-3b08-45cf-9a7c-542effc09ec2\" (UID: \"59a2bbd7-3b08-45cf-9a7c-542effc09ec2\") "
Mar 20 08:36:10.129155 master-0 kubenswrapper[7476]: I0320 08:36:10.128413 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/59a2bbd7-3b08-45cf-9a7c-542effc09ec2-client-ca\") pod \"59a2bbd7-3b08-45cf-9a7c-542effc09ec2\" (UID: \"59a2bbd7-3b08-45cf-9a7c-542effc09ec2\") "
Mar 20 08:36:10.129155 master-0 kubenswrapper[7476]: I0320 08:36:10.128582 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59a2bbd7-3b08-45cf-9a7c-542effc09ec2-config\") pod \"59a2bbd7-3b08-45cf-9a7c-542effc09ec2\" (UID: \"59a2bbd7-3b08-45cf-9a7c-542effc09ec2\") "
Mar 20 08:36:10.129155 master-0 kubenswrapper[7476]: I0320 08:36:10.128624 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r687z\" (UniqueName: \"kubernetes.io/projected/59a2bbd7-3b08-45cf-9a7c-542effc09ec2-kube-api-access-r687z\") pod \"59a2bbd7-3b08-45cf-9a7c-542effc09ec2\" (UID: \"59a2bbd7-3b08-45cf-9a7c-542effc09ec2\") "
Mar 20 08:36:10.129155 master-0 kubenswrapper[7476]: I0320 08:36:10.128829 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccb242ff-347a-4b02-8d9e-ba4dd62a5052-utilities\") pod \"redhat-marketplace-wqxn7\" (UID: \"ccb242ff-347a-4b02-8d9e-ba4dd62a5052\") " pod="openshift-marketplace/redhat-marketplace-wqxn7"
Mar 20 08:36:10.129155 master-0 kubenswrapper[7476]: I0320 08:36:10.128857 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccb242ff-347a-4b02-8d9e-ba4dd62a5052-catalog-content\") pod \"redhat-marketplace-wqxn7\" (UID: \"ccb242ff-347a-4b02-8d9e-ba4dd62a5052\") " pod="openshift-marketplace/redhat-marketplace-wqxn7"
Mar 20 08:36:10.129155 master-0 kubenswrapper[7476]: I0320 08:36:10.128909 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8cds\" (UniqueName: \"kubernetes.io/projected/ccb242ff-347a-4b02-8d9e-ba4dd62a5052-kube-api-access-m8cds\") pod \"redhat-marketplace-wqxn7\" (UID: \"ccb242ff-347a-4b02-8d9e-ba4dd62a5052\") " pod="openshift-marketplace/redhat-marketplace-wqxn7"
Mar 20 08:36:10.130888 master-0 kubenswrapper[7476]: I0320 08:36:10.129834 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccb242ff-347a-4b02-8d9e-ba4dd62a5052-utilities\") pod \"redhat-marketplace-wqxn7\" (UID: \"ccb242ff-347a-4b02-8d9e-ba4dd62a5052\") " pod="openshift-marketplace/redhat-marketplace-wqxn7"
Mar 20 08:36:10.132512 master-0 kubenswrapper[7476]: I0320 08:36:10.132460 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccb242ff-347a-4b02-8d9e-ba4dd62a5052-catalog-content\") pod \"redhat-marketplace-wqxn7\" (UID: \"ccb242ff-347a-4b02-8d9e-ba4dd62a5052\") " pod="openshift-marketplace/redhat-marketplace-wqxn7"
Mar 20 08:36:10.133200 master-0 kubenswrapper[7476]: I0320 08:36:10.133148 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59a2bbd7-3b08-45cf-9a7c-542effc09ec2-client-ca" (OuterVolumeSpecName: "client-ca") pod "59a2bbd7-3b08-45cf-9a7c-542effc09ec2" (UID: "59a2bbd7-3b08-45cf-9a7c-542effc09ec2"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 08:36:10.136679 master-0 kubenswrapper[7476]: I0320 08:36:10.136632 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59a2bbd7-3b08-45cf-9a7c-542effc09ec2-config" (OuterVolumeSpecName: "config") pod "59a2bbd7-3b08-45cf-9a7c-542effc09ec2" (UID: "59a2bbd7-3b08-45cf-9a7c-542effc09ec2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 08:36:10.144160 master-0 kubenswrapper[7476]: I0320 08:36:10.144100 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59a2bbd7-3b08-45cf-9a7c-542effc09ec2-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "59a2bbd7-3b08-45cf-9a7c-542effc09ec2" (UID: "59a2bbd7-3b08-45cf-9a7c-542effc09ec2"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:36:10.154684 master-0 kubenswrapper[7476]: I0320 08:36:10.154646 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59a2bbd7-3b08-45cf-9a7c-542effc09ec2-kube-api-access-r687z" (OuterVolumeSpecName: "kube-api-access-r687z") pod "59a2bbd7-3b08-45cf-9a7c-542effc09ec2" (UID: "59a2bbd7-3b08-45cf-9a7c-542effc09ec2"). InnerVolumeSpecName "kube-api-access-r687z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:36:10.160093 master-0 kubenswrapper[7476]: I0320 08:36:10.160039 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8cds\" (UniqueName: \"kubernetes.io/projected/ccb242ff-347a-4b02-8d9e-ba4dd62a5052-kube-api-access-m8cds\") pod \"redhat-marketplace-wqxn7\" (UID: \"ccb242ff-347a-4b02-8d9e-ba4dd62a5052\") " pod="openshift-marketplace/redhat-marketplace-wqxn7"
Mar 20 08:36:10.224787 master-0 kubenswrapper[7476]: I0320 08:36:10.224744 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7d58488df-bzstx"
Mar 20 08:36:10.230562 master-0 kubenswrapper[7476]: I0320 08:36:10.230053 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0636bf2d-3ba2-4f2b-9f8d-da11f4507985-serving-cert\") pod \"0636bf2d-3ba2-4f2b-9f8d-da11f4507985\" (UID: \"0636bf2d-3ba2-4f2b-9f8d-da11f4507985\") "
Mar 20 08:36:10.230562 master-0 kubenswrapper[7476]: I0320 08:36:10.230110 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0636bf2d-3ba2-4f2b-9f8d-da11f4507985-config\") pod \"0636bf2d-3ba2-4f2b-9f8d-da11f4507985\" (UID: \"0636bf2d-3ba2-4f2b-9f8d-da11f4507985\") "
Mar 20 08:36:10.230562 master-0 kubenswrapper[7476]: I0320 08:36:10.230136 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzt54\" (UniqueName: \"kubernetes.io/projected/0636bf2d-3ba2-4f2b-9f8d-da11f4507985-kube-api-access-vzt54\") pod \"0636bf2d-3ba2-4f2b-9f8d-da11f4507985\" (UID: \"0636bf2d-3ba2-4f2b-9f8d-da11f4507985\") "
Mar 20 08:36:10.230562 master-0 kubenswrapper[7476]: I0320 08:36:10.230210 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0636bf2d-3ba2-4f2b-9f8d-da11f4507985-proxy-ca-bundles\") pod \"0636bf2d-3ba2-4f2b-9f8d-da11f4507985\" (UID: \"0636bf2d-3ba2-4f2b-9f8d-da11f4507985\") "
Mar 20 08:36:10.230562 master-0 kubenswrapper[7476]: I0320 08:36:10.230244 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0636bf2d-3ba2-4f2b-9f8d-da11f4507985-client-ca\") pod \"0636bf2d-3ba2-4f2b-9f8d-da11f4507985\" (UID: \"0636bf2d-3ba2-4f2b-9f8d-da11f4507985\") "
Mar 20 08:36:10.230562 master-0 kubenswrapper[7476]: I0320 08:36:10.230432 7476 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r687z\" (UniqueName: \"kubernetes.io/projected/59a2bbd7-3b08-45cf-9a7c-542effc09ec2-kube-api-access-r687z\") on node \"master-0\" DevicePath \"\""
Mar 20 08:36:10.230562 master-0 kubenswrapper[7476]: I0320 08:36:10.230444 7476 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59a2bbd7-3b08-45cf-9a7c-542effc09ec2-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 20 08:36:10.230562 master-0 kubenswrapper[7476]: I0320 08:36:10.230454 7476 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/59a2bbd7-3b08-45cf-9a7c-542effc09ec2-client-ca\") on node \"master-0\" DevicePath \"\""
Mar 20 08:36:10.230562 master-0 kubenswrapper[7476]: I0320 08:36:10.230463 7476 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59a2bbd7-3b08-45cf-9a7c-542effc09ec2-config\") on node \"master-0\" DevicePath \"\""
Mar 20 08:36:10.235181 master-0 kubenswrapper[7476]: I0320 08:36:10.231076 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0636bf2d-3ba2-4f2b-9f8d-da11f4507985-client-ca" (OuterVolumeSpecName: "client-ca") pod "0636bf2d-3ba2-4f2b-9f8d-da11f4507985" (UID: "0636bf2d-3ba2-4f2b-9f8d-da11f4507985"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 08:36:10.235181 master-0 kubenswrapper[7476]: I0320 08:36:10.232782 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0636bf2d-3ba2-4f2b-9f8d-da11f4507985-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "0636bf2d-3ba2-4f2b-9f8d-da11f4507985" (UID: "0636bf2d-3ba2-4f2b-9f8d-da11f4507985"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 08:36:10.235181 master-0 kubenswrapper[7476]: I0320 08:36:10.232978 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0636bf2d-3ba2-4f2b-9f8d-da11f4507985-config" (OuterVolumeSpecName: "config") pod "0636bf2d-3ba2-4f2b-9f8d-da11f4507985" (UID: "0636bf2d-3ba2-4f2b-9f8d-da11f4507985"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 08:36:10.238504 master-0 kubenswrapper[7476]: I0320 08:36:10.237440 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0636bf2d-3ba2-4f2b-9f8d-da11f4507985-kube-api-access-vzt54" (OuterVolumeSpecName: "kube-api-access-vzt54") pod "0636bf2d-3ba2-4f2b-9f8d-da11f4507985" (UID: "0636bf2d-3ba2-4f2b-9f8d-da11f4507985"). InnerVolumeSpecName "kube-api-access-vzt54". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:36:10.248453 master-0 kubenswrapper[7476]: I0320 08:36:10.248396 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0636bf2d-3ba2-4f2b-9f8d-da11f4507985-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0636bf2d-3ba2-4f2b-9f8d-da11f4507985" (UID: "0636bf2d-3ba2-4f2b-9f8d-da11f4507985"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:36:10.258758 master-0 kubenswrapper[7476]: W0320 08:36:10.258703 7476 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbca4cc7c_839d_4877_b0aa_c07607fea404.slice/crio-1d602414649c8268857260746c9b07c7eebb871e3592e5e80020d1637e9816cc WatchSource:0}: Error finding container 1d602414649c8268857260746c9b07c7eebb871e3592e5e80020d1637e9816cc: Status 404 returned error can't find the container with id 1d602414649c8268857260746c9b07c7eebb871e3592e5e80020d1637e9816cc
Mar 20 08:36:10.265778 master-0 kubenswrapper[7476]: I0320 08:36:10.265712 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wqxn7"
Mar 20 08:36:10.298475 master-0 kubenswrapper[7476]: I0320 08:36:10.290697 7476 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"]
Mar 20 08:36:10.332151 master-0 kubenswrapper[7476]: I0320 08:36:10.332059 7476 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0636bf2d-3ba2-4f2b-9f8d-da11f4507985-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\""
Mar 20 08:36:10.332151 master-0 kubenswrapper[7476]: I0320 08:36:10.332095 7476 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0636bf2d-3ba2-4f2b-9f8d-da11f4507985-client-ca\") on node \"master-0\" DevicePath \"\""
Mar 20 08:36:10.332151 master-0 kubenswrapper[7476]: I0320 08:36:10.332105 7476 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0636bf2d-3ba2-4f2b-9f8d-da11f4507985-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 20 08:36:10.332151 master-0 kubenswrapper[7476]: I0320 08:36:10.332115 7476 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0636bf2d-3ba2-4f2b-9f8d-da11f4507985-config\") on node \"master-0\" DevicePath \"\""
Mar 20 08:36:10.332151 master-0 kubenswrapper[7476]: I0320 08:36:10.332125 7476 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzt54\" (UniqueName: \"kubernetes.io/projected/0636bf2d-3ba2-4f2b-9f8d-da11f4507985-kube-api-access-vzt54\") on node \"master-0\" DevicePath \"\""
Mar 20 08:36:10.352788 master-0 kubenswrapper[7476]: I0320 08:36:10.352501 7476 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-xn4s4"]
Mar 20 08:36:10.352906 master-0 kubenswrapper[7476]: E0320 08:36:10.352846 7476 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59a2bbd7-3b08-45cf-9a7c-542effc09ec2" containerName="route-controller-manager"
Mar 20 08:36:10.352906 master-0 kubenswrapper[7476]: I0320 08:36:10.352863 7476 state_mem.go:107] "Deleted CPUSet assignment" podUID="59a2bbd7-3b08-45cf-9a7c-542effc09ec2" containerName="route-controller-manager"
Mar 20 08:36:10.352906 master-0 kubenswrapper[7476]: E0320 08:36:10.352887 7476 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0636bf2d-3ba2-4f2b-9f8d-da11f4507985" containerName="controller-manager"
Mar 20 08:36:10.352906 master-0 kubenswrapper[7476]: I0320 08:36:10.352895 7476 state_mem.go:107] "Deleted CPUSet assignment" podUID="0636bf2d-3ba2-4f2b-9f8d-da11f4507985" containerName="controller-manager"
Mar 20 08:36:10.353021 master-0 kubenswrapper[7476]: I0320 08:36:10.352987 7476 memory_manager.go:354] "RemoveStaleState removing state" podUID="0636bf2d-3ba2-4f2b-9f8d-da11f4507985" containerName="controller-manager"
Mar 20 08:36:10.353021 master-0 kubenswrapper[7476]: I0320 08:36:10.353001 7476 memory_manager.go:354] "RemoveStaleState removing state" podUID="59a2bbd7-3b08-45cf-9a7c-542effc09ec2" containerName="route-controller-manager"
Mar 20 08:36:10.354420 master-0 kubenswrapper[7476]: I0320 08:36:10.353745 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xn4s4"
Mar 20 08:36:10.370466 master-0 kubenswrapper[7476]: I0320 08:36:10.370434 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xn4s4"]
Mar 20 08:36:10.434788 master-0 kubenswrapper[7476]: I0320 08:36:10.432993 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wglt6\" (UniqueName: \"kubernetes.io/projected/dd53f6c4-da30-4996-8b62-7dd1cd3a3e83-kube-api-access-wglt6\") pod \"redhat-operators-xn4s4\" (UID: \"dd53f6c4-da30-4996-8b62-7dd1cd3a3e83\") " pod="openshift-marketplace/redhat-operators-xn4s4"
Mar 20 08:36:10.434788 master-0 kubenswrapper[7476]: I0320 08:36:10.433085 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd53f6c4-da30-4996-8b62-7dd1cd3a3e83-catalog-content\") pod \"redhat-operators-xn4s4\" (UID: \"dd53f6c4-da30-4996-8b62-7dd1cd3a3e83\") " pod="openshift-marketplace/redhat-operators-xn4s4"
Mar 20 08:36:10.434788 master-0 kubenswrapper[7476]: I0320 08:36:10.433109 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd53f6c4-da30-4996-8b62-7dd1cd3a3e83-utilities\") pod \"redhat-operators-xn4s4\" (UID: \"dd53f6c4-da30-4996-8b62-7dd1cd3a3e83\") " pod="openshift-marketplace/redhat-operators-xn4s4"
Mar 20 08:36:10.533818 master-0 kubenswrapper[7476]: I0320 08:36:10.533762 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wglt6\" (UniqueName: \"kubernetes.io/projected/dd53f6c4-da30-4996-8b62-7dd1cd3a3e83-kube-api-access-wglt6\") pod \"redhat-operators-xn4s4\" (UID: \"dd53f6c4-da30-4996-8b62-7dd1cd3a3e83\") " pod="openshift-marketplace/redhat-operators-xn4s4"
Mar 20 08:36:10.534006 master-0 kubenswrapper[7476]: I0320 08:36:10.533870 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd53f6c4-da30-4996-8b62-7dd1cd3a3e83-catalog-content\") pod \"redhat-operators-xn4s4\" (UID: \"dd53f6c4-da30-4996-8b62-7dd1cd3a3e83\") " pod="openshift-marketplace/redhat-operators-xn4s4"
Mar 20 08:36:10.534006 master-0 kubenswrapper[7476]: I0320 08:36:10.533892 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd53f6c4-da30-4996-8b62-7dd1cd3a3e83-utilities\") pod \"redhat-operators-xn4s4\" (UID: \"dd53f6c4-da30-4996-8b62-7dd1cd3a3e83\") " pod="openshift-marketplace/redhat-operators-xn4s4"
Mar 20 08:36:10.534382 master-0 kubenswrapper[7476]: I0320 08:36:10.534353 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd53f6c4-da30-4996-8b62-7dd1cd3a3e83-utilities\") pod \"redhat-operators-xn4s4\" (UID: \"dd53f6c4-da30-4996-8b62-7dd1cd3a3e83\") " pod="openshift-marketplace/redhat-operators-xn4s4"
Mar 20 08:36:10.534980 master-0 kubenswrapper[7476]: I0320 08:36:10.534934 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd53f6c4-da30-4996-8b62-7dd1cd3a3e83-catalog-content\") pod \"redhat-operators-xn4s4\" (UID: \"dd53f6c4-da30-4996-8b62-7dd1cd3a3e83\") " pod="openshift-marketplace/redhat-operators-xn4s4"
Mar 20 08:36:10.564810 master-0 kubenswrapper[7476]: I0320 08:36:10.564759 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"cce21ae1-63de-49be-a027-084a101e650b","Type":"ContainerStarted","Data":"08b76c47992e775acd809c6af275e2c7e9a0096419764ac5862de8d43565af46"}
Mar 20 08:36:10.569200 master-0 kubenswrapper[7476]: I0320 08:36:10.569159 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-tf2gj" event={"ID":"08d9196b-b68f-421b-8754-bfbaa4020a97","Type":"ContainerStarted","Data":"2c241c5cc1bda01a54d125786cac6f467e2e7cd45da3764b80c745165babdd10"}
Mar 20 08:36:10.569297 master-0 kubenswrapper[7476]: I0320 08:36:10.569210 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-tf2gj" event={"ID":"08d9196b-b68f-421b-8754-bfbaa4020a97","Type":"ContainerStarted","Data":"bfeca22e5c430d4bc0fffa7a152cc4559e40218ee50bb5357e4fb7fc605dfba3"}
Mar 20 08:36:10.569297 master-0 kubenswrapper[7476]: I0320 08:36:10.569287 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-tf2gj"
Mar 20 08:36:10.572483 master-0 kubenswrapper[7476]: I0320 08:36:10.571141 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-nfrth" event={"ID":"00350ac7-b40a-4459-b94c-a37d7b613645","Type":"ContainerStarted","Data":"f2c2ef80b2d5381aae9d20f86a2fb3626f8d02e8194d288ba0a38ca637403d39"}
Mar 20 08:36:10.573951 master-0 kubenswrapper[7476]: I0320 08:36:10.573916 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wglt6\" (UniqueName: \"kubernetes.io/projected/dd53f6c4-da30-4996-8b62-7dd1cd3a3e83-kube-api-access-wglt6\") pod \"redhat-operators-xn4s4\" (UID: \"dd53f6c4-da30-4996-8b62-7dd1cd3a3e83\") " pod="openshift-marketplace/redhat-operators-xn4s4"
Mar 20 08:36:10.581668 master-0 kubenswrapper[7476]: I0320 08:36:10.581629 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"3f797a39-3da6-49d7-8275-672dedbfb3cb","Type":"ContainerStarted","Data":"1f529399ce65ec10992ec94a9166f5bbd3b0c8a2867fd0f3134cbfca100927a9"}
Mar 20 08:36:10.584250 master-0 kubenswrapper[7476]: I0320 08:36:10.584232 7476 generic.go:334] "Generic (PLEG): container finished" podID="0636bf2d-3ba2-4f2b-9f8d-da11f4507985" containerID="fec043aebdf1f7518d69dc4f3dd5d093a74c49a727fa7e298ec99c94ad42c126" exitCode=0
Mar 20 08:36:10.584411 master-0 kubenswrapper[7476]: I0320 08:36:10.584369 7476 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5d9c65fcf4-x9fg6"
Mar 20 08:36:10.584491 master-0 kubenswrapper[7476]: I0320 08:36:10.584390 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d9c65fcf4-x9fg6" event={"ID":"0636bf2d-3ba2-4f2b-9f8d-da11f4507985","Type":"ContainerDied","Data":"fec043aebdf1f7518d69dc4f3dd5d093a74c49a727fa7e298ec99c94ad42c126"}
Mar 20 08:36:10.584572 master-0 kubenswrapper[7476]: I0320 08:36:10.584560 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5d9c65fcf4-x9fg6" event={"ID":"0636bf2d-3ba2-4f2b-9f8d-da11f4507985","Type":"ContainerDied","Data":"b8cef6d632932f41894e048ea12376ac52c931e5410c8eae315d28796ea0835f"}
Mar 20 08:36:10.584660 master-0 kubenswrapper[7476]: I0320 08:36:10.584648 7476 scope.go:117] "RemoveContainer" containerID="fec043aebdf1f7518d69dc4f3dd5d093a74c49a727fa7e298ec99c94ad42c126"
Mar 20 08:36:10.595868 master-0 kubenswrapper[7476]: I0320 08:36:10.595807 7476 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-1-master-0" podStartSLOduration=12.595791724 podStartE2EDuration="12.595791724s" podCreationTimestamp="2026-03-20 08:35:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:36:10.594699585 +0000 UTC m=+51.563468111" watchObservedRunningTime="2026-03-20 08:36:10.595791724 +0000 UTC m=+51.564560250"
Mar 20 08:36:10.604481 master-0 kubenswrapper[7476]: I0320 08:36:10.599981 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7d58488df-bzstx" event={"ID":"bca4cc7c-839d-4877-b0aa-c07607fea404","Type":"ContainerStarted","Data":"31ba0046a64870a1c833f3e20714b8bf32a17da8c12ef6cc43c140fd13d24a10"}
Mar 20 08:36:10.604481 master-0 kubenswrapper[7476]: I0320 08:36:10.600027 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7d58488df-bzstx" event={"ID":"bca4cc7c-839d-4877-b0aa-c07607fea404","Type":"ContainerStarted","Data":"1d602414649c8268857260746c9b07c7eebb871e3592e5e80020d1637e9816cc"}
Mar 20 08:36:10.608280 master-0 kubenswrapper[7476]: I0320 08:36:10.608243 7476 scope.go:117] "RemoveContainer" containerID="fec043aebdf1f7518d69dc4f3dd5d093a74c49a727fa7e298ec99c94ad42c126"
Mar 20 08:36:10.609949 master-0 kubenswrapper[7476]: E0320 08:36:10.609930 7476 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fec043aebdf1f7518d69dc4f3dd5d093a74c49a727fa7e298ec99c94ad42c126\": container with ID starting with fec043aebdf1f7518d69dc4f3dd5d093a74c49a727fa7e298ec99c94ad42c126 not found: ID does not exist" containerID="fec043aebdf1f7518d69dc4f3dd5d093a74c49a727fa7e298ec99c94ad42c126"
Mar 20 08:36:10.610059 master-0 kubenswrapper[7476]: I0320 08:36:10.610014 7476 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fec043aebdf1f7518d69dc4f3dd5d093a74c49a727fa7e298ec99c94ad42c126"} err="failed to get container status \"fec043aebdf1f7518d69dc4f3dd5d093a74c49a727fa7e298ec99c94ad42c126\": rpc error: code = NotFound desc = could not find container \"fec043aebdf1f7518d69dc4f3dd5d093a74c49a727fa7e298ec99c94ad42c126\": container with ID starting with fec043aebdf1f7518d69dc4f3dd5d093a74c49a727fa7e298ec99c94ad42c126 not found: ID does not exist"
Mar 20 08:36:10.611098 master-0 kubenswrapper[7476]: I0320 08:36:10.611060 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-vhrdf" event={"ID":"74bebf0b-6727-4959-8239-a9389e630524","Type":"ContainerStarted","Data":"c75547816c7beb0588174159cdcc45e5aaa905924c1e2a6b0d4ab73f71bb71c9"}
Mar 20 08:36:10.618697 master-0 kubenswrapper[7476]: I0320 08:36:10.618654 7476 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-2-master-0" podStartSLOduration=8.618641336 podStartE2EDuration="8.618641336s" podCreationTimestamp="2026-03-20 08:36:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:36:10.618524943 +0000 UTC m=+51.587293469" watchObservedRunningTime="2026-03-20 08:36:10.618641336 +0000 UTC m=+51.587409862"
Mar 20 08:36:10.619175 master-0 kubenswrapper[7476]: I0320 08:36:10.619139 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-rnnfz" event={"ID":"bb7b640f-22be-41a9-8ab2-e7ae817e2eb0","Type":"ContainerStarted","Data":"bcc2923b1a498cf503f717e7c6dfa4d93b5d5620211265110c6112b306cbe70c"}
Mar 20 08:36:10.619313 master-0 kubenswrapper[7476]: I0320 08:36:10.619184 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-rnnfz" event={"ID":"bb7b640f-22be-41a9-8ab2-e7ae817e2eb0","Type":"ContainerStarted","Data":"853a1945138b3e0ff5252845780fd6a6c7275529314ebd23a219d848ce919728"}
Mar 20 08:36:10.619824 master-0 kubenswrapper[7476]: I0320 08:36:10.619803 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-rnnfz"
Mar 20 08:36:10.621304 master-0 kubenswrapper[7476]: I0320 08:36:10.621275 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-gskz6" event={"ID":"41253bde-5d09-4ff0-8e7c-4a21fe2b7106","Type":"ContainerStarted","Data":"d9e3175f5786280bcf8e7ae3ed2dc3e7aba803ae5eb4d96e967e9d31611a12c9"}
Mar 20 08:36:10.621680 master-0 kubenswrapper[7476]: I0320 08:36:10.621651 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-gskz6"
Mar 20 08:36:10.624273 master-0 kubenswrapper[7476]: I0320 08:36:10.624216 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"5aaa6cb1-ce0d-4cd5-a6dc-b27e9f6b13ad","Type":"ContainerStarted","Data":"2c88b29936632c7e1a12043219b0ccca076956b24225835db93815fb233d613d"}
Mar 20 08:36:10.629141 master-0 kubenswrapper[7476]: I0320 08:36:10.629109 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-64b65cddf5-gx7h7" event={"ID":"ca56e37d-80ea-432b-a6d9-f4e904a40e10","Type":"ContainerStarted","Data":"7392c45b64b53a9362843cd0cf092dd845b1e52691896714689fd92f01fce88d"}
Mar 20 08:36:10.629141 master-0 kubenswrapper[7476]: I0320 08:36:10.629140 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-64b65cddf5-gx7h7" event={"ID":"ca56e37d-80ea-432b-a6d9-f4e904a40e10","Type":"ContainerStarted","Data":"2797028adeb7afd0ff2813fc2ea1cae0a2f80e41616388fb1a6cfacf98dcfbac"}
Mar 20 08:36:10.631696 master-0 kubenswrapper[7476]: I0320 08:36:10.631504 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-5595498c49-hrfrr" event={"ID":"6a6a187d-5b25-4d63-939e-c04e07369371","Type":"ContainerStarted","Data":"3f6725483078843883f69ca9b4dc6c600714be576ad371105f4cf43521ae8c0b"}
Mar 20 08:36:10.633683 master-0 kubenswrapper[7476]: I0320 08:36:10.633653 7476 generic.go:334] "Generic (PLEG): container finished" podID="59a2bbd7-3b08-45cf-9a7c-542effc09ec2" containerID="ba72d67349d2649c0830894ffcff67ac8b25148135759dc6d5501296bab4fd81" exitCode=0
Mar 20 08:36:10.633959 master-0 kubenswrapper[7476]: I0320 08:36:10.633916 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b6dd4d5b8-87rv2" event={"ID":"59a2bbd7-3b08-45cf-9a7c-542effc09ec2","Type":"ContainerDied","Data":"ba72d67349d2649c0830894ffcff67ac8b25148135759dc6d5501296bab4fd81"}
Mar 20 08:36:10.634006 master-0 kubenswrapper[7476]: I0320 08:36:10.633975 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b6dd4d5b8-87rv2" event={"ID":"59a2bbd7-3b08-45cf-9a7c-542effc09ec2","Type":"ContainerDied","Data":"896c3c181e0c584b66757229666574be4f2b845846f02e7d0e9eb6d3a1c7d8f2"}
Mar 20 08:36:10.634039 master-0 kubenswrapper[7476]: I0320 08:36:10.634000 7476 scope.go:117] "RemoveContainer" containerID="ba72d67349d2649c0830894ffcff67ac8b25148135759dc6d5501296bab4fd81"
Mar 20 08:36:10.634787 master-0 kubenswrapper[7476]: I0320 08:36:10.634763 7476 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b6dd4d5b8-87rv2" Mar 20 08:36:10.639631 master-0 kubenswrapper[7476]: I0320 08:36:10.639593 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-89ccd998f-j84r8" Mar 20 08:36:10.659315 master-0 kubenswrapper[7476]: I0320 08:36:10.659249 7476 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-tf2gj" podStartSLOduration=8.659227506 podStartE2EDuration="8.659227506s" podCreationTimestamp="2026-03-20 08:36:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:36:10.645999254 +0000 UTC m=+51.614767800" watchObservedRunningTime="2026-03-20 08:36:10.659227506 +0000 UTC m=+51.627996022" Mar 20 08:36:10.665570 master-0 kubenswrapper[7476]: I0320 08:36:10.659878 7476 scope.go:117] "RemoveContainer" containerID="ba72d67349d2649c0830894ffcff67ac8b25148135759dc6d5501296bab4fd81" Mar 20 08:36:10.665570 master-0 kubenswrapper[7476]: E0320 08:36:10.660339 7476 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba72d67349d2649c0830894ffcff67ac8b25148135759dc6d5501296bab4fd81\": container with ID starting with ba72d67349d2649c0830894ffcff67ac8b25148135759dc6d5501296bab4fd81 not found: ID does not exist" containerID="ba72d67349d2649c0830894ffcff67ac8b25148135759dc6d5501296bab4fd81" Mar 20 08:36:10.665570 master-0 kubenswrapper[7476]: I0320 08:36:10.660396 7476 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba72d67349d2649c0830894ffcff67ac8b25148135759dc6d5501296bab4fd81"} err="failed to get container status \"ba72d67349d2649c0830894ffcff67ac8b25148135759dc6d5501296bab4fd81\": rpc error: code = NotFound desc = could not find 
container \"ba72d67349d2649c0830894ffcff67ac8b25148135759dc6d5501296bab4fd81\": container with ID starting with ba72d67349d2649c0830894ffcff67ac8b25148135759dc6d5501296bab4fd81 not found: ID does not exist" Mar 20 08:36:10.707362 master-0 kubenswrapper[7476]: I0320 08:36:10.707314 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xn4s4" Mar 20 08:36:10.709768 master-0 kubenswrapper[7476]: I0320 08:36:10.709743 7476 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b6dd4d5b8-87rv2"] Mar 20 08:36:10.724567 master-0 kubenswrapper[7476]: I0320 08:36:10.721137 7476 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b6dd4d5b8-87rv2"] Mar 20 08:36:10.740911 master-0 kubenswrapper[7476]: I0320 08:36:10.740226 7476 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-1-master-0" podStartSLOduration=10.740204942 podStartE2EDuration="10.740204942s" podCreationTimestamp="2026-03-20 08:36:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:36:10.73973111 +0000 UTC m=+51.708499636" watchObservedRunningTime="2026-03-20 08:36:10.740204942 +0000 UTC m=+51.708973468" Mar 20 08:36:10.775541 master-0 kubenswrapper[7476]: I0320 08:36:10.775436 7476 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-gskz6" podStartSLOduration=3.916291759 podStartE2EDuration="18.775403653s" podCreationTimestamp="2026-03-20 08:35:52 +0000 UTC" firstStartedPulling="2026-03-20 08:35:53.324882113 +0000 UTC m=+34.293650639" lastFinishedPulling="2026-03-20 08:36:08.183993997 +0000 UTC m=+49.152762533" observedRunningTime="2026-03-20 08:36:10.771685016 +0000 UTC m=+51.740453542" 
watchObservedRunningTime="2026-03-20 08:36:10.775403653 +0000 UTC m=+51.744172179" Mar 20 08:36:10.811617 master-0 kubenswrapper[7476]: I0320 08:36:10.811562 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wqxn7"] Mar 20 08:36:10.829476 master-0 kubenswrapper[7476]: W0320 08:36:10.829387 7476 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podccb242ff_347a_4b02_8d9e_ba4dd62a5052.slice/crio-a8d37d40354efe71efcd0dcc6439359fea82c7340e68c0fdbc32fd28b903eadc WatchSource:0}: Error finding container a8d37d40354efe71efcd0dcc6439359fea82c7340e68c0fdbc32fd28b903eadc: Status 404 returned error can't find the container with id a8d37d40354efe71efcd0dcc6439359fea82c7340e68c0fdbc32fd28b903eadc Mar 20 08:36:10.844475 master-0 kubenswrapper[7476]: I0320 08:36:10.839708 7476 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-rnnfz" podStartSLOduration=7.839688837 podStartE2EDuration="7.839688837s" podCreationTimestamp="2026-03-20 08:36:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:36:10.839328918 +0000 UTC m=+51.808097444" watchObservedRunningTime="2026-03-20 08:36:10.839688837 +0000 UTC m=+51.808457363" Mar 20 08:36:10.877538 master-0 kubenswrapper[7476]: I0320 08:36:10.868918 7476 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-5595498c49-hrfrr" podStartSLOduration=4.202268532 podStartE2EDuration="20.868897753s" podCreationTimestamp="2026-03-20 08:35:50 +0000 UTC" firstStartedPulling="2026-03-20 08:35:51.438572967 +0000 UTC m=+32.407341493" lastFinishedPulling="2026-03-20 08:36:08.105202158 +0000 UTC m=+49.073970714" observedRunningTime="2026-03-20 08:36:10.868811731 +0000 UTC 
m=+51.837580267" watchObservedRunningTime="2026-03-20 08:36:10.868897753 +0000 UTC m=+51.837666279" Mar 20 08:36:10.943006 master-0 kubenswrapper[7476]: I0320 08:36:10.942926 7476 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-7d58488df-bzstx" podStartSLOduration=1.942906169 podStartE2EDuration="1.942906169s" podCreationTimestamp="2026-03-20 08:36:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:36:10.940050335 +0000 UTC m=+51.908818861" watchObservedRunningTime="2026-03-20 08:36:10.942906169 +0000 UTC m=+51.911674695" Mar 20 08:36:11.062486 master-0 kubenswrapper[7476]: I0320 08:36:11.061829 7476 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-64b65cddf5-gx7h7" podStartSLOduration=14.919691059 podStartE2EDuration="32.061813186s" podCreationTimestamp="2026-03-20 08:35:39 +0000 UTC" firstStartedPulling="2026-03-20 08:35:51.038869902 +0000 UTC m=+32.007638438" lastFinishedPulling="2026-03-20 08:36:08.180992029 +0000 UTC m=+49.149760565" observedRunningTime="2026-03-20 08:36:11.058726907 +0000 UTC m=+52.027495433" watchObservedRunningTime="2026-03-20 08:36:11.061813186 +0000 UTC m=+52.030581722" Mar 20 08:36:11.089511 master-0 kubenswrapper[7476]: I0320 08:36:11.078226 7476 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5d9c65fcf4-x9fg6"] Mar 20 08:36:11.089965 master-0 kubenswrapper[7476]: I0320 08:36:11.089936 7476 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5d9c65fcf4-x9fg6"] Mar 20 08:36:11.100436 master-0 kubenswrapper[7476]: I0320 08:36:11.100413 7476 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-5595498c49-hrfrr" Mar 20 08:36:11.100551 master-0 
kubenswrapper[7476]: I0320 08:36:11.100541 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-5595498c49-hrfrr" Mar 20 08:36:11.133321 master-0 kubenswrapper[7476]: I0320 08:36:11.132709 7476 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-5595498c49-hrfrr" Mar 20 08:36:11.238289 master-0 kubenswrapper[7476]: I0320 08:36:11.238223 7476 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-65b46449cf-9fccc"] Mar 20 08:36:11.242273 master-0 kubenswrapper[7476]: I0320 08:36:11.238934 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b46449cf-9fccc" Mar 20 08:36:11.242273 master-0 kubenswrapper[7476]: I0320 08:36:11.242051 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 20 08:36:11.242273 master-0 kubenswrapper[7476]: I0320 08:36:11.242188 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 20 08:36:11.242706 master-0 kubenswrapper[7476]: I0320 08:36:11.242582 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 20 08:36:11.243048 master-0 kubenswrapper[7476]: I0320 08:36:11.242835 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 20 08:36:11.243048 master-0 kubenswrapper[7476]: I0320 08:36:11.242989 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 20 08:36:11.254381 master-0 kubenswrapper[7476]: I0320 08:36:11.251356 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 20 08:36:11.259739 master-0 
kubenswrapper[7476]: I0320 08:36:11.259555 7476 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0636bf2d-3ba2-4f2b-9f8d-da11f4507985" path="/var/lib/kubelet/pods/0636bf2d-3ba2-4f2b-9f8d-da11f4507985/volumes" Mar 20 08:36:11.261148 master-0 kubenswrapper[7476]: I0320 08:36:11.260148 7476 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3776fdb6-25a1-4e3d-bdd1-437c69af3a55" path="/var/lib/kubelet/pods/3776fdb6-25a1-4e3d-bdd1-437c69af3a55/volumes" Mar 20 08:36:11.261148 master-0 kubenswrapper[7476]: I0320 08:36:11.260617 7476 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59a2bbd7-3b08-45cf-9a7c-542effc09ec2" path="/var/lib/kubelet/pods/59a2bbd7-3b08-45cf-9a7c-542effc09ec2/volumes" Mar 20 08:36:11.261148 master-0 kubenswrapper[7476]: I0320 08:36:11.261036 7476 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8488874649-cdk48"] Mar 20 08:36:11.267325 master-0 kubenswrapper[7476]: I0320 08:36:11.261688 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b46449cf-9fccc"] Mar 20 08:36:11.267325 master-0 kubenswrapper[7476]: I0320 08:36:11.261709 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8488874649-cdk48"] Mar 20 08:36:11.267325 master-0 kubenswrapper[7476]: I0320 08:36:11.261782 7476 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8488874649-cdk48" Mar 20 08:36:11.270181 master-0 kubenswrapper[7476]: I0320 08:36:11.269663 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 20 08:36:11.270181 master-0 kubenswrapper[7476]: I0320 08:36:11.269888 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 20 08:36:11.270181 master-0 kubenswrapper[7476]: I0320 08:36:11.270054 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 20 08:36:11.270362 master-0 kubenswrapper[7476]: I0320 08:36:11.270205 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 20 08:36:11.271526 master-0 kubenswrapper[7476]: I0320 08:36:11.271506 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 20 08:36:11.315140 master-0 kubenswrapper[7476]: I0320 08:36:11.314990 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xn4s4"] Mar 20 08:36:11.352107 master-0 kubenswrapper[7476]: I0320 08:36:11.352053 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c200f016-3922-4e90-9061-92fd8c3fad2b-proxy-ca-bundles\") pod \"controller-manager-65b46449cf-9fccc\" (UID: \"c200f016-3922-4e90-9061-92fd8c3fad2b\") " pod="openshift-controller-manager/controller-manager-65b46449cf-9fccc" Mar 20 08:36:11.352284 master-0 kubenswrapper[7476]: I0320 08:36:11.352195 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/c200f016-3922-4e90-9061-92fd8c3fad2b-serving-cert\") pod \"controller-manager-65b46449cf-9fccc\" (UID: \"c200f016-3922-4e90-9061-92fd8c3fad2b\") " pod="openshift-controller-manager/controller-manager-65b46449cf-9fccc" Mar 20 08:36:11.352318 master-0 kubenswrapper[7476]: I0320 08:36:11.352255 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c200f016-3922-4e90-9061-92fd8c3fad2b-config\") pod \"controller-manager-65b46449cf-9fccc\" (UID: \"c200f016-3922-4e90-9061-92fd8c3fad2b\") " pod="openshift-controller-manager/controller-manager-65b46449cf-9fccc" Mar 20 08:36:11.352409 master-0 kubenswrapper[7476]: I0320 08:36:11.352379 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4lpr\" (UniqueName: \"kubernetes.io/projected/f67db558-998e-48e3-9b55-b96029ec000c-kube-api-access-j4lpr\") pod \"route-controller-manager-8488874649-cdk48\" (UID: \"f67db558-998e-48e3-9b55-b96029ec000c\") " pod="openshift-route-controller-manager/route-controller-manager-8488874649-cdk48" Mar 20 08:36:11.352442 master-0 kubenswrapper[7476]: I0320 08:36:11.352419 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f67db558-998e-48e3-9b55-b96029ec000c-serving-cert\") pod \"route-controller-manager-8488874649-cdk48\" (UID: \"f67db558-998e-48e3-9b55-b96029ec000c\") " pod="openshift-route-controller-manager/route-controller-manager-8488874649-cdk48" Mar 20 08:36:11.352471 master-0 kubenswrapper[7476]: I0320 08:36:11.352455 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c200f016-3922-4e90-9061-92fd8c3fad2b-client-ca\") pod \"controller-manager-65b46449cf-9fccc\" (UID: \"c200f016-3922-4e90-9061-92fd8c3fad2b\") " 
pod="openshift-controller-manager/controller-manager-65b46449cf-9fccc" Mar 20 08:36:11.352559 master-0 kubenswrapper[7476]: I0320 08:36:11.352534 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnkjm\" (UniqueName: \"kubernetes.io/projected/c200f016-3922-4e90-9061-92fd8c3fad2b-kube-api-access-cnkjm\") pod \"controller-manager-65b46449cf-9fccc\" (UID: \"c200f016-3922-4e90-9061-92fd8c3fad2b\") " pod="openshift-controller-manager/controller-manager-65b46449cf-9fccc" Mar 20 08:36:11.352592 master-0 kubenswrapper[7476]: I0320 08:36:11.352583 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f67db558-998e-48e3-9b55-b96029ec000c-client-ca\") pod \"route-controller-manager-8488874649-cdk48\" (UID: \"f67db558-998e-48e3-9b55-b96029ec000c\") " pod="openshift-route-controller-manager/route-controller-manager-8488874649-cdk48" Mar 20 08:36:11.352651 master-0 kubenswrapper[7476]: I0320 08:36:11.352631 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f67db558-998e-48e3-9b55-b96029ec000c-config\") pod \"route-controller-manager-8488874649-cdk48\" (UID: \"f67db558-998e-48e3-9b55-b96029ec000c\") " pod="openshift-route-controller-manager/route-controller-manager-8488874649-cdk48" Mar 20 08:36:11.456943 master-0 kubenswrapper[7476]: I0320 08:36:11.456910 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c200f016-3922-4e90-9061-92fd8c3fad2b-serving-cert\") pod \"controller-manager-65b46449cf-9fccc\" (UID: \"c200f016-3922-4e90-9061-92fd8c3fad2b\") " pod="openshift-controller-manager/controller-manager-65b46449cf-9fccc" Mar 20 08:36:11.457419 master-0 kubenswrapper[7476]: I0320 08:36:11.457397 7476 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c200f016-3922-4e90-9061-92fd8c3fad2b-config\") pod \"controller-manager-65b46449cf-9fccc\" (UID: \"c200f016-3922-4e90-9061-92fd8c3fad2b\") " pod="openshift-controller-manager/controller-manager-65b46449cf-9fccc" Mar 20 08:36:11.457476 master-0 kubenswrapper[7476]: I0320 08:36:11.457441 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4lpr\" (UniqueName: \"kubernetes.io/projected/f67db558-998e-48e3-9b55-b96029ec000c-kube-api-access-j4lpr\") pod \"route-controller-manager-8488874649-cdk48\" (UID: \"f67db558-998e-48e3-9b55-b96029ec000c\") " pod="openshift-route-controller-manager/route-controller-manager-8488874649-cdk48" Mar 20 08:36:11.457476 master-0 kubenswrapper[7476]: I0320 08:36:11.457464 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f67db558-998e-48e3-9b55-b96029ec000c-serving-cert\") pod \"route-controller-manager-8488874649-cdk48\" (UID: \"f67db558-998e-48e3-9b55-b96029ec000c\") " pod="openshift-route-controller-manager/route-controller-manager-8488874649-cdk48" Mar 20 08:36:11.457556 master-0 kubenswrapper[7476]: I0320 08:36:11.457484 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c200f016-3922-4e90-9061-92fd8c3fad2b-client-ca\") pod \"controller-manager-65b46449cf-9fccc\" (UID: \"c200f016-3922-4e90-9061-92fd8c3fad2b\") " pod="openshift-controller-manager/controller-manager-65b46449cf-9fccc" Mar 20 08:36:11.457556 master-0 kubenswrapper[7476]: I0320 08:36:11.457515 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnkjm\" (UniqueName: \"kubernetes.io/projected/c200f016-3922-4e90-9061-92fd8c3fad2b-kube-api-access-cnkjm\") pod \"controller-manager-65b46449cf-9fccc\" (UID: 
\"c200f016-3922-4e90-9061-92fd8c3fad2b\") " pod="openshift-controller-manager/controller-manager-65b46449cf-9fccc" Mar 20 08:36:11.457556 master-0 kubenswrapper[7476]: I0320 08:36:11.457538 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f67db558-998e-48e3-9b55-b96029ec000c-client-ca\") pod \"route-controller-manager-8488874649-cdk48\" (UID: \"f67db558-998e-48e3-9b55-b96029ec000c\") " pod="openshift-route-controller-manager/route-controller-manager-8488874649-cdk48" Mar 20 08:36:11.457666 master-0 kubenswrapper[7476]: I0320 08:36:11.457564 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f67db558-998e-48e3-9b55-b96029ec000c-config\") pod \"route-controller-manager-8488874649-cdk48\" (UID: \"f67db558-998e-48e3-9b55-b96029ec000c\") " pod="openshift-route-controller-manager/route-controller-manager-8488874649-cdk48" Mar 20 08:36:11.457666 master-0 kubenswrapper[7476]: I0320 08:36:11.457621 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c200f016-3922-4e90-9061-92fd8c3fad2b-proxy-ca-bundles\") pod \"controller-manager-65b46449cf-9fccc\" (UID: \"c200f016-3922-4e90-9061-92fd8c3fad2b\") " pod="openshift-controller-manager/controller-manager-65b46449cf-9fccc" Mar 20 08:36:11.458954 master-0 kubenswrapper[7476]: I0320 08:36:11.458920 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c200f016-3922-4e90-9061-92fd8c3fad2b-config\") pod \"controller-manager-65b46449cf-9fccc\" (UID: \"c200f016-3922-4e90-9061-92fd8c3fad2b\") " pod="openshift-controller-manager/controller-manager-65b46449cf-9fccc" Mar 20 08:36:11.459068 master-0 kubenswrapper[7476]: I0320 08:36:11.459045 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c200f016-3922-4e90-9061-92fd8c3fad2b-proxy-ca-bundles\") pod \"controller-manager-65b46449cf-9fccc\" (UID: \"c200f016-3922-4e90-9061-92fd8c3fad2b\") " pod="openshift-controller-manager/controller-manager-65b46449cf-9fccc" Mar 20 08:36:11.459362 master-0 kubenswrapper[7476]: I0320 08:36:11.459328 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c200f016-3922-4e90-9061-92fd8c3fad2b-client-ca\") pod \"controller-manager-65b46449cf-9fccc\" (UID: \"c200f016-3922-4e90-9061-92fd8c3fad2b\") " pod="openshift-controller-manager/controller-manager-65b46449cf-9fccc" Mar 20 08:36:11.459684 master-0 kubenswrapper[7476]: I0320 08:36:11.459634 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f67db558-998e-48e3-9b55-b96029ec000c-client-ca\") pod \"route-controller-manager-8488874649-cdk48\" (UID: \"f67db558-998e-48e3-9b55-b96029ec000c\") " pod="openshift-route-controller-manager/route-controller-manager-8488874649-cdk48" Mar 20 08:36:11.459815 master-0 kubenswrapper[7476]: I0320 08:36:11.459787 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f67db558-998e-48e3-9b55-b96029ec000c-config\") pod \"route-controller-manager-8488874649-cdk48\" (UID: \"f67db558-998e-48e3-9b55-b96029ec000c\") " pod="openshift-route-controller-manager/route-controller-manager-8488874649-cdk48" Mar 20 08:36:11.462601 master-0 kubenswrapper[7476]: I0320 08:36:11.462579 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f67db558-998e-48e3-9b55-b96029ec000c-serving-cert\") pod \"route-controller-manager-8488874649-cdk48\" (UID: \"f67db558-998e-48e3-9b55-b96029ec000c\") " pod="openshift-route-controller-manager/route-controller-manager-8488874649-cdk48" 
Mar 20 08:36:11.474992 master-0 kubenswrapper[7476]: I0320 08:36:11.474777 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c200f016-3922-4e90-9061-92fd8c3fad2b-serving-cert\") pod \"controller-manager-65b46449cf-9fccc\" (UID: \"c200f016-3922-4e90-9061-92fd8c3fad2b\") " pod="openshift-controller-manager/controller-manager-65b46449cf-9fccc" Mar 20 08:36:11.476723 master-0 kubenswrapper[7476]: I0320 08:36:11.476695 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4lpr\" (UniqueName: \"kubernetes.io/projected/f67db558-998e-48e3-9b55-b96029ec000c-kube-api-access-j4lpr\") pod \"route-controller-manager-8488874649-cdk48\" (UID: \"f67db558-998e-48e3-9b55-b96029ec000c\") " pod="openshift-route-controller-manager/route-controller-manager-8488874649-cdk48" Mar 20 08:36:11.478441 master-0 kubenswrapper[7476]: I0320 08:36:11.478403 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnkjm\" (UniqueName: \"kubernetes.io/projected/c200f016-3922-4e90-9061-92fd8c3fad2b-kube-api-access-cnkjm\") pod \"controller-manager-65b46449cf-9fccc\" (UID: \"c200f016-3922-4e90-9061-92fd8c3fad2b\") " pod="openshift-controller-manager/controller-manager-65b46449cf-9fccc" Mar 20 08:36:11.536885 master-0 kubenswrapper[7476]: I0320 08:36:11.536671 7476 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gk7zl"] Mar 20 08:36:11.540752 master-0 kubenswrapper[7476]: I0320 08:36:11.539165 7476 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gk7zl" Mar 20 08:36:11.554184 master-0 kubenswrapper[7476]: I0320 08:36:11.552907 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gk7zl"] Mar 20 08:36:11.568698 master-0 kubenswrapper[7476]: I0320 08:36:11.568625 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b46449cf-9fccc" Mar 20 08:36:11.604433 master-0 kubenswrapper[7476]: I0320 08:36:11.597703 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8488874649-cdk48" Mar 20 08:36:11.687294 master-0 kubenswrapper[7476]: I0320 08:36:11.680962 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d524ce06-8969-4b68-b236-9e11af55d854-utilities\") pod \"certified-operators-gk7zl\" (UID: \"d524ce06-8969-4b68-b236-9e11af55d854\") " pod="openshift-marketplace/certified-operators-gk7zl" Mar 20 08:36:11.687294 master-0 kubenswrapper[7476]: I0320 08:36:11.681032 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d524ce06-8969-4b68-b236-9e11af55d854-catalog-content\") pod \"certified-operators-gk7zl\" (UID: \"d524ce06-8969-4b68-b236-9e11af55d854\") " pod="openshift-marketplace/certified-operators-gk7zl" Mar 20 08:36:11.687294 master-0 kubenswrapper[7476]: I0320 08:36:11.681057 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m85d8\" (UniqueName: \"kubernetes.io/projected/d524ce06-8969-4b68-b236-9e11af55d854-kube-api-access-m85d8\") pod \"certified-operators-gk7zl\" (UID: \"d524ce06-8969-4b68-b236-9e11af55d854\") " 
pod="openshift-marketplace/certified-operators-gk7zl"
Mar 20 08:36:11.687294 master-0 kubenswrapper[7476]: I0320 08:36:11.685578 7476 generic.go:334] "Generic (PLEG): container finished" podID="dd53f6c4-da30-4996-8b62-7dd1cd3a3e83" containerID="de289710339938d039ce2491465fd287023cf1577b81fe60b0d4294e17745870" exitCode=0
Mar 20 08:36:11.687294 master-0 kubenswrapper[7476]: I0320 08:36:11.685690 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xn4s4" event={"ID":"dd53f6c4-da30-4996-8b62-7dd1cd3a3e83","Type":"ContainerDied","Data":"de289710339938d039ce2491465fd287023cf1577b81fe60b0d4294e17745870"}
Mar 20 08:36:11.687294 master-0 kubenswrapper[7476]: I0320 08:36:11.685722 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xn4s4" event={"ID":"dd53f6c4-da30-4996-8b62-7dd1cd3a3e83","Type":"ContainerStarted","Data":"c446a250610f8c7824af123712e193acaf406cfdca4a6c66b51ec566b654cfbe"}
Mar 20 08:36:11.711744 master-0 kubenswrapper[7476]: I0320 08:36:11.709956 7476 generic.go:334] "Generic (PLEG): container finished" podID="ccb242ff-347a-4b02-8d9e-ba4dd62a5052" containerID="b9956da416bbab1bdee494776bdb27eb3ac95a887e77cad24c8e769254d76bb0" exitCode=0
Mar 20 08:36:11.712814 master-0 kubenswrapper[7476]: I0320 08:36:11.712194 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wqxn7" event={"ID":"ccb242ff-347a-4b02-8d9e-ba4dd62a5052","Type":"ContainerDied","Data":"b9956da416bbab1bdee494776bdb27eb3ac95a887e77cad24c8e769254d76bb0"}
Mar 20 08:36:11.712814 master-0 kubenswrapper[7476]: I0320 08:36:11.712224 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wqxn7" event={"ID":"ccb242ff-347a-4b02-8d9e-ba4dd62a5052","Type":"ContainerStarted","Data":"a8d37d40354efe71efcd0dcc6439359fea82c7340e68c0fdbc32fd28b903eadc"}
Mar 20 08:36:11.718709 master-0 kubenswrapper[7476]: I0320 08:36:11.717592 7476 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/installer-2-master-0" podUID="3f797a39-3da6-49d7-8275-672dedbfb3cb" containerName="installer" containerID="cri-o://1f529399ce65ec10992ec94a9166f5bbd3b0c8a2867fd0f3134cbfca100927a9" gracePeriod=30
Mar 20 08:36:11.726340 master-0 kubenswrapper[7476]: I0320 08:36:11.723748 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-5595498c49-hrfrr"
Mar 20 08:36:11.786205 master-0 kubenswrapper[7476]: I0320 08:36:11.784880 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d524ce06-8969-4b68-b236-9e11af55d854-utilities\") pod \"certified-operators-gk7zl\" (UID: \"d524ce06-8969-4b68-b236-9e11af55d854\") " pod="openshift-marketplace/certified-operators-gk7zl"
Mar 20 08:36:11.786205 master-0 kubenswrapper[7476]: I0320 08:36:11.785486 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d524ce06-8969-4b68-b236-9e11af55d854-catalog-content\") pod \"certified-operators-gk7zl\" (UID: \"d524ce06-8969-4b68-b236-9e11af55d854\") " pod="openshift-marketplace/certified-operators-gk7zl"
Mar 20 08:36:11.786205 master-0 kubenswrapper[7476]: I0320 08:36:11.785526 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m85d8\" (UniqueName: \"kubernetes.io/projected/d524ce06-8969-4b68-b236-9e11af55d854-kube-api-access-m85d8\") pod \"certified-operators-gk7zl\" (UID: \"d524ce06-8969-4b68-b236-9e11af55d854\") " pod="openshift-marketplace/certified-operators-gk7zl"
Mar 20 08:36:11.786205 master-0 kubenswrapper[7476]: I0320 08:36:11.786080 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d524ce06-8969-4b68-b236-9e11af55d854-utilities\") pod \"certified-operators-gk7zl\" (UID: \"d524ce06-8969-4b68-b236-9e11af55d854\") " pod="openshift-marketplace/certified-operators-gk7zl"
Mar 20 08:36:11.787134 master-0 kubenswrapper[7476]: I0320 08:36:11.786919 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d524ce06-8969-4b68-b236-9e11af55d854-catalog-content\") pod \"certified-operators-gk7zl\" (UID: \"d524ce06-8969-4b68-b236-9e11af55d854\") " pod="openshift-marketplace/certified-operators-gk7zl"
Mar 20 08:36:11.831087 master-0 kubenswrapper[7476]: I0320 08:36:11.831020 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m85d8\" (UniqueName: \"kubernetes.io/projected/d524ce06-8969-4b68-b236-9e11af55d854-kube-api-access-m85d8\") pod \"certified-operators-gk7zl\" (UID: \"d524ce06-8969-4b68-b236-9e11af55d854\") " pod="openshift-marketplace/certified-operators-gk7zl"
Mar 20 08:36:11.841837 master-0 kubenswrapper[7476]: I0320 08:36:11.841769 7476 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-6f5545c99f-6sl9d"]
Mar 20 08:36:11.844306 master-0 kubenswrapper[7476]: I0320 08:36:11.844102 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-6f5545c99f-6sl9d"
Mar 20 08:36:11.865360 master-0 kubenswrapper[7476]: I0320 08:36:11.861516 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Mar 20 08:36:11.869609 master-0 kubenswrapper[7476]: I0320 08:36:11.869430 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-6f5545c99f-6sl9d"]
Mar 20 08:36:11.914099 master-0 kubenswrapper[7476]: I0320 08:36:11.913713 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gk7zl"
Mar 20 08:36:11.982936 master-0 kubenswrapper[7476]: I0320 08:36:11.982832 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b46449cf-9fccc"]
Mar 20 08:36:11.988790 master-0 kubenswrapper[7476]: I0320 08:36:11.988750 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/0cb6d987-4b59-4fd9-889a-3250c12a726c-tmpfs\") pod \"packageserver-6f5545c99f-6sl9d\" (UID: \"0cb6d987-4b59-4fd9-889a-3250c12a726c\") " pod="openshift-operator-lifecycle-manager/packageserver-6f5545c99f-6sl9d"
Mar 20 08:36:11.988872 master-0 kubenswrapper[7476]: I0320 08:36:11.988795 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0cb6d987-4b59-4fd9-889a-3250c12a726c-webhook-cert\") pod \"packageserver-6f5545c99f-6sl9d\" (UID: \"0cb6d987-4b59-4fd9-889a-3250c12a726c\") " pod="openshift-operator-lifecycle-manager/packageserver-6f5545c99f-6sl9d"
Mar 20 08:36:11.988872 master-0 kubenswrapper[7476]: I0320 08:36:11.988827 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v29ws\" (UniqueName: \"kubernetes.io/projected/0cb6d987-4b59-4fd9-889a-3250c12a726c-kube-api-access-v29ws\") pod \"packageserver-6f5545c99f-6sl9d\" (UID: \"0cb6d987-4b59-4fd9-889a-3250c12a726c\") " pod="openshift-operator-lifecycle-manager/packageserver-6f5545c99f-6sl9d"
Mar 20 08:36:11.988872 master-0 kubenswrapper[7476]: I0320 08:36:11.988847 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0cb6d987-4b59-4fd9-889a-3250c12a726c-apiservice-cert\") pod \"packageserver-6f5545c99f-6sl9d\" (UID: \"0cb6d987-4b59-4fd9-889a-3250c12a726c\") "
pod="openshift-operator-lifecycle-manager/packageserver-6f5545c99f-6sl9d"
Mar 20 08:36:12.090892 master-0 kubenswrapper[7476]: I0320 08:36:12.090277 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/0cb6d987-4b59-4fd9-889a-3250c12a726c-tmpfs\") pod \"packageserver-6f5545c99f-6sl9d\" (UID: \"0cb6d987-4b59-4fd9-889a-3250c12a726c\") " pod="openshift-operator-lifecycle-manager/packageserver-6f5545c99f-6sl9d"
Mar 20 08:36:12.090892 master-0 kubenswrapper[7476]: I0320 08:36:12.090328 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0cb6d987-4b59-4fd9-889a-3250c12a726c-webhook-cert\") pod \"packageserver-6f5545c99f-6sl9d\" (UID: \"0cb6d987-4b59-4fd9-889a-3250c12a726c\") " pod="openshift-operator-lifecycle-manager/packageserver-6f5545c99f-6sl9d"
Mar 20 08:36:12.090892 master-0 kubenswrapper[7476]: I0320 08:36:12.090362 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v29ws\" (UniqueName: \"kubernetes.io/projected/0cb6d987-4b59-4fd9-889a-3250c12a726c-kube-api-access-v29ws\") pod \"packageserver-6f5545c99f-6sl9d\" (UID: \"0cb6d987-4b59-4fd9-889a-3250c12a726c\") " pod="openshift-operator-lifecycle-manager/packageserver-6f5545c99f-6sl9d"
Mar 20 08:36:12.090892 master-0 kubenswrapper[7476]: I0320 08:36:12.090383 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0cb6d987-4b59-4fd9-889a-3250c12a726c-apiservice-cert\") pod \"packageserver-6f5545c99f-6sl9d\" (UID: \"0cb6d987-4b59-4fd9-889a-3250c12a726c\") " pod="openshift-operator-lifecycle-manager/packageserver-6f5545c99f-6sl9d"
Mar 20 08:36:12.095559 master-0 kubenswrapper[7476]: I0320 08:36:12.091581 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/0cb6d987-4b59-4fd9-889a-3250c12a726c-tmpfs\") pod \"packageserver-6f5545c99f-6sl9d\" (UID: \"0cb6d987-4b59-4fd9-889a-3250c12a726c\") " pod="openshift-operator-lifecycle-manager/packageserver-6f5545c99f-6sl9d"
Mar 20 08:36:12.105293 master-0 kubenswrapper[7476]: I0320 08:36:12.104076 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0cb6d987-4b59-4fd9-889a-3250c12a726c-apiservice-cert\") pod \"packageserver-6f5545c99f-6sl9d\" (UID: \"0cb6d987-4b59-4fd9-889a-3250c12a726c\") " pod="openshift-operator-lifecycle-manager/packageserver-6f5545c99f-6sl9d"
Mar 20 08:36:12.109290 master-0 kubenswrapper[7476]: I0320 08:36:12.108933 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v29ws\" (UniqueName: \"kubernetes.io/projected/0cb6d987-4b59-4fd9-889a-3250c12a726c-kube-api-access-v29ws\") pod \"packageserver-6f5545c99f-6sl9d\" (UID: \"0cb6d987-4b59-4fd9-889a-3250c12a726c\") " pod="openshift-operator-lifecycle-manager/packageserver-6f5545c99f-6sl9d"
Mar 20 08:36:12.128334 master-0 kubenswrapper[7476]: I0320 08:36:12.127644 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0cb6d987-4b59-4fd9-889a-3250c12a726c-webhook-cert\") pod \"packageserver-6f5545c99f-6sl9d\" (UID: \"0cb6d987-4b59-4fd9-889a-3250c12a726c\") " pod="openshift-operator-lifecycle-manager/packageserver-6f5545c99f-6sl9d"
Mar 20 08:36:12.158524 master-0 kubenswrapper[7476]: I0320 08:36:12.158479 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8488874649-cdk48"]
Mar 20 08:36:12.226403 master-0 kubenswrapper[7476]: I0320 08:36:12.220733 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-2-master-0_3f797a39-3da6-49d7-8275-672dedbfb3cb/installer/0.log"
Mar 20 08:36:12.226403 master-0 kubenswrapper[7476]: I0320 08:36:12.220795 7476 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0"
Mar 20 08:36:12.245043 master-0 kubenswrapper[7476]: I0320 08:36:12.244987 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-6f5545c99f-6sl9d"
Mar 20 08:36:12.374659 master-0 kubenswrapper[7476]: I0320 08:36:12.374545 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gk7zl"]
Mar 20 08:36:12.407379 master-0 kubenswrapper[7476]: I0320 08:36:12.403511 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3f797a39-3da6-49d7-8275-672dedbfb3cb-var-lock\") pod \"3f797a39-3da6-49d7-8275-672dedbfb3cb\" (UID: \"3f797a39-3da6-49d7-8275-672dedbfb3cb\") "
Mar 20 08:36:12.407379 master-0 kubenswrapper[7476]: I0320 08:36:12.403560 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3f797a39-3da6-49d7-8275-672dedbfb3cb-kube-api-access\") pod \"3f797a39-3da6-49d7-8275-672dedbfb3cb\" (UID: \"3f797a39-3da6-49d7-8275-672dedbfb3cb\") "
Mar 20 08:36:12.407379 master-0 kubenswrapper[7476]: I0320 08:36:12.403630 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3f797a39-3da6-49d7-8275-672dedbfb3cb-kubelet-dir\") pod \"3f797a39-3da6-49d7-8275-672dedbfb3cb\" (UID: \"3f797a39-3da6-49d7-8275-672dedbfb3cb\") "
Mar 20 08:36:12.407379 master-0 kubenswrapper[7476]: I0320 08:36:12.403703 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f797a39-3da6-49d7-8275-672dedbfb3cb-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3f797a39-3da6-49d7-8275-672dedbfb3cb" (UID: "3f797a39-3da6-49d7-8275-672dedbfb3cb"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 20 08:36:12.407379 master-0 kubenswrapper[7476]: I0320 08:36:12.403734 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f797a39-3da6-49d7-8275-672dedbfb3cb-var-lock" (OuterVolumeSpecName: "var-lock") pod "3f797a39-3da6-49d7-8275-672dedbfb3cb" (UID: "3f797a39-3da6-49d7-8275-672dedbfb3cb"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 20 08:36:12.407379 master-0 kubenswrapper[7476]: I0320 08:36:12.404416 7476 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3f797a39-3da6-49d7-8275-672dedbfb3cb-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 20 08:36:12.407379 master-0 kubenswrapper[7476]: I0320 08:36:12.404445 7476 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3f797a39-3da6-49d7-8275-672dedbfb3cb-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 20 08:36:12.416711 master-0 kubenswrapper[7476]: I0320 08:36:12.416661 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f797a39-3da6-49d7-8275-672dedbfb3cb-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3f797a39-3da6-49d7-8275-672dedbfb3cb" (UID: "3f797a39-3da6-49d7-8275-672dedbfb3cb"). InnerVolumeSpecName "kube-api-access".
PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:36:12.486222 master-0 kubenswrapper[7476]: I0320 08:36:12.483514 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-6f5545c99f-6sl9d"]
Mar 20 08:36:12.506343 master-0 kubenswrapper[7476]: I0320 08:36:12.502200 7476 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"]
Mar 20 08:36:12.506343 master-0 kubenswrapper[7476]: E0320 08:36:12.502597 7476 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f797a39-3da6-49d7-8275-672dedbfb3cb" containerName="installer"
Mar 20 08:36:12.506343 master-0 kubenswrapper[7476]: I0320 08:36:12.502612 7476 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f797a39-3da6-49d7-8275-672dedbfb3cb" containerName="installer"
Mar 20 08:36:12.506343 master-0 kubenswrapper[7476]: I0320 08:36:12.502777 7476 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f797a39-3da6-49d7-8275-672dedbfb3cb" containerName="installer"
Mar 20 08:36:12.506343 master-0 kubenswrapper[7476]: I0320 08:36:12.503238 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0"
Mar 20 08:36:12.506343 master-0 kubenswrapper[7476]: W0320 08:36:12.504076 7476 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0cb6d987_4b59_4fd9_889a_3250c12a726c.slice/crio-c36c31fbbcf87c5d54cc8e014278bdb215440e9d5e4a9526984baeadd5fbfa6f WatchSource:0}: Error finding container c36c31fbbcf87c5d54cc8e014278bdb215440e9d5e4a9526984baeadd5fbfa6f: Status 404 returned error can't find the container with id c36c31fbbcf87c5d54cc8e014278bdb215440e9d5e4a9526984baeadd5fbfa6f
Mar 20 08:36:12.506343 master-0 kubenswrapper[7476]: I0320 08:36:12.506316 7476 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3f797a39-3da6-49d7-8275-672dedbfb3cb-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 20 08:36:12.527938 master-0 kubenswrapper[7476]: I0320 08:36:12.523976 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"]
Mar 20 08:36:12.607304 master-0 kubenswrapper[7476]: I0320 08:36:12.607233 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/84b1b51a-cbfa-42de-9fb8-315e9cb76b58-var-lock\") pod \"installer-3-master-0\" (UID: \"84b1b51a-cbfa-42de-9fb8-315e9cb76b58\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 20 08:36:12.607369 master-0 kubenswrapper[7476]: I0320 08:36:12.607305 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/84b1b51a-cbfa-42de-9fb8-315e9cb76b58-kube-api-access\") pod \"installer-3-master-0\" (UID: \"84b1b51a-cbfa-42de-9fb8-315e9cb76b58\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 20 08:36:12.607369 master-0 kubenswrapper[7476]: I0320 08:36:12.607323 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/84b1b51a-cbfa-42de-9fb8-315e9cb76b58-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"84b1b51a-cbfa-42de-9fb8-315e9cb76b58\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 20 08:36:12.710371 master-0 kubenswrapper[7476]: I0320 08:36:12.710160 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/84b1b51a-cbfa-42de-9fb8-315e9cb76b58-var-lock\") pod \"installer-3-master-0\" (UID: \"84b1b51a-cbfa-42de-9fb8-315e9cb76b58\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 20 08:36:12.710371 master-0 kubenswrapper[7476]: I0320 08:36:12.710248 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/84b1b51a-cbfa-42de-9fb8-315e9cb76b58-kube-api-access\") pod \"installer-3-master-0\" (UID: \"84b1b51a-cbfa-42de-9fb8-315e9cb76b58\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 20 08:36:12.711128 master-0 kubenswrapper[7476]: I0320 08:36:12.710432 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/84b1b51a-cbfa-42de-9fb8-315e9cb76b58-var-lock\") pod \"installer-3-master-0\" (UID: \"84b1b51a-cbfa-42de-9fb8-315e9cb76b58\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 20 08:36:12.711128 master-0 kubenswrapper[7476]: I0320 08:36:12.710737 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/84b1b51a-cbfa-42de-9fb8-315e9cb76b58-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"84b1b51a-cbfa-42de-9fb8-315e9cb76b58\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 20 08:36:12.711341 master-0 kubenswrapper[7476]: I0320 08:36:12.711235 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/84b1b51a-cbfa-42de-9fb8-315e9cb76b58-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"84b1b51a-cbfa-42de-9fb8-315e9cb76b58\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 20 08:36:12.740756 master-0 kubenswrapper[7476]: I0320 08:36:12.740002 7476 generic.go:334] "Generic (PLEG): container finished" podID="d524ce06-8969-4b68-b236-9e11af55d854" containerID="2e01a6b82832050496eb17a93069d8e5fc7d5ddaf26b41a017109d4f463cf165" exitCode=0
Mar 20 08:36:12.740756 master-0 kubenswrapper[7476]: I0320 08:36:12.740069 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gk7zl" event={"ID":"d524ce06-8969-4b68-b236-9e11af55d854","Type":"ContainerDied","Data":"2e01a6b82832050496eb17a93069d8e5fc7d5ddaf26b41a017109d4f463cf165"}
Mar 20 08:36:12.740756 master-0 kubenswrapper[7476]: I0320 08:36:12.740095 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gk7zl" event={"ID":"d524ce06-8969-4b68-b236-9e11af55d854","Type":"ContainerStarted","Data":"1985b1a6319773e5ed4a45821d2506fd38d71d7d03306a2e7817a73b2d10bb76"}
Mar 20 08:36:12.740756 master-0 kubenswrapper[7476]: I0320 08:36:12.740385 7476 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lkqww"]
Mar 20 08:36:12.742790 master-0 kubenswrapper[7476]: I0320 08:36:12.742716 7476 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/community-operators-lkqww"
Mar 20 08:36:12.756340 master-0 kubenswrapper[7476]: I0320 08:36:12.747868 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/84b1b51a-cbfa-42de-9fb8-315e9cb76b58-kube-api-access\") pod \"installer-3-master-0\" (UID: \"84b1b51a-cbfa-42de-9fb8-315e9cb76b58\") " pod="openshift-kube-scheduler/installer-3-master-0"
Mar 20 08:36:12.756340 master-0 kubenswrapper[7476]: I0320 08:36:12.751092 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b46449cf-9fccc" event={"ID":"c200f016-3922-4e90-9061-92fd8c3fad2b","Type":"ContainerStarted","Data":"eacf5e052b386f63888a6a9a4f2ed8b8355f388306364efeef7926bdd5d16f5e"}
Mar 20 08:36:12.756340 master-0 kubenswrapper[7476]: I0320 08:36:12.751137 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b46449cf-9fccc" event={"ID":"c200f016-3922-4e90-9061-92fd8c3fad2b","Type":"ContainerStarted","Data":"e4fba2632a8ff841c8486ac8a6e820628bb0ebb1d21ae56e7fae136ec118d2c7"}
Mar 20 08:36:12.756340 master-0 kubenswrapper[7476]: I0320 08:36:12.751154 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-65b46449cf-9fccc"
Mar 20 08:36:12.758080 master-0 kubenswrapper[7476]: I0320 08:36:12.756240 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lkqww"]
Mar 20 08:36:12.758080 master-0 kubenswrapper[7476]: I0320 08:36:12.757743 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-65b46449cf-9fccc"
Mar 20 08:36:12.765081 master-0 kubenswrapper[7476]: I0320 08:36:12.765011 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-2-master-0_3f797a39-3da6-49d7-8275-672dedbfb3cb/installer/0.log"
Mar 20 08:36:12.765158 master-0 kubenswrapper[7476]: I0320 08:36:12.765081 7476 generic.go:334] "Generic (PLEG): container finished" podID="3f797a39-3da6-49d7-8275-672dedbfb3cb" containerID="1f529399ce65ec10992ec94a9166f5bbd3b0c8a2867fd0f3134cbfca100927a9" exitCode=1
Mar 20 08:36:12.765247 master-0 kubenswrapper[7476]: I0320 08:36:12.765210 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"3f797a39-3da6-49d7-8275-672dedbfb3cb","Type":"ContainerDied","Data":"1f529399ce65ec10992ec94a9166f5bbd3b0c8a2867fd0f3134cbfca100927a9"}
Mar 20 08:36:12.765340 master-0 kubenswrapper[7476]: I0320 08:36:12.765283 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"3f797a39-3da6-49d7-8275-672dedbfb3cb","Type":"ContainerDied","Data":"38d2ed2b0903d82b0cee6026db795c5066d5e62fc2a83e9dd29918947289fc1b"}
Mar 20 08:36:12.765340 master-0 kubenswrapper[7476]: I0320 08:36:12.765314 7476 scope.go:117] "RemoveContainer" containerID="1f529399ce65ec10992ec94a9166f5bbd3b0c8a2867fd0f3134cbfca100927a9"
Mar 20 08:36:12.765542 master-0 kubenswrapper[7476]: I0320 08:36:12.765512 7476 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0"
Mar 20 08:36:12.788990 master-0 kubenswrapper[7476]: I0320 08:36:12.788946 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8488874649-cdk48" event={"ID":"f67db558-998e-48e3-9b55-b96029ec000c","Type":"ContainerStarted","Data":"7e142c46726d66a9f4af952931f5f0ca34fe7b5fddc119c7c4f10f57df64fee8"}
Mar 20 08:36:12.789065 master-0 kubenswrapper[7476]: I0320 08:36:12.788997 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8488874649-cdk48" event={"ID":"f67db558-998e-48e3-9b55-b96029ec000c","Type":"ContainerStarted","Data":"9618eb1b1d712759bbe73dc554246ea95720ea5ad03e699ed75c6e4e3e82a275"}
Mar 20 08:36:12.789467 master-0 kubenswrapper[7476]: I0320 08:36:12.789435 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-8488874649-cdk48"
Mar 20 08:36:12.812689 master-0 kubenswrapper[7476]: I0320 08:36:12.812651 7476 scope.go:117] "RemoveContainer" containerID="1f529399ce65ec10992ec94a9166f5bbd3b0c8a2867fd0f3134cbfca100927a9"
Mar 20 08:36:12.814363 master-0 kubenswrapper[7476]: E0320 08:36:12.813313 7476 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f529399ce65ec10992ec94a9166f5bbd3b0c8a2867fd0f3134cbfca100927a9\": container with ID starting with 1f529399ce65ec10992ec94a9166f5bbd3b0c8a2867fd0f3134cbfca100927a9 not found: ID does not exist" containerID="1f529399ce65ec10992ec94a9166f5bbd3b0c8a2867fd0f3134cbfca100927a9"
Mar 20 08:36:12.814363 master-0 kubenswrapper[7476]: I0320 08:36:12.813351 7476 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f529399ce65ec10992ec94a9166f5bbd3b0c8a2867fd0f3134cbfca100927a9"} err="failed to get container status \"1f529399ce65ec10992ec94a9166f5bbd3b0c8a2867fd0f3134cbfca100927a9\": rpc error: code = NotFound desc = could not find container \"1f529399ce65ec10992ec94a9166f5bbd3b0c8a2867fd0f3134cbfca100927a9\": container with ID starting with 1f529399ce65ec10992ec94a9166f5bbd3b0c8a2867fd0f3134cbfca100927a9 not found: ID does not exist"
Mar 20 08:36:12.816985 master-0 kubenswrapper[7476]: I0320 08:36:12.816237 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-6f5545c99f-6sl9d" event={"ID":"0cb6d987-4b59-4fd9-889a-3250c12a726c","Type":"ContainerStarted","Data":"c36c31fbbcf87c5d54cc8e014278bdb215440e9d5e4a9526984baeadd5fbfa6f"}
Mar 20 08:36:12.818557 master-0 kubenswrapper[7476]: I0320 08:36:12.818405 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-6f5545c99f-6sl9d"
Mar 20 08:36:12.818993 master-0 kubenswrapper[7476]: I0320 08:36:12.818952 7476 patch_prober.go:28] interesting pod/packageserver-6f5545c99f-6sl9d container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.128.0.53:5443/healthz\": dial tcp 10.128.0.53:5443: connect: connection refused" start-of-body=
Mar 20 08:36:12.819101 master-0 kubenswrapper[7476]: I0320 08:36:12.819070 7476 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-6f5545c99f-6sl9d" podUID="0cb6d987-4b59-4fd9-889a-3250c12a726c" containerName="packageserver" probeResult="failure" output="Get \"https://10.128.0.53:5443/healthz\": dial tcp 10.128.0.53:5443: connect: connection refused"
Mar 20 08:36:12.869711 master-0 kubenswrapper[7476]: I0320 08:36:12.869651 7476 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0"
Mar 20 08:36:12.902994 master-0 kubenswrapper[7476]: I0320 08:36:12.899548 7476 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-8488874649-cdk48" podStartSLOduration=9.899528884 podStartE2EDuration="9.899528884s" podCreationTimestamp="2026-03-20 08:36:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:36:12.868781028 +0000 UTC m=+53.837549574" watchObservedRunningTime="2026-03-20 08:36:12.899528884 +0000 UTC m=+53.868297410"
Mar 20 08:36:12.925431 master-0 kubenswrapper[7476]: I0320 08:36:12.916009 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b557b11-593d-4886-a9e3-ac4d18f901aa-utilities\") pod \"community-operators-lkqww\" (UID: \"2b557b11-593d-4886-a9e3-ac4d18f901aa\") " pod="openshift-marketplace/community-operators-lkqww"
Mar 20 08:36:12.925431 master-0 kubenswrapper[7476]: I0320 08:36:12.916234 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghf64\" (UniqueName: \"kubernetes.io/projected/2b557b11-593d-4886-a9e3-ac4d18f901aa-kube-api-access-ghf64\") pod \"community-operators-lkqww\" (UID: \"2b557b11-593d-4886-a9e3-ac4d18f901aa\") " pod="openshift-marketplace/community-operators-lkqww"
Mar 20 08:36:12.925431 master-0 kubenswrapper[7476]: I0320 08:36:12.916290 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b557b11-593d-4886-a9e3-ac4d18f901aa-catalog-content\") pod \"community-operators-lkqww\" (UID: \"2b557b11-593d-4886-a9e3-ac4d18f901aa\") " pod="openshift-marketplace/community-operators-lkqww"
Mar 20 08:36:12.935019 master-0 kubenswrapper[7476]: I0320 08:36:12.933096 7476 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-65b46449cf-9fccc" podStartSLOduration=9.933074602 podStartE2EDuration="9.933074602s" podCreationTimestamp="2026-03-20 08:36:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:36:12.897151672 +0000 UTC m=+53.865920198" watchObservedRunningTime="2026-03-20 08:36:12.933074602 +0000 UTC m=+53.901843128"
Mar 20 08:36:12.973428 master-0 kubenswrapper[7476]: I0320 08:36:12.970002 7476 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-6f5545c99f-6sl9d" podStartSLOduration=1.969986087 podStartE2EDuration="1.969986087s" podCreationTimestamp="2026-03-20 08:36:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:36:12.934982592 +0000 UTC m=+53.903751138" watchObservedRunningTime="2026-03-20 08:36:12.969986087 +0000 UTC m=+53.938754603"
Mar 20 08:36:12.978595 master-0 kubenswrapper[7476]: I0320 08:36:12.975078 7476 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"]
Mar 20 08:36:12.979810 master-0 kubenswrapper[7476]: I0320 08:36:12.979756 7476 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"]
Mar 20 08:36:13.021654 master-0 kubenswrapper[7476]: I0320 08:36:13.021083 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghf64\" (UniqueName: \"kubernetes.io/projected/2b557b11-593d-4886-a9e3-ac4d18f901aa-kube-api-access-ghf64\") pod \"community-operators-lkqww\" (UID: \"2b557b11-593d-4886-a9e3-ac4d18f901aa\") " pod="openshift-marketplace/community-operators-lkqww"
Mar 20 08:36:13.021654 master-0 kubenswrapper[7476]: I0320 08:36:13.021161 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b557b11-593d-4886-a9e3-ac4d18f901aa-catalog-content\") pod \"community-operators-lkqww\" (UID: \"2b557b11-593d-4886-a9e3-ac4d18f901aa\") " pod="openshift-marketplace/community-operators-lkqww"
Mar 20 08:36:13.021654 master-0 kubenswrapper[7476]: I0320 08:36:13.021223 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b557b11-593d-4886-a9e3-ac4d18f901aa-utilities\") pod \"community-operators-lkqww\" (UID: \"2b557b11-593d-4886-a9e3-ac4d18f901aa\") " pod="openshift-marketplace/community-operators-lkqww"
Mar 20 08:36:13.022582 master-0 kubenswrapper[7476]: I0320 08:36:13.022525 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b557b11-593d-4886-a9e3-ac4d18f901aa-utilities\") pod \"community-operators-lkqww\" (UID: \"2b557b11-593d-4886-a9e3-ac4d18f901aa\") " pod="openshift-marketplace/community-operators-lkqww"
Mar 20 08:36:13.024350 master-0 kubenswrapper[7476]: I0320 08:36:13.023324 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-8488874649-cdk48"
Mar 20 08:36:13.024350 master-0 kubenswrapper[7476]: I0320 08:36:13.024151 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b557b11-593d-4886-a9e3-ac4d18f901aa-catalog-content\") pod \"community-operators-lkqww\" (UID: \"2b557b11-593d-4886-a9e3-ac4d18f901aa\") " pod="openshift-marketplace/community-operators-lkqww"
Mar 20 08:36:13.050037 master-0 kubenswrapper[7476]: I0320 08:36:13.049992 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghf64\" (UniqueName: \"kubernetes.io/projected/2b557b11-593d-4886-a9e3-ac4d18f901aa-kube-api-access-ghf64\") pod \"community-operators-lkqww\" (UID: \"2b557b11-593d-4886-a9e3-ac4d18f901aa\") " pod="openshift-marketplace/community-operators-lkqww"
Mar 20 08:36:13.071056 master-0 kubenswrapper[7476]: I0320 08:36:13.070974 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lkqww"
Mar 20 08:36:13.284748 master-0 kubenswrapper[7476]: I0320 08:36:13.284588 7476 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f797a39-3da6-49d7-8275-672dedbfb3cb" path="/var/lib/kubelet/pods/3f797a39-3da6-49d7-8275-672dedbfb3cb/volumes"
Mar 20 08:36:13.382516 master-0 kubenswrapper[7476]: I0320 08:36:13.382420 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lkqww"]
Mar 20 08:36:13.389151 master-0 kubenswrapper[7476]: I0320 08:36:13.387168 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"]
Mar 20 08:36:13.397599 master-0 kubenswrapper[7476]: W0320 08:36:13.397525 7476 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod84b1b51a_cbfa_42de_9fb8_315e9cb76b58.slice/crio-84e96bf2ec3bb1718be1185663e6c7f2bf6b412dc2a929eaafb13184c995f8ec WatchSource:0}: Error finding container 84e96bf2ec3bb1718be1185663e6c7f2bf6b412dc2a929eaafb13184c995f8ec: Status 404 returned error can't find the container with id 84e96bf2ec3bb1718be1185663e6c7f2bf6b412dc2a929eaafb13184c995f8ec
Mar 20 08:36:13.404806 master-0 kubenswrapper[7476]: W0320 08:36:13.403023 7476 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b557b11_593d_4886_a9e3_ac4d18f901aa.slice/crio-3a9ea316413890b86eda70cb3bad759bee7fb3d758edade9e33048a017412911 WatchSource:0}: Error finding container
3a9ea316413890b86eda70cb3bad759bee7fb3d758edade9e33048a017412911: Status 404 returned error can't find the container with id 3a9ea316413890b86eda70cb3bad759bee7fb3d758edade9e33048a017412911 Mar 20 08:36:13.822691 master-0 kubenswrapper[7476]: I0320 08:36:13.822646 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lkqww" event={"ID":"2b557b11-593d-4886-a9e3-ac4d18f901aa","Type":"ContainerStarted","Data":"85a3b86ab4f57de1c35c94769c8b9923fb92d6e2e095f2cfa081de97d22d6a77"} Mar 20 08:36:13.822784 master-0 kubenswrapper[7476]: I0320 08:36:13.822697 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lkqww" event={"ID":"2b557b11-593d-4886-a9e3-ac4d18f901aa","Type":"ContainerStarted","Data":"3a9ea316413890b86eda70cb3bad759bee7fb3d758edade9e33048a017412911"} Mar 20 08:36:13.823573 master-0 kubenswrapper[7476]: I0320 08:36:13.823542 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"84b1b51a-cbfa-42de-9fb8-315e9cb76b58","Type":"ContainerStarted","Data":"84e96bf2ec3bb1718be1185663e6c7f2bf6b412dc2a929eaafb13184c995f8ec"} Mar 20 08:36:13.828296 master-0 kubenswrapper[7476]: I0320 08:36:13.827816 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-6f5545c99f-6sl9d" event={"ID":"0cb6d987-4b59-4fd9-889a-3250c12a726c","Type":"ContainerStarted","Data":"3da2656475f5983818e1475566996590539eef1a03ecaa67e3e41912939fad03"} Mar 20 08:36:14.505704 master-0 kubenswrapper[7476]: I0320 08:36:14.505639 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-64b65cddf5-gx7h7" Mar 20 08:36:14.506213 master-0 kubenswrapper[7476]: I0320 08:36:14.505723 7476 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-64b65cddf5-gx7h7" Mar 20 08:36:14.703584 master-0 
kubenswrapper[7476]: I0320 08:36:14.703086 7476 patch_prober.go:28] interesting pod/apiserver-64b65cddf5-gx7h7 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Mar 20 08:36:14.703584 master-0 kubenswrapper[7476]: [+]log ok Mar 20 08:36:14.703584 master-0 kubenswrapper[7476]: [+]etcd ok Mar 20 08:36:14.703584 master-0 kubenswrapper[7476]: [+]poststarthook/start-apiserver-admission-initializer ok Mar 20 08:36:14.703584 master-0 kubenswrapper[7476]: [+]poststarthook/generic-apiserver-start-informers ok Mar 20 08:36:14.703584 master-0 kubenswrapper[7476]: [+]poststarthook/max-in-flight-filter ok Mar 20 08:36:14.703584 master-0 kubenswrapper[7476]: [+]poststarthook/storage-object-count-tracker-hook ok Mar 20 08:36:14.703584 master-0 kubenswrapper[7476]: [+]poststarthook/image.openshift.io-apiserver-caches ok Mar 20 08:36:14.703584 master-0 kubenswrapper[7476]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Mar 20 08:36:14.703584 master-0 kubenswrapper[7476]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Mar 20 08:36:14.703584 master-0 kubenswrapper[7476]: [+]poststarthook/project.openshift.io-projectcache ok Mar 20 08:36:14.703584 master-0 kubenswrapper[7476]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Mar 20 08:36:14.703584 master-0 kubenswrapper[7476]: [+]poststarthook/openshift.io-startinformers ok Mar 20 08:36:14.703584 master-0 kubenswrapper[7476]: [+]poststarthook/openshift.io-restmapperupdater ok Mar 20 08:36:14.703584 master-0 kubenswrapper[7476]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Mar 20 08:36:14.703584 master-0 kubenswrapper[7476]: livez check failed Mar 20 08:36:14.703584 master-0 kubenswrapper[7476]: I0320 08:36:14.703158 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-64b65cddf5-gx7h7" 
podUID="ca56e37d-80ea-432b-a6d9-f4e904a40e10" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:36:14.827868 master-0 kubenswrapper[7476]: I0320 08:36:14.827743 7476 patch_prober.go:28] interesting pod/packageserver-6f5545c99f-6sl9d container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.128.0.53:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 20 08:36:14.827868 master-0 kubenswrapper[7476]: I0320 08:36:14.827852 7476 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-6f5545c99f-6sl9d" podUID="0cb6d987-4b59-4fd9-889a-3250c12a726c" containerName="packageserver" probeResult="failure" output="Get \"https://10.128.0.53:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 20 08:36:14.840548 master-0 kubenswrapper[7476]: I0320 08:36:14.840488 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"84b1b51a-cbfa-42de-9fb8-315e9cb76b58","Type":"ContainerStarted","Data":"9195f1dfc14cd53890895128ba6b2082162a13670d2ec403d7a28c0918592666"} Mar 20 08:36:14.843634 master-0 kubenswrapper[7476]: I0320 08:36:14.843599 7476 generic.go:334] "Generic (PLEG): container finished" podID="2b557b11-593d-4886-a9e3-ac4d18f901aa" containerID="85a3b86ab4f57de1c35c94769c8b9923fb92d6e2e095f2cfa081de97d22d6a77" exitCode=0 Mar 20 08:36:14.844522 master-0 kubenswrapper[7476]: I0320 08:36:14.844480 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lkqww" event={"ID":"2b557b11-593d-4886-a9e3-ac4d18f901aa","Type":"ContainerDied","Data":"85a3b86ab4f57de1c35c94769c8b9923fb92d6e2e095f2cfa081de97d22d6a77"} Mar 20 08:36:15.845055 master-0 
kubenswrapper[7476]: I0320 08:36:15.845010 7476 patch_prober.go:28] interesting pod/packageserver-6f5545c99f-6sl9d container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.128.0.53:5443/healthz\": context deadline exceeded" start-of-body= Mar 20 08:36:15.845610 master-0 kubenswrapper[7476]: I0320 08:36:15.845066 7476 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-6f5545c99f-6sl9d" podUID="0cb6d987-4b59-4fd9-889a-3250c12a726c" containerName="packageserver" probeResult="failure" output="Get \"https://10.128.0.53:5443/healthz\": context deadline exceeded" Mar 20 08:36:16.154715 master-0 kubenswrapper[7476]: I0320 08:36:16.153465 7476 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-3-master-0" podStartSLOduration=4.153451699 podStartE2EDuration="4.153451699s" podCreationTimestamp="2026-03-20 08:36:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:36:16.153036867 +0000 UTC m=+57.121805393" watchObservedRunningTime="2026-03-20 08:36:16.153451699 +0000 UTC m=+57.122220225" Mar 20 08:36:16.542860 master-0 kubenswrapper[7476]: I0320 08:36:16.542808 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-6f5545c99f-6sl9d" Mar 20 08:36:17.329345 master-0 kubenswrapper[7476]: I0320 08:36:17.328552 7476 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Mar 20 08:36:17.329345 master-0 kubenswrapper[7476]: I0320 08:36:17.328897 7476 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/installer-1-master-0" podUID="5aaa6cb1-ce0d-4cd5-a6dc-b27e9f6b13ad" containerName="installer" 
containerID="cri-o://2c88b29936632c7e1a12043219b0ccca076956b24225835db93815fb233d613d" gracePeriod=30 Mar 20 08:36:20.509478 master-0 kubenswrapper[7476]: I0320 08:36:20.508857 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-gskz6" Mar 20 08:36:21.503301 master-0 kubenswrapper[7476]: I0320 08:36:21.501395 7476 patch_prober.go:28] interesting pod/apiserver-64b65cddf5-gx7h7 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Mar 20 08:36:21.503301 master-0 kubenswrapper[7476]: [+]log ok Mar 20 08:36:21.503301 master-0 kubenswrapper[7476]: [+]etcd ok Mar 20 08:36:21.503301 master-0 kubenswrapper[7476]: [+]poststarthook/start-apiserver-admission-initializer ok Mar 20 08:36:21.503301 master-0 kubenswrapper[7476]: [+]poststarthook/generic-apiserver-start-informers ok Mar 20 08:36:21.503301 master-0 kubenswrapper[7476]: [+]poststarthook/max-in-flight-filter ok Mar 20 08:36:21.503301 master-0 kubenswrapper[7476]: [+]poststarthook/storage-object-count-tracker-hook ok Mar 20 08:36:21.503301 master-0 kubenswrapper[7476]: [+]poststarthook/image.openshift.io-apiserver-caches ok Mar 20 08:36:21.503301 master-0 kubenswrapper[7476]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Mar 20 08:36:21.503301 master-0 kubenswrapper[7476]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Mar 20 08:36:21.503301 master-0 kubenswrapper[7476]: [+]poststarthook/project.openshift.io-projectcache ok Mar 20 08:36:21.503301 master-0 kubenswrapper[7476]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Mar 20 08:36:21.503301 master-0 kubenswrapper[7476]: [+]poststarthook/openshift.io-startinformers ok Mar 20 08:36:21.503301 master-0 kubenswrapper[7476]: [+]poststarthook/openshift.io-restmapperupdater ok Mar 20 08:36:21.503301 master-0 
kubenswrapper[7476]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Mar 20 08:36:21.503301 master-0 kubenswrapper[7476]: livez check failed Mar 20 08:36:21.503301 master-0 kubenswrapper[7476]: I0320 08:36:21.501486 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-64b65cddf5-gx7h7" podUID="ca56e37d-80ea-432b-a6d9-f4e904a40e10" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:36:21.538302 master-0 kubenswrapper[7476]: I0320 08:36:21.534698 7476 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"] Mar 20 08:36:21.538302 master-0 kubenswrapper[7476]: I0320 08:36:21.535460 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 20 08:36:21.575359 master-0 kubenswrapper[7476]: I0320 08:36:21.573027 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"] Mar 20 08:36:21.625297 master-0 kubenswrapper[7476]: I0320 08:36:21.622900 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3ea52b89-46f9-4685-aecd-162ba92baaf5-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"3ea52b89-46f9-4685-aecd-162ba92baaf5\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 20 08:36:21.625297 master-0 kubenswrapper[7476]: I0320 08:36:21.622953 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3ea52b89-46f9-4685-aecd-162ba92baaf5-kube-api-access\") pod \"installer-2-master-0\" (UID: \"3ea52b89-46f9-4685-aecd-162ba92baaf5\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 20 08:36:21.625297 master-0 kubenswrapper[7476]: I0320 
08:36:21.623006 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3ea52b89-46f9-4685-aecd-162ba92baaf5-var-lock\") pod \"installer-2-master-0\" (UID: \"3ea52b89-46f9-4685-aecd-162ba92baaf5\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 20 08:36:21.726384 master-0 kubenswrapper[7476]: I0320 08:36:21.726055 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3ea52b89-46f9-4685-aecd-162ba92baaf5-var-lock\") pod \"installer-2-master-0\" (UID: \"3ea52b89-46f9-4685-aecd-162ba92baaf5\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 20 08:36:21.726384 master-0 kubenswrapper[7476]: I0320 08:36:21.726152 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3ea52b89-46f9-4685-aecd-162ba92baaf5-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"3ea52b89-46f9-4685-aecd-162ba92baaf5\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 20 08:36:21.726384 master-0 kubenswrapper[7476]: I0320 08:36:21.726186 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3ea52b89-46f9-4685-aecd-162ba92baaf5-kube-api-access\") pod \"installer-2-master-0\" (UID: \"3ea52b89-46f9-4685-aecd-162ba92baaf5\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 20 08:36:21.726663 master-0 kubenswrapper[7476]: I0320 08:36:21.726463 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3ea52b89-46f9-4685-aecd-162ba92baaf5-var-lock\") pod \"installer-2-master-0\" (UID: \"3ea52b89-46f9-4685-aecd-162ba92baaf5\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 20 08:36:21.726663 master-0 
kubenswrapper[7476]: I0320 08:36:21.726513 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3ea52b89-46f9-4685-aecd-162ba92baaf5-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"3ea52b89-46f9-4685-aecd-162ba92baaf5\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 20 08:36:21.759879 master-0 kubenswrapper[7476]: I0320 08:36:21.759718 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3ea52b89-46f9-4685-aecd-162ba92baaf5-kube-api-access\") pod \"installer-2-master-0\" (UID: \"3ea52b89-46f9-4685-aecd-162ba92baaf5\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 20 08:36:21.945400 master-0 kubenswrapper[7476]: I0320 08:36:21.945338 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 20 08:36:22.506479 master-0 kubenswrapper[7476]: I0320 08:36:22.505753 7476 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-etcd/etcd-master-0-master-0"] Mar 20 08:36:22.506702 master-0 kubenswrapper[7476]: I0320 08:36:22.506615 7476 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0-master-0" podUID="d664a6d0d2a24360dee10612610f1b59" containerName="etcdctl" containerID="cri-o://d7f4830141ed7d49d20e31769c038ca8340ad71b0bddea39298dca3d6416b345" gracePeriod=30 Mar 20 08:36:22.506774 master-0 kubenswrapper[7476]: I0320 08:36:22.506753 7476 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0-master-0" podUID="d664a6d0d2a24360dee10612610f1b59" containerName="etcd" containerID="cri-o://bf19448fe2db422f2021f6a9801b4117923acb1b2003982f366081b4de585441" gracePeriod=30 Mar 20 08:36:22.509300 master-0 kubenswrapper[7476]: I0320 08:36:22.509232 7476 kubelet.go:2421] "SyncLoop ADD" 
source="file" pods=["openshift-etcd/etcd-master-0"] Mar 20 08:36:22.509540 master-0 kubenswrapper[7476]: E0320 08:36:22.509511 7476 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d664a6d0d2a24360dee10612610f1b59" containerName="etcd" Mar 20 08:36:22.509540 master-0 kubenswrapper[7476]: I0320 08:36:22.509535 7476 state_mem.go:107] "Deleted CPUSet assignment" podUID="d664a6d0d2a24360dee10612610f1b59" containerName="etcd" Mar 20 08:36:22.509623 master-0 kubenswrapper[7476]: E0320 08:36:22.509564 7476 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d664a6d0d2a24360dee10612610f1b59" containerName="etcdctl" Mar 20 08:36:22.509623 master-0 kubenswrapper[7476]: I0320 08:36:22.509573 7476 state_mem.go:107] "Deleted CPUSet assignment" podUID="d664a6d0d2a24360dee10612610f1b59" containerName="etcdctl" Mar 20 08:36:22.509695 master-0 kubenswrapper[7476]: I0320 08:36:22.509664 7476 memory_manager.go:354] "RemoveStaleState removing state" podUID="d664a6d0d2a24360dee10612610f1b59" containerName="etcdctl" Mar 20 08:36:22.509695 master-0 kubenswrapper[7476]: I0320 08:36:22.509684 7476 memory_manager.go:354] "RemoveStaleState removing state" podUID="d664a6d0d2a24360dee10612610f1b59" containerName="etcd" Mar 20 08:36:22.512955 master-0 kubenswrapper[7476]: I0320 08:36:22.512231 7476 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0" Mar 20 08:36:22.560866 master-0 kubenswrapper[7476]: I0320 08:36:22.560790 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-static-pod-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 20 08:36:22.560866 master-0 kubenswrapper[7476]: I0320 08:36:22.560862 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-resource-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 20 08:36:22.561438 master-0 kubenswrapper[7476]: I0320 08:36:22.560885 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-cert-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 20 08:36:22.561438 master-0 kubenswrapper[7476]: I0320 08:36:22.560900 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-log-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 20 08:36:22.561438 master-0 kubenswrapper[7476]: I0320 08:36:22.560923 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-data-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 20 08:36:22.561438 master-0 
kubenswrapper[7476]: I0320 08:36:22.560958 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-usr-local-bin\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 20 08:36:22.662670 master-0 kubenswrapper[7476]: I0320 08:36:22.662412 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-usr-local-bin\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 20 08:36:22.662670 master-0 kubenswrapper[7476]: I0320 08:36:22.662462 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-static-pod-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 20 08:36:22.662670 master-0 kubenswrapper[7476]: I0320 08:36:22.662568 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-usr-local-bin\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 20 08:36:22.662949 master-0 kubenswrapper[7476]: I0320 08:36:22.662681 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-resource-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 20 08:36:22.662949 master-0 kubenswrapper[7476]: I0320 08:36:22.662725 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: 
\"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-static-pod-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 20 08:36:22.662949 master-0 kubenswrapper[7476]: I0320 08:36:22.662787 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-cert-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 20 08:36:22.662949 master-0 kubenswrapper[7476]: I0320 08:36:22.662794 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-resource-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 20 08:36:22.662949 master-0 kubenswrapper[7476]: I0320 08:36:22.662815 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-log-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 20 08:36:22.662949 master-0 kubenswrapper[7476]: I0320 08:36:22.662835 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-log-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 20 08:36:22.662949 master-0 kubenswrapper[7476]: I0320 08:36:22.662889 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-cert-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 20 08:36:22.662949 master-0 kubenswrapper[7476]: I0320 
08:36:22.662909 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-data-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 20 08:36:22.662949 master-0 kubenswrapper[7476]: I0320 08:36:22.662944 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-data-dir\") pod \"etcd-master-0\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " pod="openshift-etcd/etcd-master-0" Mar 20 08:36:22.994628 master-0 kubenswrapper[7476]: I0320 08:36:22.994578 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-tf2gj" Mar 20 08:36:23.505521 master-0 kubenswrapper[7476]: I0320 08:36:23.505458 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-rnnfz" Mar 20 08:36:24.510651 master-0 kubenswrapper[7476]: I0320 08:36:24.510605 7476 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-64b65cddf5-gx7h7" Mar 20 08:36:24.514686 master-0 kubenswrapper[7476]: I0320 08:36:24.514652 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-64b65cddf5-gx7h7" Mar 20 08:36:26.000748 master-0 kubenswrapper[7476]: I0320 08:36:26.000698 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-1-master-0_c80fea3f-9ac4-4060-bb90-19f9de724299/installer/0.log" Mar 20 08:36:26.001228 master-0 kubenswrapper[7476]: I0320 08:36:26.000789 7476 generic.go:334] "Generic (PLEG): container finished" podID="c80fea3f-9ac4-4060-bb90-19f9de724299" 
containerID="17798884b9a2e50bed959f2a24ce2f0fe9b1568f4973ec8270b3b7b5e3bb3b54" exitCode=1 Mar 20 08:36:26.001228 master-0 kubenswrapper[7476]: I0320 08:36:26.000839 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"c80fea3f-9ac4-4060-bb90-19f9de724299","Type":"ContainerDied","Data":"17798884b9a2e50bed959f2a24ce2f0fe9b1568f4973ec8270b3b7b5e3bb3b54"} Mar 20 08:36:34.692298 master-0 kubenswrapper[7476]: I0320 08:36:34.692169 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-1-master-0_c80fea3f-9ac4-4060-bb90-19f9de724299/installer/0.log" Mar 20 08:36:34.693316 master-0 kubenswrapper[7476]: I0320 08:36:34.692330 7476 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0" Mar 20 08:36:34.865697 master-0 kubenswrapper[7476]: I0320 08:36:34.865609 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c80fea3f-9ac4-4060-bb90-19f9de724299-kube-api-access\") pod \"c80fea3f-9ac4-4060-bb90-19f9de724299\" (UID: \"c80fea3f-9ac4-4060-bb90-19f9de724299\") " Mar 20 08:36:34.866016 master-0 kubenswrapper[7476]: I0320 08:36:34.865859 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c80fea3f-9ac4-4060-bb90-19f9de724299-var-lock\") pod \"c80fea3f-9ac4-4060-bb90-19f9de724299\" (UID: \"c80fea3f-9ac4-4060-bb90-19f9de724299\") " Mar 20 08:36:34.866016 master-0 kubenswrapper[7476]: I0320 08:36:34.865968 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c80fea3f-9ac4-4060-bb90-19f9de724299-kubelet-dir\") pod \"c80fea3f-9ac4-4060-bb90-19f9de724299\" (UID: \"c80fea3f-9ac4-4060-bb90-19f9de724299\") " Mar 20 08:36:34.866415 master-0 
kubenswrapper[7476]: I0320 08:36:34.866349 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c80fea3f-9ac4-4060-bb90-19f9de724299-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "c80fea3f-9ac4-4060-bb90-19f9de724299" (UID: "c80fea3f-9ac4-4060-bb90-19f9de724299"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:36:34.866521 master-0 kubenswrapper[7476]: I0320 08:36:34.866433 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c80fea3f-9ac4-4060-bb90-19f9de724299-var-lock" (OuterVolumeSpecName: "var-lock") pod "c80fea3f-9ac4-4060-bb90-19f9de724299" (UID: "c80fea3f-9ac4-4060-bb90-19f9de724299"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:36:34.871699 master-0 kubenswrapper[7476]: I0320 08:36:34.871640 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c80fea3f-9ac4-4060-bb90-19f9de724299-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c80fea3f-9ac4-4060-bb90-19f9de724299" (UID: "c80fea3f-9ac4-4060-bb90-19f9de724299"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:36:34.967900 master-0 kubenswrapper[7476]: I0320 08:36:34.967786 7476 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c80fea3f-9ac4-4060-bb90-19f9de724299-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 20 08:36:34.967900 master-0 kubenswrapper[7476]: I0320 08:36:34.967856 7476 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c80fea3f-9ac4-4060-bb90-19f9de724299-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 20 08:36:34.967900 master-0 kubenswrapper[7476]: I0320 08:36:34.967873 7476 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/c80fea3f-9ac4-4060-bb90-19f9de724299-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 20 08:36:35.051312 master-0 kubenswrapper[7476]: I0320 08:36:35.051230 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-1-master-0_c80fea3f-9ac4-4060-bb90-19f9de724299/installer/0.log" Mar 20 08:36:35.051625 master-0 kubenswrapper[7476]: I0320 08:36:35.051348 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"c80fea3f-9ac4-4060-bb90-19f9de724299","Type":"ContainerDied","Data":"3ea9cee0c994dbe6e138849417de7bc9a547b0875e9ae330a4c86ebfb3de2653"} Mar 20 08:36:35.051625 master-0 kubenswrapper[7476]: I0320 08:36:35.051413 7476 scope.go:117] "RemoveContainer" containerID="17798884b9a2e50bed959f2a24ce2f0fe9b1568f4973ec8270b3b7b5e3bb3b54" Mar 20 08:36:35.051625 master-0 kubenswrapper[7476]: I0320 08:36:35.051417 7476 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0" Mar 20 08:36:35.550662 master-0 kubenswrapper[7476]: E0320 08:36:35.550593 7476 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Mar 20 08:36:35.551145 master-0 kubenswrapper[7476]: I0320 08:36:35.551106 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0" Mar 20 08:36:35.571492 master-0 kubenswrapper[7476]: W0320 08:36:35.571431 7476 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24b4ed170d527099878cb5fdd508a2fb.slice/crio-c4aea7bfd79b0bd3a662b205c3f25d0a35586dbdfc013c5a727cda3643c9f9a7 WatchSource:0}: Error finding container c4aea7bfd79b0bd3a662b205c3f25d0a35586dbdfc013c5a727cda3643c9f9a7: Status 404 returned error can't find the container with id c4aea7bfd79b0bd3a662b205c3f25d0a35586dbdfc013c5a727cda3643c9f9a7 Mar 20 08:36:36.058304 master-0 kubenswrapper[7476]: I0320 08:36:36.058170 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"24b4ed170d527099878cb5fdd508a2fb","Type":"ContainerStarted","Data":"c4aea7bfd79b0bd3a662b205c3f25d0a35586dbdfc013c5a727cda3643c9f9a7"} Mar 20 08:36:37.070848 master-0 kubenswrapper[7476]: I0320 08:36:37.070771 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xn4s4" event={"ID":"dd53f6c4-da30-4996-8b62-7dd1cd3a3e83","Type":"ContainerStarted","Data":"a8f65c5944156fe3d57c8a40bff0f27e03b981c865990c024c56cd98be9080be"} Mar 20 08:36:37.074283 master-0 kubenswrapper[7476]: I0320 08:36:37.074215 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gk7zl" 
event={"ID":"d524ce06-8969-4b68-b236-9e11af55d854","Type":"ContainerStarted","Data":"8696bc65930127b31620daafef27c3323c7b4f9c0c33f88f46bc857900b46eed"} Mar 20 08:36:37.077983 master-0 kubenswrapper[7476]: I0320 08:36:37.077868 7476 generic.go:334] "Generic (PLEG): container finished" podID="46f265536aba6292ead501bc9b49f327" containerID="9ab09f622201f872e04a1e8a769261f4a46a4d60637dffa9e2a3458905508cd2" exitCode=1 Mar 20 08:36:37.078077 master-0 kubenswrapper[7476]: I0320 08:36:37.077983 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerDied","Data":"9ab09f622201f872e04a1e8a769261f4a46a4d60637dffa9e2a3458905508cd2"} Mar 20 08:36:37.078937 master-0 kubenswrapper[7476]: I0320 08:36:37.078881 7476 scope.go:117] "RemoveContainer" containerID="9ab09f622201f872e04a1e8a769261f4a46a4d60637dffa9e2a3458905508cd2" Mar 20 08:36:38.089612 master-0 kubenswrapper[7476]: I0320 08:36:38.089451 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"24b4ed170d527099878cb5fdd508a2fb","Type":"ContainerStarted","Data":"4d3e3631671c7e4cfe196fcf50b353e4e7c3754bbfee55dfea772836be73bf30"} Mar 20 08:36:38.092382 master-0 kubenswrapper[7476]: I0320 08:36:38.092259 7476 generic.go:334] "Generic (PLEG): container finished" podID="169353ee-c927-4483-8976-b9ca08b0a6d1" containerID="c35a5738f2f9a6fb340b75e09b70d5c9961a967d646e1417a2634fd74ebeb167" exitCode=0 Mar 20 08:36:38.092525 master-0 kubenswrapper[7476]: I0320 08:36:38.092460 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"169353ee-c927-4483-8976-b9ca08b0a6d1","Type":"ContainerDied","Data":"c35a5738f2f9a6fb340b75e09b70d5c9961a967d646e1417a2634fd74ebeb167"} Mar 20 08:36:38.095286 master-0 kubenswrapper[7476]: I0320 08:36:38.095210 7476 generic.go:334] "Generic (PLEG): container finished" 
podID="d524ce06-8969-4b68-b236-9e11af55d854" containerID="8696bc65930127b31620daafef27c3323c7b4f9c0c33f88f46bc857900b46eed" exitCode=0 Mar 20 08:36:38.095428 master-0 kubenswrapper[7476]: I0320 08:36:38.095296 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gk7zl" event={"ID":"d524ce06-8969-4b68-b236-9e11af55d854","Type":"ContainerDied","Data":"8696bc65930127b31620daafef27c3323c7b4f9c0c33f88f46bc857900b46eed"} Mar 20 08:36:38.751294 master-0 kubenswrapper[7476]: I0320 08:36:38.751209 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 20 08:36:39.103164 master-0 kubenswrapper[7476]: I0320 08:36:39.103108 7476 generic.go:334] "Generic (PLEG): container finished" podID="dd53f6c4-da30-4996-8b62-7dd1cd3a3e83" containerID="a8f65c5944156fe3d57c8a40bff0f27e03b981c865990c024c56cd98be9080be" exitCode=0 Mar 20 08:36:39.104091 master-0 kubenswrapper[7476]: I0320 08:36:39.103182 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xn4s4" event={"ID":"dd53f6c4-da30-4996-8b62-7dd1cd3a3e83","Type":"ContainerDied","Data":"a8f65c5944156fe3d57c8a40bff0f27e03b981c865990c024c56cd98be9080be"} Mar 20 08:36:39.106095 master-0 kubenswrapper[7476]: I0320 08:36:39.106042 7476 generic.go:334] "Generic (PLEG): container finished" podID="24b4ed170d527099878cb5fdd508a2fb" containerID="4d3e3631671c7e4cfe196fcf50b353e4e7c3754bbfee55dfea772836be73bf30" exitCode=0 Mar 20 08:36:39.106356 master-0 kubenswrapper[7476]: I0320 08:36:39.106120 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"24b4ed170d527099878cb5fdd508a2fb","Type":"ContainerDied","Data":"4d3e3631671c7e4cfe196fcf50b353e4e7c3754bbfee55dfea772836be73bf30"} Mar 20 08:36:39.760190 master-0 kubenswrapper[7476]: I0320 08:36:39.760037 7476 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 20 08:36:40.115983 master-0 kubenswrapper[7476]: I0320 08:36:40.115922 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"cecb02a386370afc24432a3402093f12ea9e53ed8df8b02259c918fbec5ca271"} Mar 20 08:36:41.860559 master-0 kubenswrapper[7476]: E0320 08:36:41.860416 7476 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 20 08:36:41.950726 master-0 kubenswrapper[7476]: I0320 08:36:41.950628 7476 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 20 08:36:42.285954 master-0 kubenswrapper[7476]: I0320 08:36:42.285776 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-cgc9q" Mar 20 08:36:44.155868 master-0 kubenswrapper[7476]: I0320 08:36:44.155789 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_5aaa6cb1-ce0d-4cd5-a6dc-b27e9f6b13ad/installer/0.log" Mar 20 08:36:44.156773 master-0 kubenswrapper[7476]: I0320 08:36:44.155879 7476 generic.go:334] "Generic (PLEG): container finished" podID="5aaa6cb1-ce0d-4cd5-a6dc-b27e9f6b13ad" containerID="2c88b29936632c7e1a12043219b0ccca076956b24225835db93815fb233d613d" exitCode=1 Mar 20 08:36:44.156773 master-0 kubenswrapper[7476]: I0320 08:36:44.155933 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" 
event={"ID":"5aaa6cb1-ce0d-4cd5-a6dc-b27e9f6b13ad","Type":"ContainerDied","Data":"2c88b29936632c7e1a12043219b0ccca076956b24225835db93815fb233d613d"} Mar 20 08:36:44.952066 master-0 kubenswrapper[7476]: I0320 08:36:44.951885 7476 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 20 08:36:45.951875 master-0 kubenswrapper[7476]: I0320 08:36:45.951831 7476 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0" Mar 20 08:36:45.958454 master-0 kubenswrapper[7476]: I0320 08:36:45.958417 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_5aaa6cb1-ce0d-4cd5-a6dc-b27e9f6b13ad/installer/0.log" Mar 20 08:36:45.958736 master-0 kubenswrapper[7476]: I0320 08:36:45.958710 7476 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Mar 20 08:36:45.976088 master-0 kubenswrapper[7476]: I0320 08:36:45.976046 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5aaa6cb1-ce0d-4cd5-a6dc-b27e9f6b13ad-kubelet-dir\") pod \"5aaa6cb1-ce0d-4cd5-a6dc-b27e9f6b13ad\" (UID: \"5aaa6cb1-ce0d-4cd5-a6dc-b27e9f6b13ad\") " Mar 20 08:36:45.976216 master-0 kubenswrapper[7476]: I0320 08:36:45.976095 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/169353ee-c927-4483-8976-b9ca08b0a6d1-kubelet-dir\") pod \"169353ee-c927-4483-8976-b9ca08b0a6d1\" (UID: \"169353ee-c927-4483-8976-b9ca08b0a6d1\") " Mar 20 08:36:45.976216 master-0 kubenswrapper[7476]: I0320 08:36:45.976154 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/169353ee-c927-4483-8976-b9ca08b0a6d1-kube-api-access\") pod \"169353ee-c927-4483-8976-b9ca08b0a6d1\" (UID: \"169353ee-c927-4483-8976-b9ca08b0a6d1\") " Mar 20 08:36:45.976216 master-0 kubenswrapper[7476]: I0320 08:36:45.976203 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5aaa6cb1-ce0d-4cd5-a6dc-b27e9f6b13ad-var-lock\") pod \"5aaa6cb1-ce0d-4cd5-a6dc-b27e9f6b13ad\" (UID: \"5aaa6cb1-ce0d-4cd5-a6dc-b27e9f6b13ad\") " Mar 20 08:36:45.976493 master-0 kubenswrapper[7476]: I0320 08:36:45.976204 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5aaa6cb1-ce0d-4cd5-a6dc-b27e9f6b13ad-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5aaa6cb1-ce0d-4cd5-a6dc-b27e9f6b13ad" (UID: "5aaa6cb1-ce0d-4cd5-a6dc-b27e9f6b13ad"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:36:45.976493 master-0 kubenswrapper[7476]: I0320 08:36:45.976247 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/169353ee-c927-4483-8976-b9ca08b0a6d1-var-lock\") pod \"169353ee-c927-4483-8976-b9ca08b0a6d1\" (UID: \"169353ee-c927-4483-8976-b9ca08b0a6d1\") " Mar 20 08:36:45.976493 master-0 kubenswrapper[7476]: I0320 08:36:45.976313 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5aaa6cb1-ce0d-4cd5-a6dc-b27e9f6b13ad-var-lock" (OuterVolumeSpecName: "var-lock") pod "5aaa6cb1-ce0d-4cd5-a6dc-b27e9f6b13ad" (UID: "5aaa6cb1-ce0d-4cd5-a6dc-b27e9f6b13ad"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:36:45.976493 master-0 kubenswrapper[7476]: I0320 08:36:45.976382 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5aaa6cb1-ce0d-4cd5-a6dc-b27e9f6b13ad-kube-api-access\") pod \"5aaa6cb1-ce0d-4cd5-a6dc-b27e9f6b13ad\" (UID: \"5aaa6cb1-ce0d-4cd5-a6dc-b27e9f6b13ad\") " Mar 20 08:36:45.976493 master-0 kubenswrapper[7476]: I0320 08:36:45.976465 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/169353ee-c927-4483-8976-b9ca08b0a6d1-var-lock" (OuterVolumeSpecName: "var-lock") pod "169353ee-c927-4483-8976-b9ca08b0a6d1" (UID: "169353ee-c927-4483-8976-b9ca08b0a6d1"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:36:45.976877 master-0 kubenswrapper[7476]: I0320 08:36:45.976815 7476 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5aaa6cb1-ce0d-4cd5-a6dc-b27e9f6b13ad-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 20 08:36:45.976877 master-0 kubenswrapper[7476]: I0320 08:36:45.976871 7476 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5aaa6cb1-ce0d-4cd5-a6dc-b27e9f6b13ad-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 20 08:36:45.977189 master-0 kubenswrapper[7476]: I0320 08:36:45.976892 7476 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/169353ee-c927-4483-8976-b9ca08b0a6d1-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 20 08:36:45.977189 master-0 kubenswrapper[7476]: I0320 08:36:45.976346 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/169353ee-c927-4483-8976-b9ca08b0a6d1-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "169353ee-c927-4483-8976-b9ca08b0a6d1" (UID: "169353ee-c927-4483-8976-b9ca08b0a6d1"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:36:45.981855 master-0 kubenswrapper[7476]: I0320 08:36:45.981778 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/169353ee-c927-4483-8976-b9ca08b0a6d1-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "169353ee-c927-4483-8976-b9ca08b0a6d1" (UID: "169353ee-c927-4483-8976-b9ca08b0a6d1"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:36:45.981982 master-0 kubenswrapper[7476]: I0320 08:36:45.981869 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5aaa6cb1-ce0d-4cd5-a6dc-b27e9f6b13ad-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5aaa6cb1-ce0d-4cd5-a6dc-b27e9f6b13ad" (UID: "5aaa6cb1-ce0d-4cd5-a6dc-b27e9f6b13ad"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:36:46.078020 master-0 kubenswrapper[7476]: I0320 08:36:46.077950 7476 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/169353ee-c927-4483-8976-b9ca08b0a6d1-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 20 08:36:46.078020 master-0 kubenswrapper[7476]: I0320 08:36:46.077999 7476 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/169353ee-c927-4483-8976-b9ca08b0a6d1-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 20 08:36:46.078020 master-0 kubenswrapper[7476]: I0320 08:36:46.078020 7476 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5aaa6cb1-ce0d-4cd5-a6dc-b27e9f6b13ad-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 20 08:36:46.174902 master-0 kubenswrapper[7476]: I0320 08:36:46.174709 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_5aaa6cb1-ce0d-4cd5-a6dc-b27e9f6b13ad/installer/0.log" Mar 20 08:36:46.174902 master-0 kubenswrapper[7476]: I0320 08:36:46.174797 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"5aaa6cb1-ce0d-4cd5-a6dc-b27e9f6b13ad","Type":"ContainerDied","Data":"b0eff1df152bda1c9199f1965c6d884a1ca8857c9ac2c86f41d8e2066ebd225a"} Mar 20 08:36:46.174902 master-0 
kubenswrapper[7476]: I0320 08:36:46.174842 7476 scope.go:117] "RemoveContainer" containerID="2c88b29936632c7e1a12043219b0ccca076956b24225835db93815fb233d613d" Mar 20 08:36:46.174902 master-0 kubenswrapper[7476]: I0320 08:36:46.174886 7476 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Mar 20 08:36:46.177721 master-0 kubenswrapper[7476]: I0320 08:36:46.177680 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8c94f4649-w8c24_61ab4d32-c732-4be5-aa85-a2e1dd21cb60/openshift-controller-manager-operator/0.log" Mar 20 08:36:46.177721 master-0 kubenswrapper[7476]: I0320 08:36:46.177716 7476 generic.go:334] "Generic (PLEG): container finished" podID="61ab4d32-c732-4be5-aa85-a2e1dd21cb60" containerID="254f8acc157dece685517f93e40a5d981d3cb093e1a077345ec886e180445eaa" exitCode=1 Mar 20 08:36:46.177887 master-0 kubenswrapper[7476]: I0320 08:36:46.177767 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-w8c24" event={"ID":"61ab4d32-c732-4be5-aa85-a2e1dd21cb60","Type":"ContainerDied","Data":"254f8acc157dece685517f93e40a5d981d3cb093e1a077345ec886e180445eaa"} Mar 20 08:36:46.178700 master-0 kubenswrapper[7476]: I0320 08:36:46.178181 7476 scope.go:117] "RemoveContainer" containerID="254f8acc157dece685517f93e40a5d981d3cb093e1a077345ec886e180445eaa" Mar 20 08:36:46.180619 master-0 kubenswrapper[7476]: I0320 08:36:46.180535 7476 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-1-master-0" Mar 20 08:36:46.180619 master-0 kubenswrapper[7476]: I0320 08:36:46.180556 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"169353ee-c927-4483-8976-b9ca08b0a6d1","Type":"ContainerDied","Data":"eb298360c7626b678f9c8cf233db291ec09731cb94cf6c1ae69432ca7d42b080"} Mar 20 08:36:46.182153 master-0 kubenswrapper[7476]: I0320 08:36:46.180721 7476 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb298360c7626b678f9c8cf233db291ec09731cb94cf6c1ae69432ca7d42b080" Mar 20 08:36:46.183416 master-0 kubenswrapper[7476]: I0320 08:36:46.182906 7476 generic.go:334] "Generic (PLEG): container finished" podID="c83737980b9ee109184b1d78e942cf36" containerID="f3cf6c6c759bc79e0c49a7c2679b7d5ff1593a53a6783b3355ac6464233ad33d" exitCode=1 Mar 20 08:36:46.183416 master-0 kubenswrapper[7476]: I0320 08:36:46.182969 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"c83737980b9ee109184b1d78e942cf36","Type":"ContainerDied","Data":"f3cf6c6c759bc79e0c49a7c2679b7d5ff1593a53a6783b3355ac6464233ad33d"} Mar 20 08:36:46.183805 master-0 kubenswrapper[7476]: I0320 08:36:46.183634 7476 scope.go:117] "RemoveContainer" containerID="f3cf6c6c759bc79e0c49a7c2679b7d5ff1593a53a6783b3355ac6464233ad33d" Mar 20 08:36:47.190416 master-0 kubenswrapper[7476]: I0320 08:36:47.190329 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8c94f4649-w8c24_61ab4d32-c732-4be5-aa85-a2e1dd21cb60/openshift-controller-manager-operator/0.log" Mar 20 08:36:47.191186 master-0 kubenswrapper[7476]: I0320 08:36:47.190439 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-w8c24" 
event={"ID":"61ab4d32-c732-4be5-aa85-a2e1dd21cb60","Type":"ContainerStarted","Data":"574f438252b4f47fa3b61032cc6a4a935112d82ebdef8b14155e36ebb82ca9af"} Mar 20 08:36:47.193492 master-0 kubenswrapper[7476]: I0320 08:36:47.193407 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xn4s4" event={"ID":"dd53f6c4-da30-4996-8b62-7dd1cd3a3e83","Type":"ContainerStarted","Data":"05a576b0c597def785feb6536eb1af49c4fecbaf14a1c579b1a7e4fa454af566"} Mar 20 08:36:47.195398 master-0 kubenswrapper[7476]: I0320 08:36:47.195354 7476 generic.go:334] "Generic (PLEG): container finished" podID="ccb242ff-347a-4b02-8d9e-ba4dd62a5052" containerID="fddad6fba182f96b236344babb403bec4283b752ab6cd93abdc1905a34daa41f" exitCode=0 Mar 20 08:36:47.195398 master-0 kubenswrapper[7476]: I0320 08:36:47.195400 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wqxn7" event={"ID":"ccb242ff-347a-4b02-8d9e-ba4dd62a5052","Type":"ContainerDied","Data":"fddad6fba182f96b236344babb403bec4283b752ab6cd93abdc1905a34daa41f"} Mar 20 08:36:47.197625 master-0 kubenswrapper[7476]: I0320 08:36:47.197576 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gk7zl" event={"ID":"d524ce06-8969-4b68-b236-9e11af55d854","Type":"ContainerStarted","Data":"d8d830bd913980db126aefcd5a60bb50bffad937ac41bd21c7bf1bdd91159072"} Mar 20 08:36:47.199771 master-0 kubenswrapper[7476]: I0320 08:36:47.199729 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"c83737980b9ee109184b1d78e942cf36","Type":"ContainerStarted","Data":"2af5fddc5d2a375dc416488e9df9292dbf88621bcffa837acf0f758641cfece0"} Mar 20 08:36:47.202333 master-0 kubenswrapper[7476]: I0320 08:36:47.202297 7476 generic.go:334] "Generic (PLEG): container finished" podID="2b557b11-593d-4886-a9e3-ac4d18f901aa" 
containerID="ee2a491741aaab17c9397840a0a9333d49e8c0024dc7d2841f844485424c0ff4" exitCode=0 Mar 20 08:36:47.202799 master-0 kubenswrapper[7476]: I0320 08:36:47.202413 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lkqww" event={"ID":"2b557b11-593d-4886-a9e3-ac4d18f901aa","Type":"ContainerDied","Data":"ee2a491741aaab17c9397840a0a9333d49e8c0024dc7d2841f844485424c0ff4"} Mar 20 08:36:48.213194 master-0 kubenswrapper[7476]: I0320 08:36:48.213141 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lkqww" event={"ID":"2b557b11-593d-4886-a9e3-ac4d18f901aa","Type":"ContainerStarted","Data":"eb8676cf9ada7cfd2023b1d3e8aade0f49b7cd5e26fda65bc3000bbdf4f17e73"} Mar 20 08:36:48.215190 master-0 kubenswrapper[7476]: I0320 08:36:48.215155 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wqxn7" event={"ID":"ccb242ff-347a-4b02-8d9e-ba4dd62a5052","Type":"ContainerStarted","Data":"533ca83f2f1cbe90843aea19e67a25f7a7f9cb27edbc66c29caae3aa94a291f5"} Mar 20 08:36:48.751161 master-0 kubenswrapper[7476]: I0320 08:36:48.751099 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 20 08:36:50.265895 master-0 kubenswrapper[7476]: I0320 08:36:50.265835 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wqxn7" Mar 20 08:36:50.266621 master-0 kubenswrapper[7476]: I0320 08:36:50.266216 7476 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wqxn7" Mar 20 08:36:50.708538 master-0 kubenswrapper[7476]: I0320 08:36:50.708453 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xn4s4" Mar 20 08:36:50.708538 master-0 kubenswrapper[7476]: I0320 08:36:50.708540 7476 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xn4s4" Mar 20 08:36:51.162573 master-0 kubenswrapper[7476]: E0320 08:36:51.162328 7476 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T08:36:41Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T08:36:41Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T08:36:41Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T08:36:41Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2abc1fd79e7781634ed5ed9e8f2b98b9094ea51f40ac3a773c5e5224607bf3d7\\\"],\\\"sizeBytes\\\":1637455533},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a55ec7ec64efd0f595d8084377b7e463a1807829b7617e5d4a9092dcd924c36\\\"],\\\"sizeBytes\\\":1238100502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:af0fe0ca926422a6471d5bf22fc0e682c36c24fba05496a3bdfac0b7d3733015\\\"],\\\"sizeBytes\\\":991832673},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\\\"],\\\"sizeBytes\\\":943841779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0a09f5a3ba4f60cce0145769509bab92553c8075d210af4ac058965d2ae11efa\\\"],\\\"sizeBytes\\\":876160834},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6bba3f73c0066e42b24839e0d29f5dce2f36436f0a11f9f5e1029bccc5ed6578\\\"],\\\"sizeBytes\\\":862657321},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a9e8da5c6114f
062b814936d4db7a47a04d248e160d6bb28ad4e4a081496ee4\\\"],\\\"sizeBytes\\\":772943435},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e1faad2d9167d84e23585c1cea5962301845548043cf09578f943f79ca98016\\\"],\\\"sizeBytes\\\":687949580},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5e12e4dc52214d3ada5ba5106caebe079eac1d9292c2571a5fe83411ce8e900d\\\"],\\\"sizeBytes\\\":683195416},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aa5e782406f71c048b1ac3a4bf5d1227ff4be81111114083ad4c7a209c6bfb5a\\\"],\\\"sizeBytes\\\":677942383},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ec8fd46dfb35ed10e8f98933166f69ce579c2f35b8db03d21e4c34fc544553e4\\\"],\\\"sizeBytes\\\":621648710},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae50e496bd6ae2d27298d997470b7cb0a426eeb8b7e2e9c7187a34cb03993998\\\"],\\\"sizeBytes\\\":589386806},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c6a4383333a1fd6d05c3f60ec793913f7937ee3d77f002d85e6c61e20507bf55\\\"],\\\"sizeBytes\\\":582154903},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c2dd7a03348212e49876f5359f233d893a541ed9b934df390201a05133a06982\\\"],\\\"sizeBytes\\\":558211175},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7af9f5c5af9d529840233ef4b519120cc0e3f14c4fe28cc43b0823f2c11d8f89\\\"],\\\"sizeBytes\\\":548752816},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e29dc9f042f2d0471171a0611070886cb2f7c57338ab7f112613417bcd33b278\\\"],\\\"sizeBytes\\\":529326739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b4c9cf268bb7abef7af187cd775d3f74d0bd33626250095428d53b705ee946\\\"],\\\"sizeBytes\\\":528956487},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a971d5889f167cfe61a64c366424b87c17a6dc141ffcc43406cdcbb50cae2a\\\"],\\\"sizeBytes\\\":518384969},{\\\"names\\
\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:59727c4b3fef19e5149675cf3350735bbfe2c6588a57654b2e4552dd719f58b1\\\"],\\\"sizeBytes\\\":517999161},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65\\\"],\\\"sizeBytes\\\":514984269},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bfe394b58ec6195de8b8420e781b7630d85a412b9112d892fea903f92b783427\\\"],\\\"sizeBytes\\\":513221333},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f23bac0a2a6cfd638e4af679dc787a8790d99c391f6e2ade8087dc477ff765e\\\"],\\\"sizeBytes\\\":512274055},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:77fff570657d2fa0bfb709b2c8b6665bae0bf90a2be981d8dbca56c674715098\\\"],\\\"sizeBytes\\\":511227324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adb9f6f2fd701863c7caed747df43f83d3569ba9388cfa33ea7219ac6a606b11\\\"],\\\"sizeBytes\\\":511164375},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c032f87ae61d6f757ff3ce52620a70a43516591987731f25da77aba152f17458\\\"],\\\"sizeBytes\\\":508888171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:812819a9d712b9e345ef5f1404b242c281e2518ad724baebc393ec0fd3b3d263\\\"],\\\"sizeBytes\\\":508544745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:313d1d8ca85e65236a59f058a3316c49436dde691b3a3930d5bc5e3b4b8c8a71\\\"],\\\"sizeBytes\\\":507972093},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c527b4e8239a1f4f4e0a851113e7dd633b7dcb9d75b0e7b21c23d26304abcb3\\\"],\\\"sizeBytes\\\":506480167},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ef199844317b7b012879ed8d29f9b6bc37fad8a6fdb336103cbd5cabc74c4302\\\"],\\\"sizeBytes\\\":506395599},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7d4a034950346bcd4e36e9e2f1343e0cf7a10cf5
44963f33d09c7eb2a1bfc634\\\"],\\\"sizeBytes\\\":505345991},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fbbcb390de2563a0177b92fba1b5a65777366e2dc80e2808b61d87c41b47a2d\\\"],\\\"sizeBytes\\\":505246690},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c983016b9ceed0fca1f51bd49c2653243c7e5af91cbf2f478b091db6e028252\\\"],\\\"sizeBytes\\\":504625081},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:712d334b7752d95580571059aae2c50e111d879af4fd8ea7cc3dbaf1a8e7dc69\\\"],\\\"sizeBytes\\\":495994673},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4b5ea1ef4e09b673a0c68c8848ca162ab11d9ac373a377daa52dea702ffa3023\\\"],\\\"sizeBytes\\\":495065340},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:446bedea4916d3c1ee52be94137e484659e9561bd1de95c8189eee279aae984b\\\"],\\\"sizeBytes\\\":487096305},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a746a87b784ea1caa278fd0e012554f9df520b6fff665ea0bc4c83f487fed113\\\"],\\\"sizeBytes\\\":484450894},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c4d5a681595e428ff4b5083648c13615eed80be9084a3d3fc68a0295079cb12\\\"],\\\"sizeBytes\\\":484187929},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f933312f49083e8746fc41ab5e46a9a757b448374f14971e256ebcb36f11dd97\\\"],\\\"sizeBytes\\\":470826739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ea5c8a93f30e0a4932da5697d22c0da7eda9a7035c0555eb006b6755e62bb2fc\\\"],\\\"sizeBytes\\\":468265024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed\\\"],\\\"sizeBytes\\\":465090934},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9609c00207cc4db97f0fd6162eb429d7f81654137f020a677e30cba26a887a24\\\"],\\\"sizeBytes\\\":463705930},{\\\"names\\\":[\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:632e80bba5077068ecca05fddb95aedebad4493af6f36152c01c6ae490975b62\\\"],\\\"sizeBytes\\\":458126937},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bcb08821551e9a5b9f82aa794bcea673279cefb93cb47492e19ccac5e2cf18fe\\\"],\\\"sizeBytes\\\":456576198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:759fb1d5353dbbadd443f38631d977ca3aed9787b873be05cc9660532a252739\\\"],\\\"sizeBytes\\\":448828620},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3062f6485aec4770e60852b535c69a42527b305161fe856499c8658ead6d1e85\\\"],\\\"sizeBytes\\\":448042136},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951ecfeba9b2da4b653034d09275f925396a79c2d8461b8a7c71c776fee67ba0\\\"],\\\"sizeBytes\\\":443272037},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:292560e2d80b460468bb19fe0ddf289767c655027b03a76ee6c40c91ffe4c483\\\"],\\\"sizeBytes\\\":438654374},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0e66fd50be6f83ce321a566dfb76f3725b597374077d5af13813b928f6b1267e\\\"],\\\"sizeBytes\\\":411587146},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e3a494212f1ba17f0f0980eef583218330eccb56eadf6b8cb0548c76d99b5014\\\"],\\\"sizeBytes\\\":407347125},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:53d66d524ca3e787d8dbe30dbc4d9b8612c9cebd505ccb4375a8441814e85422\\\"],\\\"sizeBytes\\\":396521761}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 20 08:36:51.237165 master-0 kubenswrapper[7476]: I0320 08:36:51.237067 7476 generic.go:334] "Generic (PLEG): container finished" podID="d664a6d0d2a24360dee10612610f1b59" containerID="bf19448fe2db422f2021f6a9801b4117923acb1b2003982f366081b4de585441" exitCode=0 Mar 20 
08:36:51.311814 master-0 kubenswrapper[7476]: I0320 08:36:51.311711 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-wqxn7" podUID="ccb242ff-347a-4b02-8d9e-ba4dd62a5052" containerName="registry-server" probeResult="failure" output=< Mar 20 08:36:51.311814 master-0 kubenswrapper[7476]: timeout: failed to connect service ":50051" within 1s Mar 20 08:36:51.311814 master-0 kubenswrapper[7476]: > Mar 20 08:36:51.772287 master-0 kubenswrapper[7476]: I0320 08:36:51.772133 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xn4s4" podUID="dd53f6c4-da30-4996-8b62-7dd1cd3a3e83" containerName="registry-server" probeResult="failure" output=< Mar 20 08:36:51.772287 master-0 kubenswrapper[7476]: timeout: failed to connect service ":50051" within 1s Mar 20 08:36:51.772287 master-0 kubenswrapper[7476]: > Mar 20 08:36:51.861189 master-0 kubenswrapper[7476]: E0320 08:36:51.861063 7476 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 20 08:36:51.914729 master-0 kubenswrapper[7476]: I0320 08:36:51.914635 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gk7zl" Mar 20 08:36:51.915011 master-0 kubenswrapper[7476]: I0320 08:36:51.914911 7476 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gk7zl" Mar 20 08:36:51.970232 master-0 kubenswrapper[7476]: I0320 08:36:51.970163 7476 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gk7zl" Mar 20 08:36:52.113246 master-0 kubenswrapper[7476]: E0320 08:36:52.113189 7476 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error 
occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Mar 20 08:36:52.285506 master-0 kubenswrapper[7476]: I0320 08:36:52.284979 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gk7zl" Mar 20 08:36:52.635245 master-0 kubenswrapper[7476]: I0320 08:36:52.635193 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0-master-0_d664a6d0d2a24360dee10612610f1b59/etcdctl/0.log" Mar 20 08:36:52.635953 master-0 kubenswrapper[7476]: I0320 08:36:52.635287 7476 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0" Mar 20 08:36:52.765292 master-0 kubenswrapper[7476]: I0320 08:36:52.764922 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-data-dir\") pod \"d664a6d0d2a24360dee10612610f1b59\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " Mar 20 08:36:52.765292 master-0 kubenswrapper[7476]: I0320 08:36:52.765123 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-certs\") pod \"d664a6d0d2a24360dee10612610f1b59\" (UID: \"d664a6d0d2a24360dee10612610f1b59\") " Mar 20 08:36:52.765696 master-0 kubenswrapper[7476]: I0320 08:36:52.765121 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-certs" (OuterVolumeSpecName: "certs") pod "d664a6d0d2a24360dee10612610f1b59" (UID: "d664a6d0d2a24360dee10612610f1b59"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:36:52.765696 master-0 kubenswrapper[7476]: I0320 08:36:52.765116 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-data-dir" (OuterVolumeSpecName: "data-dir") pod "d664a6d0d2a24360dee10612610f1b59" (UID: "d664a6d0d2a24360dee10612610f1b59"). InnerVolumeSpecName "data-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:36:52.765943 master-0 kubenswrapper[7476]: I0320 08:36:52.765768 7476 reconciler_common.go:293] "Volume detached for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-data-dir\") on node \"master-0\" DevicePath \"\"" Mar 20 08:36:52.765943 master-0 kubenswrapper[7476]: I0320 08:36:52.765796 7476 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/d664a6d0d2a24360dee10612610f1b59-certs\") on node \"master-0\" DevicePath \"\"" Mar 20 08:36:53.072183 master-0 kubenswrapper[7476]: I0320 08:36:53.072007 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lkqww" Mar 20 08:36:53.072183 master-0 kubenswrapper[7476]: I0320 08:36:53.072116 7476 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lkqww" Mar 20 08:36:53.113809 master-0 kubenswrapper[7476]: I0320 08:36:53.113707 7476 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lkqww" Mar 20 08:36:53.255554 master-0 kubenswrapper[7476]: I0320 08:36:53.255482 7476 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d664a6d0d2a24360dee10612610f1b59" path="/var/lib/kubelet/pods/d664a6d0d2a24360dee10612610f1b59/volumes" Mar 20 08:36:53.256123 master-0 kubenswrapper[7476]: I0320 08:36:53.256083 7476 mirror_client.go:130] "Deleting a mirror pod" 
pod="openshift-etcd/etcd-master-0-master-0" podUID="" Mar 20 08:36:53.258360 master-0 kubenswrapper[7476]: I0320 08:36:53.258313 7476 generic.go:334] "Generic (PLEG): container finished" podID="24b4ed170d527099878cb5fdd508a2fb" containerID="6c2ec290e5bf4c2d0a79a355f4de266d5b608b0ae16bb86874d457ca7101270f" exitCode=0 Mar 20 08:36:53.263673 master-0 kubenswrapper[7476]: I0320 08:36:53.263618 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0-master-0_d664a6d0d2a24360dee10612610f1b59/etcdctl/0.log" Mar 20 08:36:53.263869 master-0 kubenswrapper[7476]: I0320 08:36:53.263799 7476 generic.go:334] "Generic (PLEG): container finished" podID="d664a6d0d2a24360dee10612610f1b59" containerID="d7f4830141ed7d49d20e31769c038ca8340ad71b0bddea39298dca3d6416b345" exitCode=137 Mar 20 08:36:53.264390 master-0 kubenswrapper[7476]: I0320 08:36:53.264021 7476 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0" Mar 20 08:36:54.270585 master-0 kubenswrapper[7476]: I0320 08:36:54.270380 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_cce21ae1-63de-49be-a027-084a101e650b/installer/0.log" Mar 20 08:36:54.270585 master-0 kubenswrapper[7476]: I0320 08:36:54.270436 7476 generic.go:334] "Generic (PLEG): container finished" podID="cce21ae1-63de-49be-a027-084a101e650b" containerID="08b76c47992e775acd809c6af275e2c7e9a0096419764ac5862de8d43565af46" exitCode=1 Mar 20 08:36:54.950756 master-0 kubenswrapper[7476]: I0320 08:36:54.950623 7476 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 20 08:36:56.529221 
master-0 kubenswrapper[7476]: E0320 08:36:56.529043 7476 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{etcd-master-0-master-0.189e7fc3e15b05e3 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:d664a6d0d2a24360dee10612610f1b59,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Killing,Message:Stopping container etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 08:36:22.506743267 +0000 UTC m=+63.475511793,LastTimestamp:2026-03-20 08:36:22.506743267 +0000 UTC m=+63.475511793,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 20 08:37:01.163303 master-0 kubenswrapper[7476]: E0320 08:37:01.163155 7476 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 20 08:37:01.863109 master-0 kubenswrapper[7476]: E0320 08:37:01.863017 7476 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 20 08:37:02.183996 master-0 kubenswrapper[7476]: I0320 08:37:02.183766 7476 patch_prober.go:28] interesting pod/authentication-operator-5885bfd7f4-tdpfq container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.11:8443/healthz\": dial tcp 10.128.0.11:8443: connect: connection refused" start-of-body= Mar 
20 08:37:02.183996 master-0 kubenswrapper[7476]: I0320 08:37:02.183880 7476 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-tdpfq" podUID="8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.11:8443/healthz\": dial tcp 10.128.0.11:8443: connect: connection refused" Mar 20 08:37:03.330552 master-0 kubenswrapper[7476]: I0320 08:37:03.330466 7476 generic.go:334] "Generic (PLEG): container finished" podID="09a5682c-4f13-4b8c-8179-3e6dfa8f98db" containerID="38ba09231d63afd93a0205a5845a80e4d47fa8290768d886cf1c7ea448f682d8" exitCode=0 Mar 20 08:37:04.975705 master-0 kubenswrapper[7476]: I0320 08:37:04.975581 7476 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 20 08:37:06.265738 master-0 kubenswrapper[7476]: E0320 08:37:06.265608 7476 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Mar 20 08:37:08.364322 master-0 kubenswrapper[7476]: I0320 08:37:08.364224 7476 generic.go:334] "Generic (PLEG): container finished" podID="24b4ed170d527099878cb5fdd508a2fb" containerID="dc65d45ac7f72e25688bf45d7e0e4891e399fd8f027d9c0a3d91c82a95e984c1" exitCode=0 Mar 20 08:37:10.379853 master-0 kubenswrapper[7476]: I0320 08:37:10.379806 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7bd846bfc4-x4w25_9817d1ec-3d7c-49fb-8e41-26f5727ef9e8/network-operator/0.log" Mar 20 08:37:10.380380 master-0 kubenswrapper[7476]: I0320 
08:37:10.379860 7476 generic.go:334] "Generic (PLEG): container finished" podID="9817d1ec-3d7c-49fb-8e41-26f5727ef9e8" containerID="51d5ff19316ba50d65a137d07edaf8d44d3c66d7ea87669b610c77e6e7a5026d" exitCode=255 Mar 20 08:37:11.164657 master-0 kubenswrapper[7476]: E0320 08:37:11.164546 7476 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 20 08:37:11.864292 master-0 kubenswrapper[7476]: E0320 08:37:11.864155 7476 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 20 08:37:12.183613 master-0 kubenswrapper[7476]: I0320 08:37:12.183416 7476 patch_prober.go:28] interesting pod/authentication-operator-5885bfd7f4-tdpfq container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.11:8443/healthz\": dial tcp 10.128.0.11:8443: connect: connection refused" start-of-body= Mar 20 08:37:12.183613 master-0 kubenswrapper[7476]: I0320 08:37:12.183519 7476 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-tdpfq" podUID="8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.11:8443/healthz\": dial tcp 10.128.0.11:8443: connect: connection refused" Mar 20 08:37:14.408611 master-0 kubenswrapper[7476]: I0320 08:37:14.408511 7476 generic.go:334] "Generic (PLEG): container finished" podID="2faf85a2-29bb-4275-a12b-0ef1663a4f0d" containerID="ae08cd7d4b99291a81168cf2f99395c5e971d107dc0502f7bea648e012bdeade" exitCode=0 Mar 20 08:37:15.418611 
master-0 kubenswrapper[7476]: I0320 08:37:15.418537 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-dq29v_9d653bfa-7168-49fa-a838-aedb33c7e60f/approver/0.log" Mar 20 08:37:15.419688 master-0 kubenswrapper[7476]: I0320 08:37:15.419193 7476 generic.go:334] "Generic (PLEG): container finished" podID="9d653bfa-7168-49fa-a838-aedb33c7e60f" containerID="c5f00c0d77211fa7340df0b5c9e4c67e0a0eeb68e81ac9de5effbf2d875c406e" exitCode=1 Mar 20 08:37:17.437804 master-0 kubenswrapper[7476]: I0320 08:37:17.437667 7476 generic.go:334] "Generic (PLEG): container finished" podID="2ef691ec-d1f0-4c97-97e4-4aa7a6c0a86c" containerID="3fbcbabe96d1d538208df7fe6740297e7b936fd21409b810c6def759b3cb8301" exitCode=0 Mar 20 08:37:21.165497 master-0 kubenswrapper[7476]: E0320 08:37:21.165363 7476 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 20 08:37:21.865737 master-0 kubenswrapper[7476]: E0320 08:37:21.865655 7476 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 20 08:37:21.865737 master-0 kubenswrapper[7476]: I0320 08:37:21.865719 7476 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Mar 20 08:37:22.184148 master-0 kubenswrapper[7476]: I0320 08:37:22.183957 7476 patch_prober.go:28] interesting pod/authentication-operator-5885bfd7f4-tdpfq container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.11:8443/healthz\": dial tcp 10.128.0.11:8443: 
connect: connection refused" start-of-body= Mar 20 08:37:22.184148 master-0 kubenswrapper[7476]: I0320 08:37:22.184047 7476 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-tdpfq" podUID="8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.11:8443/healthz\": dial tcp 10.128.0.11:8443: connect: connection refused" Mar 20 08:37:22.996562 master-0 kubenswrapper[7476]: I0320 08:37:22.996434 7476 status_manager.go:851] "Failed to get status for pod" podUID="08d9196b-b68f-421b-8754-bfbaa4020a97" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-tf2gj" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods catalogd-controller-manager-6864dc98f7-tf2gj)" Mar 20 08:37:23.487095 master-0 kubenswrapper[7476]: I0320 08:37:23.487036 7476 generic.go:334] "Generic (PLEG): container finished" podID="65157a9b-3df7-4cc1-a85a-a5dfa59921ad" containerID="59a8653cd7835805f3353ca3030def7794cc3d5df739fff211964fc11ce38845" exitCode=0 Mar 20 08:37:27.259871 master-0 kubenswrapper[7476]: E0320 08:37:27.259654 7476 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0" Mar 20 08:37:27.260904 master-0 kubenswrapper[7476]: E0320 08:37:27.260767 7476 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.024s" Mar 20 08:37:27.260904 master-0 kubenswrapper[7476]: I0320 08:37:27.260880 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lkqww" Mar 20 08:37:27.261115 master-0 kubenswrapper[7476]: I0320 08:37:27.261065 7476 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-tdpfq" Mar 20 08:37:27.261215 master-0 kubenswrapper[7476]: I0320 08:37:27.261119 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"24b4ed170d527099878cb5fdd508a2fb","Type":"ContainerDied","Data":"6c2ec290e5bf4c2d0a79a355f4de266d5b608b0ae16bb86874d457ca7101270f"} Mar 20 08:37:27.262129 master-0 kubenswrapper[7476]: I0320 08:37:27.262001 7476 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wqxn7" Mar 20 08:37:27.262479 master-0 kubenswrapper[7476]: I0320 08:37:27.262423 7476 scope.go:117] "RemoveContainer" containerID="bf19448fe2db422f2021f6a9801b4117923acb1b2003982f366081b4de585441" Mar 20 08:37:27.267991 master-0 kubenswrapper[7476]: I0320 08:37:27.267927 7476 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="authentication-operator" containerStatusID={"Type":"cri-o","ID":"d29e51560cf0f82adb12fe4ccbcc9c856b09e06e9a9c7dd6333b272f62625fb3"} pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-tdpfq" containerMessage="Container authentication-operator failed liveness probe, will be restarted" Mar 20 08:37:27.268145 master-0 kubenswrapper[7476]: I0320 08:37:27.268020 7476 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-tdpfq" podUID="8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072" containerName="authentication-operator" containerID="cri-o://d29e51560cf0f82adb12fe4ccbcc9c856b09e06e9a9c7dd6333b272f62625fb3" gracePeriod=30 Mar 20 08:37:27.278408 master-0 kubenswrapper[7476]: I0320 08:37:27.278312 7476 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Mar 20 08:37:27.288929 master-0 kubenswrapper[7476]: I0320 08:37:27.288850 7476 scope.go:117] "RemoveContainer" 
containerID="d7f4830141ed7d49d20e31769c038ca8340ad71b0bddea39298dca3d6416b345" Mar 20 08:37:27.516855 master-0 kubenswrapper[7476]: I0320 08:37:27.516596 7476 generic.go:334] "Generic (PLEG): container finished" podID="20ff930f-ec0d-40ed-a879-1546691f685d" containerID="d20a1459d97b6e06a1f2acdb938648d68b1fc12871ed4ca115c971b404c404f0" exitCode=0 Mar 20 08:37:27.521509 master-0 kubenswrapper[7476]: I0320 08:37:27.521464 7476 generic.go:334] "Generic (PLEG): container finished" podID="71ca96e8-5108-455c-bb3c-17977d38e912" containerID="546f50582d27b9704d91a180b620a54d25d194d6d958c834e126f15276d2a186" exitCode=0 Mar 20 08:37:27.524665 master-0 kubenswrapper[7476]: I0320 08:37:27.524610 7476 generic.go:334] "Generic (PLEG): container finished" podID="8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072" containerID="d29e51560cf0f82adb12fe4ccbcc9c856b09e06e9a9c7dd6333b272f62625fb3" exitCode=0 Mar 20 08:37:27.885732 master-0 kubenswrapper[7476]: I0320 08:37:27.885619 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_cce21ae1-63de-49be-a027-084a101e650b/installer/0.log" Mar 20 08:37:27.886036 master-0 kubenswrapper[7476]: I0320 08:37:27.885772 7476 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Mar 20 08:37:27.975498 master-0 kubenswrapper[7476]: I0320 08:37:27.975389 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cce21ae1-63de-49be-a027-084a101e650b-kube-api-access\") pod \"cce21ae1-63de-49be-a027-084a101e650b\" (UID: \"cce21ae1-63de-49be-a027-084a101e650b\") " Mar 20 08:37:27.975805 master-0 kubenswrapper[7476]: I0320 08:37:27.975514 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cce21ae1-63de-49be-a027-084a101e650b-kubelet-dir\") pod \"cce21ae1-63de-49be-a027-084a101e650b\" (UID: \"cce21ae1-63de-49be-a027-084a101e650b\") " Mar 20 08:37:27.975805 master-0 kubenswrapper[7476]: I0320 08:37:27.975584 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cce21ae1-63de-49be-a027-084a101e650b-var-lock\") pod \"cce21ae1-63de-49be-a027-084a101e650b\" (UID: \"cce21ae1-63de-49be-a027-084a101e650b\") " Mar 20 08:37:27.975805 master-0 kubenswrapper[7476]: I0320 08:37:27.975742 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cce21ae1-63de-49be-a027-084a101e650b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "cce21ae1-63de-49be-a027-084a101e650b" (UID: "cce21ae1-63de-49be-a027-084a101e650b"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:37:27.976024 master-0 kubenswrapper[7476]: I0320 08:37:27.975768 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cce21ae1-63de-49be-a027-084a101e650b-var-lock" (OuterVolumeSpecName: "var-lock") pod "cce21ae1-63de-49be-a027-084a101e650b" (UID: "cce21ae1-63de-49be-a027-084a101e650b"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:37:27.976024 master-0 kubenswrapper[7476]: I0320 08:37:27.975979 7476 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cce21ae1-63de-49be-a027-084a101e650b-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 20 08:37:27.976024 master-0 kubenswrapper[7476]: I0320 08:37:27.976005 7476 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cce21ae1-63de-49be-a027-084a101e650b-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 20 08:37:27.978447 master-0 kubenswrapper[7476]: I0320 08:37:27.978378 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cce21ae1-63de-49be-a027-084a101e650b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "cce21ae1-63de-49be-a027-084a101e650b" (UID: "cce21ae1-63de-49be-a027-084a101e650b"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:37:28.077837 master-0 kubenswrapper[7476]: I0320 08:37:28.077722 7476 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cce21ae1-63de-49be-a027-084a101e650b-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 20 08:37:28.535787 master-0 kubenswrapper[7476]: I0320 08:37:28.535698 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_cce21ae1-63de-49be-a027-084a101e650b/installer/0.log" Mar 20 08:37:28.536773 master-0 kubenswrapper[7476]: I0320 08:37:28.535877 7476 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Mar 20 08:37:30.531475 master-0 kubenswrapper[7476]: E0320 08:37:30.531289 7476 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{certified-operators-gk7zl.189e7fc6ddd1fa53 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:certified-operators-gk7zl,UID:d524ce06-8969-4b68-b236-9e11af55d854,APIVersion:v1,ResourceVersion:7305,FieldPath:spec.initContainers{extract-content},},Reason:Pulled,Message:Successfully pulled image \"registry.redhat.io/redhat/certified-operator-index:v4.18\" in 22.59s (22.59s including waiting). Image size: 1252654287 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 08:36:35.332332115 +0000 UTC m=+76.301100641,LastTimestamp:2026-03-20 08:36:35.332332115 +0000 UTC m=+76.301100641,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 20 08:37:31.166621 master-0 kubenswrapper[7476]: E0320 08:37:31.166485 7476 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 20 08:37:31.166621 master-0 kubenswrapper[7476]: E0320 08:37:31.166568 7476 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 20 08:37:31.867105 master-0 kubenswrapper[7476]: E0320 08:37:31.866998 7476 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="200ms" Mar 20 08:37:33.565883 master-0 kubenswrapper[7476]: I0320 08:37:33.565728 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_84b1b51a-cbfa-42de-9fb8-315e9cb76b58/installer/0.log" Mar 20 08:37:33.565883 master-0 kubenswrapper[7476]: I0320 08:37:33.565802 7476 generic.go:334] "Generic (PLEG): container finished" podID="84b1b51a-cbfa-42de-9fb8-315e9cb76b58" containerID="9195f1dfc14cd53890895128ba6b2082162a13670d2ec403d7a28c0918592666" exitCode=1 Mar 20 08:37:35.396984 master-0 kubenswrapper[7476]: I0320 08:37:35.396874 7476 patch_prober.go:28] interesting pod/etcd-operator-8544cbcf9c-7x9vq container/etcd-operator namespace/openshift-etcd-operator: Liveness probe status=failure output="Get \"https://10.128.0.24:8443/healthz\": dial tcp 10.128.0.24:8443: connect: connection refused" start-of-body= Mar 20 08:37:35.397804 master-0 kubenswrapper[7476]: I0320 08:37:35.396970 7476 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-7x9vq" podUID="fec3170d-3f3e-42f5-b20a-da53721c0dac" containerName="etcd-operator" probeResult="failure" output="Get \"https://10.128.0.24:8443/healthz\": dial tcp 10.128.0.24:8443: connect: connection refused" Mar 20 08:37:36.187408 master-0 kubenswrapper[7476]: E0320 08:37:36.187347 7476 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 20 08:37:36.187408 master-0 kubenswrapper[7476]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-2-master-0_openshift-kube-controller-manager_3ea52b89-46f9-4685-aecd-162ba92baaf5_0(7d5b620323b368b3e8632767a3337aafdae17a4b38db12a5986cd8eb068549e0): error adding pod openshift-kube-controller-manager_installer-2-master-0 to CNI network 
"multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"7d5b620323b368b3e8632767a3337aafdae17a4b38db12a5986cd8eb068549e0" Netns:"/var/run/netns/c9753205-2e47-4a74-8be4-f2503c732f19" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-2-master-0;K8S_POD_INFRA_CONTAINER_ID=7d5b620323b368b3e8632767a3337aafdae17a4b38db12a5986cd8eb068549e0;K8S_POD_UID=3ea52b89-46f9-4685-aecd-162ba92baaf5" Path:"" ERRORED: error configuring pod [openshift-kube-controller-manager/installer-2-master-0] networking: Multus: [openshift-kube-controller-manager/installer-2-master-0/3ea52b89-46f9-4685-aecd-162ba92baaf5]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-2-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-2-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-2-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 20 08:37:36.187408 master-0 kubenswrapper[7476]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 20 08:37:36.187408 master-0 kubenswrapper[7476]: > Mar 20 08:37:36.187692 master-0 kubenswrapper[7476]: E0320 08:37:36.187430 7476 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 20 08:37:36.187692 master-0 kubenswrapper[7476]: rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_installer-2-master-0_openshift-kube-controller-manager_3ea52b89-46f9-4685-aecd-162ba92baaf5_0(7d5b620323b368b3e8632767a3337aafdae17a4b38db12a5986cd8eb068549e0): error adding pod openshift-kube-controller-manager_installer-2-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"7d5b620323b368b3e8632767a3337aafdae17a4b38db12a5986cd8eb068549e0" Netns:"/var/run/netns/c9753205-2e47-4a74-8be4-f2503c732f19" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-2-master-0;K8S_POD_INFRA_CONTAINER_ID=7d5b620323b368b3e8632767a3337aafdae17a4b38db12a5986cd8eb068549e0;K8S_POD_UID=3ea52b89-46f9-4685-aecd-162ba92baaf5" Path:"" ERRORED: error configuring pod [openshift-kube-controller-manager/installer-2-master-0] networking: Multus: [openshift-kube-controller-manager/installer-2-master-0/3ea52b89-46f9-4685-aecd-162ba92baaf5]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-2-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-2-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-2-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 20 08:37:36.187692 master-0 kubenswrapper[7476]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 20 08:37:36.187692 master-0 kubenswrapper[7476]: > 
pod="openshift-kube-controller-manager/installer-2-master-0" Mar 20 08:37:36.187692 master-0 kubenswrapper[7476]: E0320 08:37:36.187460 7476 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 20 08:37:36.187692 master-0 kubenswrapper[7476]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-2-master-0_openshift-kube-controller-manager_3ea52b89-46f9-4685-aecd-162ba92baaf5_0(7d5b620323b368b3e8632767a3337aafdae17a4b38db12a5986cd8eb068549e0): error adding pod openshift-kube-controller-manager_installer-2-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"7d5b620323b368b3e8632767a3337aafdae17a4b38db12a5986cd8eb068549e0" Netns:"/var/run/netns/c9753205-2e47-4a74-8be4-f2503c732f19" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-2-master-0;K8S_POD_INFRA_CONTAINER_ID=7d5b620323b368b3e8632767a3337aafdae17a4b38db12a5986cd8eb068549e0;K8S_POD_UID=3ea52b89-46f9-4685-aecd-162ba92baaf5" Path:"" ERRORED: error configuring pod [openshift-kube-controller-manager/installer-2-master-0] networking: Multus: [openshift-kube-controller-manager/installer-2-master-0/3ea52b89-46f9-4685-aecd-162ba92baaf5]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-2-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-2-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-2-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 20 08:37:36.187692 master-0 kubenswrapper[7476]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 20 08:37:36.187692 master-0 kubenswrapper[7476]: > pod="openshift-kube-controller-manager/installer-2-master-0" Mar 20 08:37:36.187692 master-0 kubenswrapper[7476]: E0320 08:37:36.187517 7476 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-2-master-0_openshift-kube-controller-manager(3ea52b89-46f9-4685-aecd-162ba92baaf5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-2-master-0_openshift-kube-controller-manager(3ea52b89-46f9-4685-aecd-162ba92baaf5)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-2-master-0_openshift-kube-controller-manager_3ea52b89-46f9-4685-aecd-162ba92baaf5_0(7d5b620323b368b3e8632767a3337aafdae17a4b38db12a5986cd8eb068549e0): error adding pod openshift-kube-controller-manager_installer-2-master-0 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"7d5b620323b368b3e8632767a3337aafdae17a4b38db12a5986cd8eb068549e0\\\" Netns:\\\"/var/run/netns/c9753205-2e47-4a74-8be4-f2503c732f19\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-2-master-0;K8S_POD_INFRA_CONTAINER_ID=7d5b620323b368b3e8632767a3337aafdae17a4b38db12a5986cd8eb068549e0;K8S_POD_UID=3ea52b89-46f9-4685-aecd-162ba92baaf5\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-kube-controller-manager/installer-2-master-0] networking: Multus: 
[openshift-kube-controller-manager/installer-2-master-0/3ea52b89-46f9-4685-aecd-162ba92baaf5]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-2-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-2-master-0 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-2-master-0?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-kube-controller-manager/installer-2-master-0" podUID="3ea52b89-46f9-4685-aecd-162ba92baaf5" Mar 20 08:37:36.600653 master-0 kubenswrapper[7476]: I0320 08:37:36.600601 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 20 08:37:36.602152 master-0 kubenswrapper[7476]: I0320 08:37:36.602096 7476 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 20 08:37:40.274715 master-0 kubenswrapper[7476]: E0320 08:37:40.274656 7476 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Mar 20 08:37:40.627881 master-0 kubenswrapper[7476]: I0320 08:37:40.627789 7476 generic.go:334] "Generic (PLEG): container finished" podID="46f265536aba6292ead501bc9b49f327" containerID="cecb02a386370afc24432a3402093f12ea9e53ed8df8b02259c918fbec5ca271" exitCode=1 Mar 20 08:37:42.069885 master-0 kubenswrapper[7476]: E0320 08:37:42.069596 7476 controller.go:145] "Failed to ensure lease exists, will retry" err="the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io master-0)" interval="400ms" Mar 20 08:37:51.295619 master-0 kubenswrapper[7476]: E0320 08:37:51.295218 7476 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T08:37:41Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T08:37:41Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T08:37:41Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T08:37:41Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4869c69128f74d9c3b178ea6c8c8d38df169b6bce05eb821a65f0aaf514c563a\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:f3c2ad90e251062165f8d6623ca4994c0b3e28324e4b5b17fd588b162ec97766\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1746912226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2abc1fd79e7781634ed5ed9e8f2b98b9094ea51f40ac3a773c5e5224607bf3d7\\\"],\\\"sizeBytes\\\":1637455533},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:6bec6e4ce9b3ff60658829df2f5980cf947458d49b97476cee1ff01ec638d309\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:b259d760a14ca994ed34d4cfc901758f180cc8d1de7f3c427baa68030e06b7c7\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1252654287},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a55ec7ec64efd0f595d8084377b7e463a1807829b7617e5d4a9092dcd924c36\\\"],\\\"sizeBytes\\\":1238100502},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:898c67bf7fc973e99114f3148976a6c21ae0dbe413051415588fa9b995f5b331\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:a641939d2096609a4cf6eec872a1476b7c671bfd81cffc2edeb6e9f13c9deeba\\\",\\\"registry.redhat.io/redhat/redhat-marketpl
ace-index:v4.18\\\"],\\\"sizeBytes\\\":1231028434},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:542b8eae269892bc6f0f9d5ab808afe26119a72e430870d81f59faf93c3f2f18\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d18953a151bb71fe6585017054b939c4062435242069a2601fe668009e6e5087\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1223740630},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:af0fe0ca926422a6471d5bf22fc0e682c36c24fba05496a3bdfac0b7d3733015\\\"],\\\"sizeBytes\\\":991832673},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\\\"],\\\"sizeBytes\\\":943841779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b00c42562d477ef44d51f35950253a26d7debc7de86e53270831aafef5795c1\\\"],\\\"sizeBytes\\\":918289953},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0a09f5a3ba4f60cce0145769509bab92553c8075d210af4ac058965d2ae11efa\\\"],\\\"sizeBytes\\\":876160834},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6bba3f73c0066e42b24839e0d29f5dce2f36436f0a11f9f5e1029bccc5ed6578\\\"],\\\"sizeBytes\\\":862657321},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a9e8da5c6114f062b814936d4db7a47a04d248e160d6bb28ad4e4a081496ee4\\\"],\\\"sizeBytes\\\":772943435},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1e1faad2d9167d84e23585c1cea5962301845548043cf09578f943f79ca98016\\\"],\\\"sizeBytes\\\":687949580},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5e12e4dc52214d3ada5ba5106caebe079eac1d9292c2571a5fe83411ce8e900d\\\"],\\\"sizeBytes\\\":683195416},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:aa5e782406f71c048b1ac3a4bf5d1227ff4be81111114083ad4c7a209c6bfb5a\\\"],\\\"sizeBytes\\\":677942383},{\\\"names\\\":[\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ec8fd46dfb35ed10e8f98933166f69ce579c2f35b8db03d21e4c34fc544553e4\\\"],\\\"sizeBytes\\\":621648710},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae50e496bd6ae2d27298d997470b7cb0a426eeb8b7e2e9c7187a34cb03993998\\\"],\\\"sizeBytes\\\":589386806},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c6a4383333a1fd6d05c3f60ec793913f7937ee3d77f002d85e6c61e20507bf55\\\"],\\\"sizeBytes\\\":582154903},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c2dd7a03348212e49876f5359f233d893a541ed9b934df390201a05133a06982\\\"],\\\"sizeBytes\\\":558211175},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7af9f5c5af9d529840233ef4b519120cc0e3f14c4fe28cc43b0823f2c11d8f89\\\"],\\\"sizeBytes\\\":548752816},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e29dc9f042f2d0471171a0611070886cb2f7c57338ab7f112613417bcd33b278\\\"],\\\"sizeBytes\\\":529326739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b4c9cf268bb7abef7af187cd775d3f74d0bd33626250095428d53b705ee946\\\"],\\\"sizeBytes\\\":528956487},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5a971d5889f167cfe61a64c366424b87c17a6dc141ffcc43406cdcbb50cae2a\\\"],\\\"sizeBytes\\\":518384969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:59727c4b3fef19e5149675cf3350735bbfe2c6588a57654b2e4552dd719f58b1\\\"],\\\"sizeBytes\\\":517999161},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65\\\"],\\\"sizeBytes\\\":514984269},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bfe394b58ec6195de8b8420e781b7630d85a412b9112d892fea903f92b783427\\\"],\\\"sizeBytes\\\":513221333},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1f23bac0a2a6cfd638e4af679dc787a8790d99c391f6e2
ade8087dc477ff765e\\\"],\\\"sizeBytes\\\":512274055},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:77fff570657d2fa0bfb709b2c8b6665bae0bf90a2be981d8dbca56c674715098\\\"],\\\"sizeBytes\\\":511227324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adb9f6f2fd701863c7caed747df43f83d3569ba9388cfa33ea7219ac6a606b11\\\"],\\\"sizeBytes\\\":511164375},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c032f87ae61d6f757ff3ce52620a70a43516591987731f25da77aba152f17458\\\"],\\\"sizeBytes\\\":508888171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:812819a9d712b9e345ef5f1404b242c281e2518ad724baebc393ec0fd3b3d263\\\"],\\\"sizeBytes\\\":508544745},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:313d1d8ca85e65236a59f058a3316c49436dde691b3a3930d5bc5e3b4b8c8a71\\\"],\\\"sizeBytes\\\":507972093},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c527b4e8239a1f4f4e0a851113e7dd633b7dcb9d75b0e7b21c23d26304abcb3\\\"],\\\"sizeBytes\\\":506480167},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ef199844317b7b012879ed8d29f9b6bc37fad8a6fdb336103cbd5cabc74c4302\\\"],\\\"sizeBytes\\\":506395599},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7d4a034950346bcd4e36e9e2f1343e0cf7a10cf544963f33d09c7eb2a1bfc634\\\"],\\\"sizeBytes\\\":505345991},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1fbbcb390de2563a0177b92fba1b5a65777366e2dc80e2808b61d87c41b47a2d\\\"],\\\"sizeBytes\\\":505246690},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c983016b9ceed0fca1f51bd49c2653243c7e5af91cbf2f478b091db6e028252\\\"],\\\"sizeBytes\\\":504625081},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:712d334b7752d95580571059aae2c50e111d879af4fd8ea7cc3dbaf1a8e7dc69\\\"],\\\"sizeBytes\\\":495994673},{\\\"names\\\":[\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:4b5ea1ef4e09b673a0c68c8848ca162ab11d9ac373a377daa52dea702ffa3023\\\"],\\\"sizeBytes\\\":495065340},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:446bedea4916d3c1ee52be94137e484659e9561bd1de95c8189eee279aae984b\\\"],\\\"sizeBytes\\\":487096305},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a746a87b784ea1caa278fd0e012554f9df520b6fff665ea0bc4c83f487fed113\\\"],\\\"sizeBytes\\\":484450894},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c4d5a681595e428ff4b5083648c13615eed80be9084a3d3fc68a0295079cb12\\\"],\\\"sizeBytes\\\":484187929},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f933312f49083e8746fc41ab5e46a9a757b448374f14971e256ebcb36f11dd97\\\"],\\\"sizeBytes\\\":470826739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ea5c8a93f30e0a4932da5697d22c0da7eda9a7035c0555eb006b6755e62bb2fc\\\"],\\\"sizeBytes\\\":468265024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d12d0dc7eb86bbedf6b2d7689a28fd51f0d928f720e4a6783744304297c661ed\\\"],\\\"sizeBytes\\\":465090934},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9609c00207cc4db97f0fd6162eb429d7f81654137f020a677e30cba26a887a24\\\"],\\\"sizeBytes\\\":463705930},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:632e80bba5077068ecca05fddb95aedebad4493af6f36152c01c6ae490975b62\\\"],\\\"sizeBytes\\\":458126937},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bcb08821551e9a5b9f82aa794bcea673279cefb93cb47492e19ccac5e2cf18fe\\\"],\\\"sizeBytes\\\":456576198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:759fb1d5353dbbadd443f38631d977ca3aed9787b873be05cc9660532a252739\\\"],\\\"sizeBytes\\\":448828620},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3062f6485aec4770e60852b535c69a42527b305161fe856499c8658ead6d1e85\\\
"],\\\"sizeBytes\\\":448042136}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 20 08:37:52.471982 master-0 kubenswrapper[7476]: E0320 08:37:52.471898 7476 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="800ms" Mar 20 08:38:01.282532 master-0 kubenswrapper[7476]: E0320 08:38:01.282438 7476 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0" Mar 20 08:38:01.283609 master-0 kubenswrapper[7476]: E0320 08:38:01.282723 7476 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.021s" Mar 20 08:38:01.294136 master-0 kubenswrapper[7476]: I0320 08:38:01.294063 7476 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Mar 20 08:38:01.295835 master-0 kubenswrapper[7476]: E0320 08:38:01.295783 7476 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 20 08:38:02.813995 master-0 kubenswrapper[7476]: I0320 08:38:02.813868 7476 generic.go:334] "Generic (PLEG): container finished" podID="fec3170d-3f3e-42f5-b20a-da53721c0dac" containerID="606e62ca34e3d9e1001d8f531baa40a69abd238341d65870685ec9240a1791b0" exitCode=0 Mar 20 08:38:03.275444 master-0 kubenswrapper[7476]: E0320 08:38:03.275365 7476 controller.go:145] "Failed to ensure lease exists, 
will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="1.6s" Mar 20 08:38:04.533796 master-0 kubenswrapper[7476]: E0320 08:38:04.533579 7476 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{redhat-operators-xn4s4.189e7fc6e33b6b87 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-operators-xn4s4,UID:dd53f6c4-da30-4996-8b62-7dd1cd3a3e83,APIVersion:v1,ResourceVersion:7189,FieldPath:spec.initContainers{extract-content},},Reason:Pulled,Message:Successfully pulled image \"registry.redhat.io/redhat/redhat-operator-index:v4.18\" in 23.732s (23.732s including waiting). Image size: 1746912226 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 08:36:35.423128455 +0000 UTC m=+76.391897001,LastTimestamp:2026-03-20 08:36:35.423128455 +0000 UTC m=+76.391897001,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 20 08:38:10.130806 master-0 kubenswrapper[7476]: E0320 08:38:10.130697 7476 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="8.848s" Mar 20 08:38:10.131856 master-0 kubenswrapper[7476]: I0320 08:38:10.131478 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"cce21ae1-63de-49be-a027-084a101e650b","Type":"ContainerDied","Data":"08b76c47992e775acd809c6af275e2c7e9a0096419764ac5862de8d43565af46"} Mar 20 08:38:10.133955 master-0 kubenswrapper[7476]: I0320 08:38:10.133394 7476 scope.go:117] "RemoveContainer" 
containerID="c5f00c0d77211fa7340df0b5c9e4c67e0a0eeb68e81ac9de5effbf2d875c406e" Mar 20 08:38:10.133955 master-0 kubenswrapper[7476]: I0320 08:38:10.133571 7476 scope.go:117] "RemoveContainer" containerID="ae08cd7d4b99291a81168cf2f99395c5e971d107dc0502f7bea648e012bdeade" Mar 20 08:38:10.134236 master-0 kubenswrapper[7476]: I0320 08:38:10.134042 7476 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xn4s4" Mar 20 08:38:10.134236 master-0 kubenswrapper[7476]: I0320 08:38:10.134120 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-b865698dc-zxwrd" event={"ID":"09a5682c-4f13-4b8c-8179-3e6dfa8f98db","Type":"ContainerDied","Data":"38ba09231d63afd93a0205a5845a80e4d47fa8290768d886cf1c7ea448f682d8"} Mar 20 08:38:10.134236 master-0 kubenswrapper[7476]: I0320 08:38:10.134168 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"24b4ed170d527099878cb5fdd508a2fb","Type":"ContainerDied","Data":"dc65d45ac7f72e25688bf45d7e0e4891e399fd8f027d9c0a3d91c82a95e984c1"} Mar 20 08:38:10.134625 master-0 kubenswrapper[7476]: I0320 08:38:10.134425 7476 scope.go:117] "RemoveContainer" containerID="3fbcbabe96d1d538208df7fe6740297e7b936fd21409b810c6def759b3cb8301" Mar 20 08:38:10.134625 master-0 kubenswrapper[7476]: I0320 08:38:10.134592 7476 scope.go:117] "RemoveContainer" containerID="51d5ff19316ba50d65a137d07edaf8d44d3c66d7ea87669b610c77e6e7a5026d" Mar 20 08:38:10.140494 master-0 kubenswrapper[7476]: I0320 08:38:10.136646 7476 scope.go:117] "RemoveContainer" containerID="59a8653cd7835805f3353ca3030def7794cc3d5df739fff211964fc11ce38845" Mar 20 08:38:10.140494 master-0 kubenswrapper[7476]: I0320 08:38:10.136843 7476 scope.go:117] "RemoveContainer" containerID="d20a1459d97b6e06a1f2acdb938648d68b1fc12871ed4ca115c971b404c404f0" Mar 20 08:38:10.140494 master-0 kubenswrapper[7476]: I0320 08:38:10.138525 7476 scope.go:117] 
"RemoveContainer" containerID="38ba09231d63afd93a0205a5845a80e4d47fa8290768d886cf1c7ea448f682d8" Mar 20 08:38:10.165134 master-0 kubenswrapper[7476]: I0320 08:38:10.164860 7476 scope.go:117] "RemoveContainer" containerID="cecb02a386370afc24432a3402093f12ea9e53ed8df8b02259c918fbec5ca271" Mar 20 08:38:10.181220 master-0 kubenswrapper[7476]: I0320 08:38:10.181141 7476 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Mar 20 08:38:10.185180 master-0 kubenswrapper[7476]: I0320 08:38:10.185000 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bd846bfc4-x4w25" event={"ID":"9817d1ec-3d7c-49fb-8e41-26f5727ef9e8","Type":"ContainerDied","Data":"51d5ff19316ba50d65a137d07edaf8d44d3c66d7ea87669b610c77e6e7a5026d"} Mar 20 08:38:10.185180 master-0 kubenswrapper[7476]: I0320 08:38:10.185055 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-xwkzx" event={"ID":"2faf85a2-29bb-4275-a12b-0ef1663a4f0d","Type":"ContainerDied","Data":"ae08cd7d4b99291a81168cf2f99395c5e971d107dc0502f7bea648e012bdeade"} Mar 20 08:38:10.185180 master-0 kubenswrapper[7476]: I0320 08:38:10.185106 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wqxn7" Mar 20 08:38:10.185180 master-0 kubenswrapper[7476]: I0320 08:38:10.185127 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dq29v" event={"ID":"9d653bfa-7168-49fa-a838-aedb33c7e60f","Type":"ContainerDied","Data":"c5f00c0d77211fa7340df0b5c9e4c67e0a0eeb68e81ac9de5effbf2d875c406e"} Mar 20 08:38:10.185180 master-0 kubenswrapper[7476]: I0320 08:38:10.185144 7476 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 20 08:38:10.185180 master-0 kubenswrapper[7476]: I0320 
08:38:10.185155 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-ntdqc" event={"ID":"2ef691ec-d1f0-4c97-97e4-4aa7a6c0a86c","Type":"ContainerDied","Data":"3fbcbabe96d1d538208df7fe6740297e7b936fd21409b810c6def759b3cb8301"} Mar 20 08:38:10.185180 master-0 kubenswrapper[7476]: I0320 08:38:10.185170 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-vmwqt" event={"ID":"65157a9b-3df7-4cc1-a85a-a5dfa59921ad","Type":"ContainerDied","Data":"59a8653cd7835805f3353ca3030def7794cc3d5df739fff211964fc11ce38845"} Mar 20 08:38:10.185180 master-0 kubenswrapper[7476]: I0320 08:38:10.185185 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-kxxrs" event={"ID":"20ff930f-ec0d-40ed-a879-1546691f685d","Type":"ContainerDied","Data":"d20a1459d97b6e06a1f2acdb938648d68b1fc12871ed4ca115c971b404c404f0"} Mar 20 08:38:10.185180 master-0 kubenswrapper[7476]: I0320 08:38:10.185201 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-wbfrm" event={"ID":"71ca96e8-5108-455c-bb3c-17977d38e912","Type":"ContainerDied","Data":"546f50582d27b9704d91a180b620a54d25d194d6d958c834e126f15276d2a186"} Mar 20 08:38:10.186126 master-0 kubenswrapper[7476]: I0320 08:38:10.185216 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-tdpfq" event={"ID":"8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072","Type":"ContainerDied","Data":"d29e51560cf0f82adb12fe4ccbcc9c856b09e06e9a9c7dd6333b272f62625fb3"} Mar 20 08:38:10.186126 master-0 kubenswrapper[7476]: I0320 08:38:10.185231 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-tdpfq" event={"ID":"8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072","Type":"ContainerStarted","Data":"cbb3f75129c9d64cd795c59facd72277d5aa4e6c03360f86cd3b579cb2e915c3"} Mar 20 08:38:10.186126 master-0 kubenswrapper[7476]: I0320 08:38:10.185244 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"cce21ae1-63de-49be-a027-084a101e650b","Type":"ContainerDied","Data":"ca41c67c83bd762137f7fd4b62a8f992e4f4eaa7271546ffae17c37b0db5004e"} Mar 20 08:38:10.186126 master-0 kubenswrapper[7476]: I0320 08:38:10.185258 7476 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca41c67c83bd762137f7fd4b62a8f992e4f4eaa7271546ffae17c37b0db5004e" Mar 20 08:38:10.186126 master-0 kubenswrapper[7476]: I0320 08:38:10.185282 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"84b1b51a-cbfa-42de-9fb8-315e9cb76b58","Type":"ContainerDied","Data":"9195f1dfc14cd53890895128ba6b2082162a13670d2ec403d7a28c0918592666"} Mar 20 08:38:10.186126 master-0 kubenswrapper[7476]: I0320 08:38:10.185297 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerDied","Data":"cecb02a386370afc24432a3402093f12ea9e53ed8df8b02259c918fbec5ca271"} Mar 20 08:38:10.186126 master-0 kubenswrapper[7476]: I0320 08:38:10.185310 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"24b4ed170d527099878cb5fdd508a2fb","Type":"ContainerStarted","Data":"668a17a21ef830c75ee12517d74904eb699d91932778e6cfd270cf2b0075af6f"} Mar 20 08:38:10.186126 master-0 kubenswrapper[7476]: I0320 08:38:10.185324 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" 
event={"ID":"24b4ed170d527099878cb5fdd508a2fb","Type":"ContainerStarted","Data":"32f6d3c75c93ecf10d755e44791fef93b055aa12cdff83cca0ff4f63521e98b7"} Mar 20 08:38:10.186126 master-0 kubenswrapper[7476]: I0320 08:38:10.185334 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"24b4ed170d527099878cb5fdd508a2fb","Type":"ContainerStarted","Data":"61a779fd3c3897cc8dee022902187bf9b51c40a268a410f54bb84ceb49e71673"} Mar 20 08:38:10.186126 master-0 kubenswrapper[7476]: I0320 08:38:10.185355 7476 scope.go:117] "RemoveContainer" containerID="9ab09f622201f872e04a1e8a769261f4a46a4d60637dffa9e2a3458905508cd2" Mar 20 08:38:10.186126 master-0 kubenswrapper[7476]: I0320 08:38:10.185513 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"24b4ed170d527099878cb5fdd508a2fb","Type":"ContainerStarted","Data":"67caee0d33495fcfc194dc1da9f4a0c77009c74c70f8b4c698eb30cef8cc0052"} Mar 20 08:38:10.186126 master-0 kubenswrapper[7476]: I0320 08:38:10.185526 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"24b4ed170d527099878cb5fdd508a2fb","Type":"ContainerStarted","Data":"58caa7d15ae6082156585b7e036c779bf8e89c9934e0695d43cf5c702779563f"} Mar 20 08:38:10.186126 master-0 kubenswrapper[7476]: I0320 08:38:10.185538 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-7x9vq" event={"ID":"fec3170d-3f3e-42f5-b20a-da53721c0dac","Type":"ContainerDied","Data":"606e62ca34e3d9e1001d8f531baa40a69abd238341d65870685ec9240a1791b0"} Mar 20 08:38:10.187513 master-0 kubenswrapper[7476]: I0320 08:38:10.186775 7476 scope.go:117] "RemoveContainer" containerID="546f50582d27b9704d91a180b620a54d25d194d6d958c834e126f15276d2a186" Mar 20 08:38:10.187513 master-0 kubenswrapper[7476]: I0320 08:38:10.187253 7476 scope.go:117] "RemoveContainer" containerID="606e62ca34e3d9e1001d8f531baa40a69abd238341d65870685ec9240a1791b0" 
Mar 20 08:38:10.199504 master-0 kubenswrapper[7476]: I0320 08:38:10.198064 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xn4s4" Mar 20 08:38:10.231004 master-0 kubenswrapper[7476]: I0320 08:38:10.230959 7476 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/etcd-master-0-master-0"] Mar 20 08:38:10.231004 master-0 kubenswrapper[7476]: I0320 08:38:10.230998 7476 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-etcd/etcd-master-0-master-0" mirrorPodUID="ebff24bb-f729-4bd9-b8fb-9e43ce32a944" Mar 20 08:38:10.247382 master-0 kubenswrapper[7476]: I0320 08:38:10.239657 7476 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/etcd-master-0-master-0"] Mar 20 08:38:10.247382 master-0 kubenswrapper[7476]: I0320 08:38:10.239752 7476 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-etcd/etcd-master-0-master-0" mirrorPodUID="ebff24bb-f729-4bd9-b8fb-9e43ce32a944" Mar 20 08:38:10.247382 master-0 kubenswrapper[7476]: I0320 08:38:10.239782 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"] Mar 20 08:38:10.247382 master-0 kubenswrapper[7476]: I0320 08:38:10.240548 7476 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wqxn7" podStartSLOduration=85.276032928 podStartE2EDuration="2m1.240510566s" podCreationTimestamp="2026-03-20 08:36:09 +0000 UTC" firstStartedPulling="2026-03-20 08:36:11.721158873 +0000 UTC m=+52.689927389" lastFinishedPulling="2026-03-20 08:36:47.685636491 +0000 UTC m=+88.654405027" observedRunningTime="2026-03-20 08:38:10.19483933 +0000 UTC m=+171.163607856" watchObservedRunningTime="2026-03-20 08:38:10.240510566 +0000 UTC m=+171.209279102" Mar 20 08:38:10.284207 master-0 kubenswrapper[7476]: I0320 08:38:10.282821 7476 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-marketplace/certified-operators-gk7zl" podStartSLOduration=85.543938217 podStartE2EDuration="1m59.282795316s" podCreationTimestamp="2026-03-20 08:36:11 +0000 UTC" firstStartedPulling="2026-03-20 08:36:12.741575776 +0000 UTC m=+53.710344302" lastFinishedPulling="2026-03-20 08:36:46.480432865 +0000 UTC m=+87.449201401" observedRunningTime="2026-03-20 08:38:10.277704423 +0000 UTC m=+171.246472959" watchObservedRunningTime="2026-03-20 08:38:10.282795316 +0000 UTC m=+171.251563862" Mar 20 08:38:10.349091 master-0 kubenswrapper[7476]: I0320 08:38:10.349023 7476 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Mar 20 08:38:10.352165 master-0 kubenswrapper[7476]: I0320 08:38:10.352127 7476 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Mar 20 08:38:10.388340 master-0 kubenswrapper[7476]: I0320 08:38:10.386634 7476 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lkqww" podStartSLOduration=85.516075395 podStartE2EDuration="1m58.386617539s" podCreationTimestamp="2026-03-20 08:36:12 +0000 UTC" firstStartedPulling="2026-03-20 08:36:14.845899474 +0000 UTC m=+55.814667990" lastFinishedPulling="2026-03-20 08:36:47.716441608 +0000 UTC m=+88.685210134" observedRunningTime="2026-03-20 08:38:10.383844451 +0000 UTC m=+171.352612997" watchObservedRunningTime="2026-03-20 08:38:10.386617539 +0000 UTC m=+171.355386075" Mar 20 08:38:10.428602 master-0 kubenswrapper[7476]: I0320 08:38:10.428537 7476 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xn4s4" podStartSLOduration=85.640631455 podStartE2EDuration="2m0.428513239s" podCreationTimestamp="2026-03-20 08:36:10 +0000 UTC" firstStartedPulling="2026-03-20 08:36:11.690474138 +0000 UTC m=+52.659242664" lastFinishedPulling="2026-03-20 08:36:46.478355912 
+0000 UTC m=+87.447124448" observedRunningTime="2026-03-20 08:38:10.421620844 +0000 UTC m=+171.390389380" watchObservedRunningTime="2026-03-20 08:38:10.428513239 +0000 UTC m=+171.397281775" Mar 20 08:38:10.557385 master-0 kubenswrapper[7476]: I0320 08:38:10.552230 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0" Mar 20 08:38:10.589396 master-0 kubenswrapper[7476]: I0320 08:38:10.588971 7476 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Mar 20 08:38:10.590886 master-0 kubenswrapper[7476]: I0320 08:38:10.590815 7476 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Mar 20 08:38:10.880689 master-0 kubenswrapper[7476]: I0320 08:38:10.880648 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-b865698dc-zxwrd" event={"ID":"09a5682c-4f13-4b8c-8179-3e6dfa8f98db","Type":"ContainerStarted","Data":"1c8cedded5d5abf2eccbffe1fbbc3baf4454b4117f7fd84b851525034c732747"} Mar 20 08:38:10.886470 master-0 kubenswrapper[7476]: I0320 08:38:10.886384 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-wbfrm" event={"ID":"71ca96e8-5108-455c-bb3c-17977d38e912","Type":"ContainerStarted","Data":"f61b725a79fff556468b0126e41778d167b8a31ec8526a9c664ab434b3c33c45"} Mar 20 08:38:10.889186 master-0 kubenswrapper[7476]: I0320 08:38:10.889150 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-7x9vq" event={"ID":"fec3170d-3f3e-42f5-b20a-da53721c0dac","Type":"ContainerStarted","Data":"36c39e8f2f6bf69b1f66f4972d7671c5d3fca0023fda940a2b8538766e8e200d"} Mar 20 08:38:10.891543 master-0 kubenswrapper[7476]: I0320 08:38:10.891464 7476 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-network-operator_network-operator-7bd846bfc4-x4w25_9817d1ec-3d7c-49fb-8e41-26f5727ef9e8/network-operator/0.log" Mar 20 08:38:10.891808 master-0 kubenswrapper[7476]: I0320 08:38:10.891775 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bd846bfc4-x4w25" event={"ID":"9817d1ec-3d7c-49fb-8e41-26f5727ef9e8","Type":"ContainerStarted","Data":"07d5eea0c0cfb0e4a4276e2ddf85f3db59e86b2664aa6c609113a0a0c2df000a"} Mar 20 08:38:10.893065 master-0 kubenswrapper[7476]: I0320 08:38:10.893045 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-ntdqc" event={"ID":"2ef691ec-d1f0-4c97-97e4-4aa7a6c0a86c","Type":"ContainerStarted","Data":"eb83d7b52ee34a208a7d7d8320582445204a3a3c9a564d3c4ad584270b43c58c"} Mar 20 08:38:10.899080 master-0 kubenswrapper[7476]: I0320 08:38:10.899044 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"46f265536aba6292ead501bc9b49f327","Type":"ContainerStarted","Data":"1599b02e47e2ea84fbce4395522bc8e26c32b95a49f745d9bd324ecad71aaa11"} Mar 20 08:38:10.901549 master-0 kubenswrapper[7476]: I0320 08:38:10.901492 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-kxxrs" event={"ID":"20ff930f-ec0d-40ed-a879-1546691f685d","Type":"ContainerStarted","Data":"cf612c2c87dec99e0f687ab2c295fa164bf4e78d5be8c83e36f46fc0677adeb9"} Mar 20 08:38:10.906517 master-0 kubenswrapper[7476]: I0320 08:38:10.906384 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-vmwqt" event={"ID":"65157a9b-3df7-4cc1-a85a-a5dfa59921ad","Type":"ContainerStarted","Data":"faf09e106d65c4571e61ba7edd1e3e65e2581a35b5e358479da5c8fdd5be26ac"} Mar 20 08:38:10.909409 master-0 
kubenswrapper[7476]: I0320 08:38:10.909348 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"3ea52b89-46f9-4685-aecd-162ba92baaf5","Type":"ContainerStarted","Data":"5e7daf3466466f866a8a609c3357214ad22e67b72e11f87494389948c897e7d2"} Mar 20 08:38:10.909467 master-0 kubenswrapper[7476]: I0320 08:38:10.909408 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"3ea52b89-46f9-4685-aecd-162ba92baaf5","Type":"ContainerStarted","Data":"97ecc9dbe142a6967704accd994983e2161bceb749ddbf66e1756c81c1a78964"} Mar 20 08:38:10.912467 master-0 kubenswrapper[7476]: I0320 08:38:10.912377 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-xwkzx" event={"ID":"2faf85a2-29bb-4275-a12b-0ef1663a4f0d","Type":"ContainerStarted","Data":"9b538e53e002b24081578246c7d675b101b228304a8e87c5077457c1455c343d"} Mar 20 08:38:10.915976 master-0 kubenswrapper[7476]: I0320 08:38:10.915934 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-dq29v_9d653bfa-7168-49fa-a838-aedb33c7e60f/approver/0.log" Mar 20 08:38:10.916354 master-0 kubenswrapper[7476]: I0320 08:38:10.916322 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dq29v" event={"ID":"9d653bfa-7168-49fa-a838-aedb33c7e60f","Type":"ContainerStarted","Data":"4306eaa225527d3607228fe5a76b2f9df384e1155f171d8c00c7646ffafef9a4"} Mar 20 08:38:11.153404 master-0 kubenswrapper[7476]: I0320 08:38:11.153183 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_84b1b51a-cbfa-42de-9fb8-315e9cb76b58/installer/0.log" Mar 20 08:38:11.153404 master-0 kubenswrapper[7476]: I0320 08:38:11.153285 7476 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0" Mar 20 08:38:11.205810 master-0 kubenswrapper[7476]: I0320 08:38:11.205739 7476 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-2-master-0" podStartSLOduration=110.205719972 podStartE2EDuration="1m50.205719972s" podCreationTimestamp="2026-03-20 08:36:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:38:11.202650865 +0000 UTC m=+172.171419391" watchObservedRunningTime="2026-03-20 08:38:11.205719972 +0000 UTC m=+172.174488498" Mar 20 08:38:11.220306 master-0 kubenswrapper[7476]: I0320 08:38:11.220077 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/84b1b51a-cbfa-42de-9fb8-315e9cb76b58-kubelet-dir\") pod \"84b1b51a-cbfa-42de-9fb8-315e9cb76b58\" (UID: \"84b1b51a-cbfa-42de-9fb8-315e9cb76b58\") " Mar 20 08:38:11.220306 master-0 kubenswrapper[7476]: I0320 08:38:11.220150 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/84b1b51a-cbfa-42de-9fb8-315e9cb76b58-kube-api-access\") pod \"84b1b51a-cbfa-42de-9fb8-315e9cb76b58\" (UID: \"84b1b51a-cbfa-42de-9fb8-315e9cb76b58\") " Mar 20 08:38:11.220306 master-0 kubenswrapper[7476]: I0320 08:38:11.220177 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/84b1b51a-cbfa-42de-9fb8-315e9cb76b58-var-lock\") pod \"84b1b51a-cbfa-42de-9fb8-315e9cb76b58\" (UID: \"84b1b51a-cbfa-42de-9fb8-315e9cb76b58\") " Mar 20 08:38:11.220553 master-0 kubenswrapper[7476]: I0320 08:38:11.220458 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84b1b51a-cbfa-42de-9fb8-315e9cb76b58-var-lock" 
(OuterVolumeSpecName: "var-lock") pod "84b1b51a-cbfa-42de-9fb8-315e9cb76b58" (UID: "84b1b51a-cbfa-42de-9fb8-315e9cb76b58"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:38:11.221296 master-0 kubenswrapper[7476]: I0320 08:38:11.221229 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84b1b51a-cbfa-42de-9fb8-315e9cb76b58-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "84b1b51a-cbfa-42de-9fb8-315e9cb76b58" (UID: "84b1b51a-cbfa-42de-9fb8-315e9cb76b58"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:38:11.230748 master-0 kubenswrapper[7476]: I0320 08:38:11.230706 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84b1b51a-cbfa-42de-9fb8-315e9cb76b58-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "84b1b51a-cbfa-42de-9fb8-315e9cb76b58" (UID: "84b1b51a-cbfa-42de-9fb8-315e9cb76b58"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:38:11.251112 master-0 kubenswrapper[7476]: I0320 08:38:11.251051 7476 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5aaa6cb1-ce0d-4cd5-a6dc-b27e9f6b13ad" path="/var/lib/kubelet/pods/5aaa6cb1-ce0d-4cd5-a6dc-b27e9f6b13ad/volumes" Mar 20 08:38:11.251610 master-0 kubenswrapper[7476]: I0320 08:38:11.251585 7476 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c80fea3f-9ac4-4060-bb90-19f9de724299" path="/var/lib/kubelet/pods/c80fea3f-9ac4-4060-bb90-19f9de724299/volumes" Mar 20 08:38:11.298287 master-0 kubenswrapper[7476]: E0320 08:38:11.297992 7476 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 20 08:38:11.330334 master-0 kubenswrapper[7476]: I0320 08:38:11.329935 7476 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/84b1b51a-cbfa-42de-9fb8-315e9cb76b58-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 20 08:38:11.330334 master-0 kubenswrapper[7476]: I0320 08:38:11.330009 7476 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/84b1b51a-cbfa-42de-9fb8-315e9cb76b58-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 20 08:38:11.330334 master-0 kubenswrapper[7476]: I0320 08:38:11.330032 7476 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/84b1b51a-cbfa-42de-9fb8-315e9cb76b58-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 20 08:38:11.469073 master-0 kubenswrapper[7476]: I0320 08:38:11.468882 7476 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wqxn7"] Mar 20 08:38:11.469373 master-0 
kubenswrapper[7476]: I0320 08:38:11.469106 7476 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wqxn7" podUID="ccb242ff-347a-4b02-8d9e-ba4dd62a5052" containerName="registry-server" containerID="cri-o://533ca83f2f1cbe90843aea19e67a25f7a7f9cb27edbc66c29caae3aa94a291f5" gracePeriod=2 Mar 20 08:38:11.477330 master-0 kubenswrapper[7476]: I0320 08:38:11.477231 7476 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lkqww"] Mar 20 08:38:11.477703 master-0 kubenswrapper[7476]: I0320 08:38:11.477642 7476 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-lkqww" podUID="2b557b11-593d-4886-a9e3-ac4d18f901aa" containerName="registry-server" containerID="cri-o://eb8676cf9ada7cfd2023b1d3e8aade0f49b7cd5e26fda65bc3000bbdf4f17e73" gracePeriod=2 Mar 20 08:38:11.924951 master-0 kubenswrapper[7476]: I0320 08:38:11.924894 7476 generic.go:334] "Generic (PLEG): container finished" podID="ccb242ff-347a-4b02-8d9e-ba4dd62a5052" containerID="533ca83f2f1cbe90843aea19e67a25f7a7f9cb27edbc66c29caae3aa94a291f5" exitCode=0 Mar 20 08:38:11.925112 master-0 kubenswrapper[7476]: I0320 08:38:11.924976 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wqxn7" event={"ID":"ccb242ff-347a-4b02-8d9e-ba4dd62a5052","Type":"ContainerDied","Data":"533ca83f2f1cbe90843aea19e67a25f7a7f9cb27edbc66c29caae3aa94a291f5"} Mar 20 08:38:11.925112 master-0 kubenswrapper[7476]: I0320 08:38:11.925017 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wqxn7" event={"ID":"ccb242ff-347a-4b02-8d9e-ba4dd62a5052","Type":"ContainerDied","Data":"a8d37d40354efe71efcd0dcc6439359fea82c7340e68c0fdbc32fd28b903eadc"} Mar 20 08:38:11.925112 master-0 kubenswrapper[7476]: I0320 08:38:11.925032 7476 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="a8d37d40354efe71efcd0dcc6439359fea82c7340e68c0fdbc32fd28b903eadc" Mar 20 08:38:11.927169 master-0 kubenswrapper[7476]: I0320 08:38:11.927124 7476 generic.go:334] "Generic (PLEG): container finished" podID="2b557b11-593d-4886-a9e3-ac4d18f901aa" containerID="eb8676cf9ada7cfd2023b1d3e8aade0f49b7cd5e26fda65bc3000bbdf4f17e73" exitCode=0 Mar 20 08:38:11.927287 master-0 kubenswrapper[7476]: I0320 08:38:11.927173 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lkqww" event={"ID":"2b557b11-593d-4886-a9e3-ac4d18f901aa","Type":"ContainerDied","Data":"eb8676cf9ada7cfd2023b1d3e8aade0f49b7cd5e26fda65bc3000bbdf4f17e73"} Mar 20 08:38:11.928548 master-0 kubenswrapper[7476]: I0320 08:38:11.928509 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_84b1b51a-cbfa-42de-9fb8-315e9cb76b58/installer/0.log" Mar 20 08:38:11.929305 master-0 kubenswrapper[7476]: I0320 08:38:11.928908 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"84b1b51a-cbfa-42de-9fb8-315e9cb76b58","Type":"ContainerDied","Data":"84e96bf2ec3bb1718be1185663e6c7f2bf6b412dc2a929eaafb13184c995f8ec"} Mar 20 08:38:11.929305 master-0 kubenswrapper[7476]: I0320 08:38:11.928996 7476 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84e96bf2ec3bb1718be1185663e6c7f2bf6b412dc2a929eaafb13184c995f8ec" Mar 20 08:38:11.929305 master-0 kubenswrapper[7476]: I0320 08:38:11.929151 7476 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0" Mar 20 08:38:11.949892 master-0 kubenswrapper[7476]: I0320 08:38:11.949826 7476 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 20 08:38:11.953425 master-0 kubenswrapper[7476]: I0320 08:38:11.953373 7476 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wqxn7" Mar 20 08:38:11.954294 master-0 kubenswrapper[7476]: I0320 08:38:11.954227 7476 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 20 08:38:12.004419 master-0 kubenswrapper[7476]: I0320 08:38:12.004374 7476 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lkqww" Mar 20 08:38:12.039713 master-0 kubenswrapper[7476]: I0320 08:38:12.039682 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccb242ff-347a-4b02-8d9e-ba4dd62a5052-utilities\") pod \"ccb242ff-347a-4b02-8d9e-ba4dd62a5052\" (UID: \"ccb242ff-347a-4b02-8d9e-ba4dd62a5052\") " Mar 20 08:38:12.040053 master-0 kubenswrapper[7476]: I0320 08:38:12.040037 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8cds\" (UniqueName: \"kubernetes.io/projected/ccb242ff-347a-4b02-8d9e-ba4dd62a5052-kube-api-access-m8cds\") pod \"ccb242ff-347a-4b02-8d9e-ba4dd62a5052\" (UID: \"ccb242ff-347a-4b02-8d9e-ba4dd62a5052\") " Mar 20 08:38:12.040181 master-0 kubenswrapper[7476]: I0320 08:38:12.040168 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccb242ff-347a-4b02-8d9e-ba4dd62a5052-catalog-content\") pod \"ccb242ff-347a-4b02-8d9e-ba4dd62a5052\" (UID: 
\"ccb242ff-347a-4b02-8d9e-ba4dd62a5052\") " Mar 20 08:38:12.040910 master-0 kubenswrapper[7476]: I0320 08:38:12.040863 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ccb242ff-347a-4b02-8d9e-ba4dd62a5052-utilities" (OuterVolumeSpecName: "utilities") pod "ccb242ff-347a-4b02-8d9e-ba4dd62a5052" (UID: "ccb242ff-347a-4b02-8d9e-ba4dd62a5052"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:38:12.041324 master-0 kubenswrapper[7476]: I0320 08:38:12.041300 7476 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccb242ff-347a-4b02-8d9e-ba4dd62a5052-utilities\") on node \"master-0\" DevicePath \"\"" Mar 20 08:38:12.043484 master-0 kubenswrapper[7476]: I0320 08:38:12.043448 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccb242ff-347a-4b02-8d9e-ba4dd62a5052-kube-api-access-m8cds" (OuterVolumeSpecName: "kube-api-access-m8cds") pod "ccb242ff-347a-4b02-8d9e-ba4dd62a5052" (UID: "ccb242ff-347a-4b02-8d9e-ba4dd62a5052"). InnerVolumeSpecName "kube-api-access-m8cds". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:38:12.066498 master-0 kubenswrapper[7476]: I0320 08:38:12.066383 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ccb242ff-347a-4b02-8d9e-ba4dd62a5052-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ccb242ff-347a-4b02-8d9e-ba4dd62a5052" (UID: "ccb242ff-347a-4b02-8d9e-ba4dd62a5052"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:38:12.142727 master-0 kubenswrapper[7476]: I0320 08:38:12.142664 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b557b11-593d-4886-a9e3-ac4d18f901aa-utilities\") pod \"2b557b11-593d-4886-a9e3-ac4d18f901aa\" (UID: \"2b557b11-593d-4886-a9e3-ac4d18f901aa\") " Mar 20 08:38:12.142932 master-0 kubenswrapper[7476]: I0320 08:38:12.142753 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghf64\" (UniqueName: \"kubernetes.io/projected/2b557b11-593d-4886-a9e3-ac4d18f901aa-kube-api-access-ghf64\") pod \"2b557b11-593d-4886-a9e3-ac4d18f901aa\" (UID: \"2b557b11-593d-4886-a9e3-ac4d18f901aa\") " Mar 20 08:38:12.142932 master-0 kubenswrapper[7476]: I0320 08:38:12.142893 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b557b11-593d-4886-a9e3-ac4d18f901aa-catalog-content\") pod \"2b557b11-593d-4886-a9e3-ac4d18f901aa\" (UID: \"2b557b11-593d-4886-a9e3-ac4d18f901aa\") " Mar 20 08:38:12.143178 master-0 kubenswrapper[7476]: I0320 08:38:12.143153 7476 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8cds\" (UniqueName: \"kubernetes.io/projected/ccb242ff-347a-4b02-8d9e-ba4dd62a5052-kube-api-access-m8cds\") on node \"master-0\" DevicePath \"\"" Mar 20 08:38:12.143178 master-0 kubenswrapper[7476]: I0320 08:38:12.143177 7476 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccb242ff-347a-4b02-8d9e-ba4dd62a5052-catalog-content\") on node \"master-0\" DevicePath \"\"" Mar 20 08:38:12.144101 master-0 kubenswrapper[7476]: I0320 08:38:12.144025 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b557b11-593d-4886-a9e3-ac4d18f901aa-utilities" (OuterVolumeSpecName: 
"utilities") pod "2b557b11-593d-4886-a9e3-ac4d18f901aa" (UID: "2b557b11-593d-4886-a9e3-ac4d18f901aa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:38:12.146323 master-0 kubenswrapper[7476]: I0320 08:38:12.146234 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b557b11-593d-4886-a9e3-ac4d18f901aa-kube-api-access-ghf64" (OuterVolumeSpecName: "kube-api-access-ghf64") pod "2b557b11-593d-4886-a9e3-ac4d18f901aa" (UID: "2b557b11-593d-4886-a9e3-ac4d18f901aa"). InnerVolumeSpecName "kube-api-access-ghf64". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:38:12.184617 master-0 kubenswrapper[7476]: I0320 08:38:12.184586 7476 patch_prober.go:28] interesting pod/authentication-operator-5885bfd7f4-tdpfq container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.11:8443/healthz\": dial tcp 10.128.0.11:8443: connect: connection refused" start-of-body= Mar 20 08:38:12.185043 master-0 kubenswrapper[7476]: I0320 08:38:12.185015 7476 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-tdpfq" podUID="8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.11:8443/healthz\": dial tcp 10.128.0.11:8443: connect: connection refused" Mar 20 08:38:12.203977 master-0 kubenswrapper[7476]: I0320 08:38:12.203894 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b557b11-593d-4886-a9e3-ac4d18f901aa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2b557b11-593d-4886-a9e3-ac4d18f901aa" (UID: "2b557b11-593d-4886-a9e3-ac4d18f901aa"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:38:12.244871 master-0 kubenswrapper[7476]: I0320 08:38:12.244845 7476 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b557b11-593d-4886-a9e3-ac4d18f901aa-catalog-content\") on node \"master-0\" DevicePath \"\"" Mar 20 08:38:12.245089 master-0 kubenswrapper[7476]: I0320 08:38:12.245034 7476 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b557b11-593d-4886-a9e3-ac4d18f901aa-utilities\") on node \"master-0\" DevicePath \"\"" Mar 20 08:38:12.245208 master-0 kubenswrapper[7476]: I0320 08:38:12.245194 7476 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ghf64\" (UniqueName: \"kubernetes.io/projected/2b557b11-593d-4886-a9e3-ac4d18f901aa-kube-api-access-ghf64\") on node \"master-0\" DevicePath \"\"" Mar 20 08:38:12.937917 master-0 kubenswrapper[7476]: I0320 08:38:12.937830 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lkqww" event={"ID":"2b557b11-593d-4886-a9e3-ac4d18f901aa","Type":"ContainerDied","Data":"3a9ea316413890b86eda70cb3bad759bee7fb3d758edade9e33048a017412911"} Mar 20 08:38:12.938375 master-0 kubenswrapper[7476]: I0320 08:38:12.937950 7476 scope.go:117] "RemoveContainer" containerID="eb8676cf9ada7cfd2023b1d3e8aade0f49b7cd5e26fda65bc3000bbdf4f17e73" Mar 20 08:38:12.938375 master-0 kubenswrapper[7476]: I0320 08:38:12.937986 7476 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wqxn7" Mar 20 08:38:12.938375 master-0 kubenswrapper[7476]: I0320 08:38:12.937953 7476 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lkqww" Mar 20 08:38:12.938612 master-0 kubenswrapper[7476]: I0320 08:38:12.937986 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 20 08:38:12.964162 master-0 kubenswrapper[7476]: I0320 08:38:12.964093 7476 scope.go:117] "RemoveContainer" containerID="ee2a491741aaab17c9397840a0a9333d49e8c0024dc7d2841f844485424c0ff4" Mar 20 08:38:12.988350 master-0 kubenswrapper[7476]: I0320 08:38:12.988299 7476 scope.go:117] "RemoveContainer" containerID="85a3b86ab4f57de1c35c94769c8b9923fb92d6e2e095f2cfa081de97d22d6a77" Mar 20 08:38:13.022426 master-0 kubenswrapper[7476]: I0320 08:38:13.022355 7476 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wqxn7"] Mar 20 08:38:13.027122 master-0 kubenswrapper[7476]: I0320 08:38:13.026939 7476 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wqxn7"] Mar 20 08:38:13.042951 master-0 kubenswrapper[7476]: I0320 08:38:13.042877 7476 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lkqww"] Mar 20 08:38:13.053642 master-0 kubenswrapper[7476]: I0320 08:38:13.047477 7476 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lkqww"] Mar 20 08:38:13.246935 master-0 kubenswrapper[7476]: I0320 08:38:13.246750 7476 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b557b11-593d-4886-a9e3-ac4d18f901aa" path="/var/lib/kubelet/pods/2b557b11-593d-4886-a9e3-ac4d18f901aa/volumes" Mar 20 08:38:13.247991 master-0 kubenswrapper[7476]: I0320 08:38:13.247938 7476 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ccb242ff-347a-4b02-8d9e-ba4dd62a5052" path="/var/lib/kubelet/pods/ccb242ff-347a-4b02-8d9e-ba4dd62a5052/volumes" Mar 20 08:38:13.955351 master-0 kubenswrapper[7476]: I0320 
08:38:13.955166 7476 generic.go:334] "Generic (PLEG): container finished" podID="23003a2f-2053-47cc-8133-23eb886d4da0" containerID="cc3c2a9c1f06758b9cf8e7a0bffe7eec7cabce777c5e4901ed4f712103ea4ff6" exitCode=0 Mar 20 08:38:13.955351 master-0 kubenswrapper[7476]: I0320 08:38:13.955312 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-89ccd998f-j84r8" event={"ID":"23003a2f-2053-47cc-8133-23eb886d4da0","Type":"ContainerDied","Data":"cc3c2a9c1f06758b9cf8e7a0bffe7eec7cabce777c5e4901ed4f712103ea4ff6"} Mar 20 08:38:13.957396 master-0 kubenswrapper[7476]: I0320 08:38:13.957329 7476 scope.go:117] "RemoveContainer" containerID="cc3c2a9c1f06758b9cf8e7a0bffe7eec7cabce777c5e4901ed4f712103ea4ff6" Mar 20 08:38:14.877794 master-0 kubenswrapper[7476]: E0320 08:38:14.877714 7476 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Mar 20 08:38:14.967641 master-0 kubenswrapper[7476]: I0320 08:38:14.967389 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-89ccd998f-j84r8" event={"ID":"23003a2f-2053-47cc-8133-23eb886d4da0","Type":"ContainerStarted","Data":"a76cae891ac1ac170b3b4bc00acda8e3f7397c5dae09b35ed265abb8477e72cb"} Mar 20 08:38:14.968032 master-0 kubenswrapper[7476]: I0320 08:38:14.967983 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-89ccd998f-j84r8" Mar 20 08:38:14.970184 master-0 kubenswrapper[7476]: I0320 08:38:14.970134 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-89ccd998f-j84r8" Mar 20 08:38:15.552090 master-0 kubenswrapper[7476]: I0320 08:38:15.551989 7476 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0" Mar 20 08:38:15.577426 master-0 kubenswrapper[7476]: I0320 08:38:15.577369 7476 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0" Mar 20 08:38:17.962372 master-0 kubenswrapper[7476]: I0320 08:38:17.962243 7476 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0"] Mar 20 08:38:17.993997 master-0 kubenswrapper[7476]: I0320 08:38:17.993964 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8c94f4649-w8c24_61ab4d32-c732-4be5-aa85-a2e1dd21cb60/openshift-controller-manager-operator/1.log" Mar 20 08:38:17.995767 master-0 kubenswrapper[7476]: I0320 08:38:17.995715 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8c94f4649-w8c24_61ab4d32-c732-4be5-aa85-a2e1dd21cb60/openshift-controller-manager-operator/0.log" Mar 20 08:38:17.995852 master-0 kubenswrapper[7476]: I0320 08:38:17.995799 7476 generic.go:334] "Generic (PLEG): container finished" podID="61ab4d32-c732-4be5-aa85-a2e1dd21cb60" containerID="574f438252b4f47fa3b61032cc6a4a935112d82ebdef8b14155e36ebb82ca9af" exitCode=255 Mar 20 08:38:17.995967 master-0 kubenswrapper[7476]: I0320 08:38:17.995931 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-w8c24" event={"ID":"61ab4d32-c732-4be5-aa85-a2e1dd21cb60","Type":"ContainerDied","Data":"574f438252b4f47fa3b61032cc6a4a935112d82ebdef8b14155e36ebb82ca9af"} Mar 20 08:38:17.996078 master-0 kubenswrapper[7476]: I0320 08:38:17.996056 7476 scope.go:117] "RemoveContainer" containerID="254f8acc157dece685517f93e40a5d981d3cb093e1a077345ec886e180445eaa" Mar 20 08:38:17.996923 master-0 kubenswrapper[7476]: I0320 08:38:17.996906 7476 scope.go:117] 
"RemoveContainer" containerID="574f438252b4f47fa3b61032cc6a4a935112d82ebdef8b14155e36ebb82ca9af" Mar 20 08:38:17.997252 master-0 kubenswrapper[7476]: E0320 08:38:17.997226 7476 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-controller-manager-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=openshift-controller-manager-operator pod=openshift-controller-manager-operator-8c94f4649-w8c24_openshift-controller-manager-operator(61ab4d32-c732-4be5-aa85-a2e1dd21cb60)\"" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-w8c24" podUID="61ab4d32-c732-4be5-aa85-a2e1dd21cb60" Mar 20 08:38:18.019456 master-0 kubenswrapper[7476]: E0320 08:38:18.019401 7476 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0\" already exists" pod="openshift-etcd/etcd-master-0" Mar 20 08:38:18.021572 master-0 kubenswrapper[7476]: I0320 08:38:18.021509 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0" Mar 20 08:38:18.052014 master-0 kubenswrapper[7476]: I0320 08:38:18.051897 7476 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0" podStartSLOduration=1.051876602 podStartE2EDuration="1.051876602s" podCreationTimestamp="2026-03-20 08:38:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:38:18.046688005 +0000 UTC m=+179.015456571" watchObservedRunningTime="2026-03-20 08:38:18.051876602 +0000 UTC m=+179.020645138" Mar 20 08:38:19.004412 master-0 kubenswrapper[7476]: I0320 08:38:19.004297 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8c94f4649-w8c24_61ab4d32-c732-4be5-aa85-a2e1dd21cb60/openshift-controller-manager-operator/1.log" Mar 20 08:38:19.007530 
master-0 kubenswrapper[7476]: I0320 08:38:19.007497 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-dknxr_22f85e98-eb36-46b2-ab5d-7c21e060cba5/ingress-operator/0.log"
Mar 20 08:38:19.007597 master-0 kubenswrapper[7476]: I0320 08:38:19.007562 7476 generic.go:334] "Generic (PLEG): container finished" podID="22f85e98-eb36-46b2-ab5d-7c21e060cba5" containerID="e11ba7fccf3e3a03d9b7498dc0eb1bc10a9a5dcbb92c598146672eeafb4b1b79" exitCode=1
Mar 20 08:38:19.008485 master-0 kubenswrapper[7476]: I0320 08:38:19.008456 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-dknxr" event={"ID":"22f85e98-eb36-46b2-ab5d-7c21e060cba5","Type":"ContainerDied","Data":"e11ba7fccf3e3a03d9b7498dc0eb1bc10a9a5dcbb92c598146672eeafb4b1b79"}
Mar 20 08:38:19.008792 master-0 kubenswrapper[7476]: I0320 08:38:19.008767 7476 scope.go:117] "RemoveContainer" containerID="e11ba7fccf3e3a03d9b7498dc0eb1bc10a9a5dcbb92c598146672eeafb4b1b79"
Mar 20 08:38:20.014950 master-0 kubenswrapper[7476]: I0320 08:38:20.014902 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-dknxr_22f85e98-eb36-46b2-ab5d-7c21e060cba5/ingress-operator/0.log"
Mar 20 08:38:20.015617 master-0 kubenswrapper[7476]: I0320 08:38:20.014985 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-dknxr" event={"ID":"22f85e98-eb36-46b2-ab5d-7c21e060cba5","Type":"ContainerStarted","Data":"aa474287c12f1d850a021861c8d0d4c93567ca052c1e7dd4ff6b75e56d25a823"}
Mar 20 08:38:22.030121 master-0 kubenswrapper[7476]: I0320 08:38:22.030092 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-b25f2_f202273a-b111-46ce-b404-7e481d2c7ff9/cluster-baremetal-operator/0.log"
Mar 20 08:38:22.030652 master-0 kubenswrapper[7476]: I0320 08:38:22.030630 7476 generic.go:334] "Generic (PLEG): container finished" podID="f202273a-b111-46ce-b404-7e481d2c7ff9" containerID="8c083804959a88c9c849b428e0b936db72af00ecf148631a285d481d8c54097f" exitCode=1
Mar 20 08:38:22.030758 master-0 kubenswrapper[7476]: I0320 08:38:22.030741 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-b25f2" event={"ID":"f202273a-b111-46ce-b404-7e481d2c7ff9","Type":"ContainerDied","Data":"8c083804959a88c9c849b428e0b936db72af00ecf148631a285d481d8c54097f"}
Mar 20 08:38:22.031229 master-0 kubenswrapper[7476]: I0320 08:38:22.031191 7476 scope.go:117] "RemoveContainer" containerID="8c083804959a88c9c849b428e0b936db72af00ecf148631a285d481d8c54097f"
Mar 20 08:38:22.042473 master-0 kubenswrapper[7476]: I0320 08:38:22.042420 7476 generic.go:334] "Generic (PLEG): container finished" podID="210dd7f0-d1c0-407a-b89b-f11ef605e5df" containerID="536065a4d8759d271003b36465db4bd4965a5a320e8baa9df238dec6c8adc25f" exitCode=0
Mar 20 08:38:22.042473 master-0 kubenswrapper[7476]: I0320 08:38:22.042470 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-crrdk" event={"ID":"210dd7f0-d1c0-407a-b89b-f11ef605e5df","Type":"ContainerDied","Data":"536065a4d8759d271003b36465db4bd4965a5a320e8baa9df238dec6c8adc25f"}
Mar 20 08:38:22.043095 master-0 kubenswrapper[7476]: I0320 08:38:22.043058 7476 scope.go:117] "RemoveContainer" containerID="536065a4d8759d271003b36465db4bd4965a5a320e8baa9df238dec6c8adc25f"
Mar 20 08:38:23.059358 master-0 kubenswrapper[7476]: I0320 08:38:23.059237 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-crrdk" event={"ID":"210dd7f0-d1c0-407a-b89b-f11ef605e5df","Type":"ContainerStarted","Data":"80eb123c688aa3fa3410485be400247180c54ec6ea64ffab5e44c11edb58320f"}
Mar 20 08:38:23.088493 master-0 kubenswrapper[7476]: I0320 08:38:23.085132 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-57777556ff-rnnfz_bb7b640f-22be-41a9-8ab2-e7ae817e2eb0/manager/0.log"
Mar 20 08:38:23.088493 master-0 kubenswrapper[7476]: I0320 08:38:23.085198 7476 generic.go:334] "Generic (PLEG): container finished" podID="bb7b640f-22be-41a9-8ab2-e7ae817e2eb0" containerID="853a1945138b3e0ff5252845780fd6a6c7275529314ebd23a219d848ce919728" exitCode=1
Mar 20 08:38:23.088493 master-0 kubenswrapper[7476]: I0320 08:38:23.085285 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-rnnfz" event={"ID":"bb7b640f-22be-41a9-8ab2-e7ae817e2eb0","Type":"ContainerDied","Data":"853a1945138b3e0ff5252845780fd6a6c7275529314ebd23a219d848ce919728"}
Mar 20 08:38:23.088493 master-0 kubenswrapper[7476]: I0320 08:38:23.085843 7476 scope.go:117] "RemoveContainer" containerID="853a1945138b3e0ff5252845780fd6a6c7275529314ebd23a219d848ce919728"
Mar 20 08:38:23.094172 master-0 kubenswrapper[7476]: I0320 08:38:23.093561 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-b25f2_f202273a-b111-46ce-b404-7e481d2c7ff9/cluster-baremetal-operator/0.log"
Mar 20 08:38:23.094172 master-0 kubenswrapper[7476]: I0320 08:38:23.093650 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-b25f2" event={"ID":"f202273a-b111-46ce-b404-7e481d2c7ff9","Type":"ContainerStarted","Data":"4dddcd2acd552341d271a22bee7c13f8c1ffb72ebb5cc9441b8049d3b6ece1f7"}
Mar 20 08:38:23.487981 master-0 kubenswrapper[7476]: I0320 08:38:23.487909 7476 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-rnnfz"
Mar 20 08:38:23.487981 master-0 kubenswrapper[7476]: I0320 08:38:23.487960 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-rnnfz"
Mar 20 08:38:24.107661 master-0 kubenswrapper[7476]: I0320 08:38:24.107593 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-57777556ff-rnnfz_bb7b640f-22be-41a9-8ab2-e7ae817e2eb0/manager/0.log"
Mar 20 08:38:24.108534 master-0 kubenswrapper[7476]: I0320 08:38:24.107693 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-rnnfz" event={"ID":"bb7b640f-22be-41a9-8ab2-e7ae817e2eb0","Type":"ContainerStarted","Data":"fdbecd46c29424d901b7160c849f7507fe2bca8ead0c17c0f2a34bfa2349bd5b"}
Mar 20 08:38:24.108534 master-0 kubenswrapper[7476]: I0320 08:38:24.107970 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-rnnfz"
Mar 20 08:38:24.112471 master-0 kubenswrapper[7476]: I0320 08:38:24.112421 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-6864dc98f7-tf2gj_08d9196b-b68f-421b-8754-bfbaa4020a97/manager/0.log"
Mar 20 08:38:24.113451 master-0 kubenswrapper[7476]: I0320 08:38:24.113351 7476 generic.go:334] "Generic (PLEG): container finished" podID="08d9196b-b68f-421b-8754-bfbaa4020a97" containerID="2c241c5cc1bda01a54d125786cac6f467e2e7cd45da3764b80c745165babdd10" exitCode=1
Mar 20 08:38:24.113587 master-0 kubenswrapper[7476]: I0320 08:38:24.113456 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-tf2gj" event={"ID":"08d9196b-b68f-421b-8754-bfbaa4020a97","Type":"ContainerDied","Data":"2c241c5cc1bda01a54d125786cac6f467e2e7cd45da3764b80c745165babdd10"}
Mar 20 08:38:24.114181 master-0 kubenswrapper[7476]: I0320 08:38:24.114118 7476 scope.go:117] "RemoveContainer" containerID="2c241c5cc1bda01a54d125786cac6f467e2e7cd45da3764b80c745165babdd10"
Mar 20 08:38:25.125054 master-0 kubenswrapper[7476]: I0320 08:38:25.125004 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-6864dc98f7-tf2gj_08d9196b-b68f-421b-8754-bfbaa4020a97/manager/0.log"
Mar 20 08:38:25.126778 master-0 kubenswrapper[7476]: I0320 08:38:25.126726 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-tf2gj" event={"ID":"08d9196b-b68f-421b-8754-bfbaa4020a97","Type":"ContainerStarted","Data":"ce2fcc1081bfcbeb7f4d07807c1a93a611637f696cdc2c93642a97a10714d449"}
Mar 20 08:38:25.127548 master-0 kubenswrapper[7476]: I0320 08:38:25.127497 7476 status_manager.go:317] "Container readiness changed for unknown container" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-tf2gj" containerID="cri-o://2c241c5cc1bda01a54d125786cac6f467e2e7cd45da3764b80c745165babdd10"
Mar 20 08:38:25.127712 master-0 kubenswrapper[7476]: I0320 08:38:25.127552 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-tf2gj"
Mar 20 08:38:25.160382 master-0 kubenswrapper[7476]: I0320 08:38:25.160249 7476 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xn4s4"]
Mar 20 08:38:25.160868 master-0 kubenswrapper[7476]: I0320 08:38:25.160681 7476 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xn4s4" podUID="dd53f6c4-da30-4996-8b62-7dd1cd3a3e83" containerName="registry-server" containerID="cri-o://05a576b0c597def785feb6536eb1af49c4fecbaf14a1c579b1a7e4fa454af566" gracePeriod=2
Mar 20 08:38:25.380028 master-0 kubenswrapper[7476]: I0320 08:38:25.373965 7476 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gk7zl"]
Mar 20 08:38:25.380028 master-0 kubenswrapper[7476]: I0320 08:38:25.374741 7476 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gk7zl" podUID="d524ce06-8969-4b68-b236-9e11af55d854" containerName="registry-server" containerID="cri-o://d8d830bd913980db126aefcd5a60bb50bffad937ac41bd21c7bf1bdd91159072" gracePeriod=2
Mar 20 08:38:25.603880 master-0 kubenswrapper[7476]: I0320 08:38:25.603835 7476 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xn4s4"
Mar 20 08:38:25.638954 master-0 kubenswrapper[7476]: I0320 08:38:25.638761 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd53f6c4-da30-4996-8b62-7dd1cd3a3e83-utilities\") pod \"dd53f6c4-da30-4996-8b62-7dd1cd3a3e83\" (UID: \"dd53f6c4-da30-4996-8b62-7dd1cd3a3e83\") "
Mar 20 08:38:25.639496 master-0 kubenswrapper[7476]: I0320 08:38:25.639470 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd53f6c4-da30-4996-8b62-7dd1cd3a3e83-catalog-content\") pod \"dd53f6c4-da30-4996-8b62-7dd1cd3a3e83\" (UID: \"dd53f6c4-da30-4996-8b62-7dd1cd3a3e83\") "
Mar 20 08:38:25.639806 master-0 kubenswrapper[7476]: I0320 08:38:25.639785 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wglt6\" (UniqueName: \"kubernetes.io/projected/dd53f6c4-da30-4996-8b62-7dd1cd3a3e83-kube-api-access-wglt6\") pod \"dd53f6c4-da30-4996-8b62-7dd1cd3a3e83\" (UID: \"dd53f6c4-da30-4996-8b62-7dd1cd3a3e83\") "
Mar 20 08:38:25.639956 master-0 kubenswrapper[7476]: I0320 08:38:25.639817 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd53f6c4-da30-4996-8b62-7dd1cd3a3e83-utilities" (OuterVolumeSpecName: "utilities") pod "dd53f6c4-da30-4996-8b62-7dd1cd3a3e83" (UID: "dd53f6c4-da30-4996-8b62-7dd1cd3a3e83"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 08:38:25.640313 master-0 kubenswrapper[7476]: I0320 08:38:25.640291 7476 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd53f6c4-da30-4996-8b62-7dd1cd3a3e83-utilities\") on node \"master-0\" DevicePath \"\""
Mar 20 08:38:25.652416 master-0 kubenswrapper[7476]: I0320 08:38:25.643285 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd53f6c4-da30-4996-8b62-7dd1cd3a3e83-kube-api-access-wglt6" (OuterVolumeSpecName: "kube-api-access-wglt6") pod "dd53f6c4-da30-4996-8b62-7dd1cd3a3e83" (UID: "dd53f6c4-da30-4996-8b62-7dd1cd3a3e83"). InnerVolumeSpecName "kube-api-access-wglt6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:38:25.741163 master-0 kubenswrapper[7476]: I0320 08:38:25.741113 7476 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wglt6\" (UniqueName: \"kubernetes.io/projected/dd53f6c4-da30-4996-8b62-7dd1cd3a3e83-kube-api-access-wglt6\") on node \"master-0\" DevicePath \"\""
Mar 20 08:38:25.744409 master-0 kubenswrapper[7476]: I0320 08:38:25.744368 7476 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gk7zl"
Mar 20 08:38:25.802971 master-0 kubenswrapper[7476]: I0320 08:38:25.802837 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd53f6c4-da30-4996-8b62-7dd1cd3a3e83-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dd53f6c4-da30-4996-8b62-7dd1cd3a3e83" (UID: "dd53f6c4-da30-4996-8b62-7dd1cd3a3e83"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 08:38:25.842490 master-0 kubenswrapper[7476]: I0320 08:38:25.842408 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d524ce06-8969-4b68-b236-9e11af55d854-utilities\") pod \"d524ce06-8969-4b68-b236-9e11af55d854\" (UID: \"d524ce06-8969-4b68-b236-9e11af55d854\") "
Mar 20 08:38:25.842930 master-0 kubenswrapper[7476]: I0320 08:38:25.842911 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d524ce06-8969-4b68-b236-9e11af55d854-catalog-content\") pod \"d524ce06-8969-4b68-b236-9e11af55d854\" (UID: \"d524ce06-8969-4b68-b236-9e11af55d854\") "
Mar 20 08:38:25.843106 master-0 kubenswrapper[7476]: I0320 08:38:25.843088 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m85d8\" (UniqueName: \"kubernetes.io/projected/d524ce06-8969-4b68-b236-9e11af55d854-kube-api-access-m85d8\") pod \"d524ce06-8969-4b68-b236-9e11af55d854\" (UID: \"d524ce06-8969-4b68-b236-9e11af55d854\") "
Mar 20 08:38:25.843480 master-0 kubenswrapper[7476]: I0320 08:38:25.843408 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d524ce06-8969-4b68-b236-9e11af55d854-utilities" (OuterVolumeSpecName: "utilities") pod "d524ce06-8969-4b68-b236-9e11af55d854" (UID: "d524ce06-8969-4b68-b236-9e11af55d854"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 08:38:25.843798 master-0 kubenswrapper[7476]: I0320 08:38:25.843756 7476 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd53f6c4-da30-4996-8b62-7dd1cd3a3e83-catalog-content\") on node \"master-0\" DevicePath \"\""
Mar 20 08:38:25.843922 master-0 kubenswrapper[7476]: I0320 08:38:25.843908 7476 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d524ce06-8969-4b68-b236-9e11af55d854-utilities\") on node \"master-0\" DevicePath \"\""
Mar 20 08:38:25.848387 master-0 kubenswrapper[7476]: I0320 08:38:25.848315 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d524ce06-8969-4b68-b236-9e11af55d854-kube-api-access-m85d8" (OuterVolumeSpecName: "kube-api-access-m85d8") pod "d524ce06-8969-4b68-b236-9e11af55d854" (UID: "d524ce06-8969-4b68-b236-9e11af55d854"). InnerVolumeSpecName "kube-api-access-m85d8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:38:25.914626 master-0 kubenswrapper[7476]: I0320 08:38:25.914494 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d524ce06-8969-4b68-b236-9e11af55d854-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d524ce06-8969-4b68-b236-9e11af55d854" (UID: "d524ce06-8969-4b68-b236-9e11af55d854"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 08:38:25.946852 master-0 kubenswrapper[7476]: I0320 08:38:25.946765 7476 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d524ce06-8969-4b68-b236-9e11af55d854-catalog-content\") on node \"master-0\" DevicePath \"\""
Mar 20 08:38:25.946944 master-0 kubenswrapper[7476]: I0320 08:38:25.946852 7476 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m85d8\" (UniqueName: \"kubernetes.io/projected/d524ce06-8969-4b68-b236-9e11af55d854-kube-api-access-m85d8\") on node \"master-0\" DevicePath \"\""
Mar 20 08:38:26.137004 master-0 kubenswrapper[7476]: I0320 08:38:26.136918 7476 generic.go:334] "Generic (PLEG): container finished" podID="d524ce06-8969-4b68-b236-9e11af55d854" containerID="d8d830bd913980db126aefcd5a60bb50bffad937ac41bd21c7bf1bdd91159072" exitCode=0
Mar 20 08:38:26.138109 master-0 kubenswrapper[7476]: I0320 08:38:26.137048 7476 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gk7zl"
Mar 20 08:38:26.138415 master-0 kubenswrapper[7476]: I0320 08:38:26.137048 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gk7zl" event={"ID":"d524ce06-8969-4b68-b236-9e11af55d854","Type":"ContainerDied","Data":"d8d830bd913980db126aefcd5a60bb50bffad937ac41bd21c7bf1bdd91159072"}
Mar 20 08:38:26.138500 master-0 kubenswrapper[7476]: I0320 08:38:26.138453 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gk7zl" event={"ID":"d524ce06-8969-4b68-b236-9e11af55d854","Type":"ContainerDied","Data":"1985b1a6319773e5ed4a45821d2506fd38d71d7d03306a2e7817a73b2d10bb76"}
Mar 20 08:38:26.138551 master-0 kubenswrapper[7476]: I0320 08:38:26.138502 7476 scope.go:117] "RemoveContainer" containerID="d8d830bd913980db126aefcd5a60bb50bffad937ac41bd21c7bf1bdd91159072"
Mar 20 08:38:26.142186 master-0 kubenswrapper[7476]: I0320 08:38:26.142132 7476 generic.go:334] "Generic (PLEG): container finished" podID="c200f016-3922-4e90-9061-92fd8c3fad2b" containerID="eacf5e052b386f63888a6a9a4f2ed8b8355f388306364efeef7926bdd5d16f5e" exitCode=0
Mar 20 08:38:26.142326 master-0 kubenswrapper[7476]: I0320 08:38:26.142230 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b46449cf-9fccc" event={"ID":"c200f016-3922-4e90-9061-92fd8c3fad2b","Type":"ContainerDied","Data":"eacf5e052b386f63888a6a9a4f2ed8b8355f388306364efeef7926bdd5d16f5e"}
Mar 20 08:38:26.143113 master-0 kubenswrapper[7476]: I0320 08:38:26.143062 7476 scope.go:117] "RemoveContainer" containerID="eacf5e052b386f63888a6a9a4f2ed8b8355f388306364efeef7926bdd5d16f5e"
Mar 20 08:38:26.146078 master-0 kubenswrapper[7476]: I0320 08:38:26.145982 7476 generic.go:334] "Generic (PLEG): container finished" podID="dd53f6c4-da30-4996-8b62-7dd1cd3a3e83" containerID="05a576b0c597def785feb6536eb1af49c4fecbaf14a1c579b1a7e4fa454af566" exitCode=0
Mar 20 08:38:26.146164 master-0 kubenswrapper[7476]: I0320 08:38:26.146115 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xn4s4" event={"ID":"dd53f6c4-da30-4996-8b62-7dd1cd3a3e83","Type":"ContainerDied","Data":"05a576b0c597def785feb6536eb1af49c4fecbaf14a1c579b1a7e4fa454af566"}
Mar 20 08:38:26.146215 master-0 kubenswrapper[7476]: I0320 08:38:26.146164 7476 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xn4s4"
Mar 20 08:38:26.146321 master-0 kubenswrapper[7476]: I0320 08:38:26.146176 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xn4s4" event={"ID":"dd53f6c4-da30-4996-8b62-7dd1cd3a3e83","Type":"ContainerDied","Data":"c446a250610f8c7824af123712e193acaf406cfdca4a6c66b51ec566b654cfbe"}
Mar 20 08:38:26.146858 master-0 kubenswrapper[7476]: I0320 08:38:26.146812 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-tf2gj"
Mar 20 08:38:26.172472 master-0 kubenswrapper[7476]: I0320 08:38:26.172348 7476 scope.go:117] "RemoveContainer" containerID="8696bc65930127b31620daafef27c3323c7b4f9c0c33f88f46bc857900b46eed"
Mar 20 08:38:26.207645 master-0 kubenswrapper[7476]: I0320 08:38:26.207583 7476 scope.go:117] "RemoveContainer" containerID="2e01a6b82832050496eb17a93069d8e5fc7d5ddaf26b41a017109d4f463cf165"
Mar 20 08:38:26.232535 master-0 kubenswrapper[7476]: I0320 08:38:26.232477 7476 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gk7zl"]
Mar 20 08:38:26.243955 master-0 kubenswrapper[7476]: I0320 08:38:26.240930 7476 scope.go:117] "RemoveContainer" containerID="d8d830bd913980db126aefcd5a60bb50bffad937ac41bd21c7bf1bdd91159072"
Mar 20 08:38:26.243955 master-0 kubenswrapper[7476]: E0320 08:38:26.241521 7476 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8d830bd913980db126aefcd5a60bb50bffad937ac41bd21c7bf1bdd91159072\": container with ID starting with d8d830bd913980db126aefcd5a60bb50bffad937ac41bd21c7bf1bdd91159072 not found: ID does not exist" containerID="d8d830bd913980db126aefcd5a60bb50bffad937ac41bd21c7bf1bdd91159072"
Mar 20 08:38:26.243955 master-0 kubenswrapper[7476]: I0320 08:38:26.241572 7476 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8d830bd913980db126aefcd5a60bb50bffad937ac41bd21c7bf1bdd91159072"} err="failed to get container status \"d8d830bd913980db126aefcd5a60bb50bffad937ac41bd21c7bf1bdd91159072\": rpc error: code = NotFound desc = could not find container \"d8d830bd913980db126aefcd5a60bb50bffad937ac41bd21c7bf1bdd91159072\": container with ID starting with d8d830bd913980db126aefcd5a60bb50bffad937ac41bd21c7bf1bdd91159072 not found: ID does not exist"
Mar 20 08:38:26.243955 master-0 kubenswrapper[7476]: I0320 08:38:26.241611 7476 scope.go:117] "RemoveContainer" containerID="8696bc65930127b31620daafef27c3323c7b4f9c0c33f88f46bc857900b46eed"
Mar 20 08:38:26.246508 master-0 kubenswrapper[7476]: E0320 08:38:26.245430 7476 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8696bc65930127b31620daafef27c3323c7b4f9c0c33f88f46bc857900b46eed\": container with ID starting with 8696bc65930127b31620daafef27c3323c7b4f9c0c33f88f46bc857900b46eed not found: ID does not exist" containerID="8696bc65930127b31620daafef27c3323c7b4f9c0c33f88f46bc857900b46eed"
Mar 20 08:38:26.246508 master-0 kubenswrapper[7476]: I0320 08:38:26.245522 7476 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8696bc65930127b31620daafef27c3323c7b4f9c0c33f88f46bc857900b46eed"} err="failed to get container status \"8696bc65930127b31620daafef27c3323c7b4f9c0c33f88f46bc857900b46eed\": rpc error: code = NotFound desc = could not find container \"8696bc65930127b31620daafef27c3323c7b4f9c0c33f88f46bc857900b46eed\": container with ID starting with 8696bc65930127b31620daafef27c3323c7b4f9c0c33f88f46bc857900b46eed not found: ID does not exist"
Mar 20 08:38:26.246508 master-0 kubenswrapper[7476]: I0320 08:38:26.245571 7476 scope.go:117] "RemoveContainer" containerID="2e01a6b82832050496eb17a93069d8e5fc7d5ddaf26b41a017109d4f463cf165"
Mar 20 08:38:26.256786 master-0 kubenswrapper[7476]: E0320 08:38:26.246951 7476 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e01a6b82832050496eb17a93069d8e5fc7d5ddaf26b41a017109d4f463cf165\": container with ID starting with 2e01a6b82832050496eb17a93069d8e5fc7d5ddaf26b41a017109d4f463cf165 not found: ID does not exist" containerID="2e01a6b82832050496eb17a93069d8e5fc7d5ddaf26b41a017109d4f463cf165"
Mar 20 08:38:26.256786 master-0 kubenswrapper[7476]: I0320 08:38:26.246991 7476 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e01a6b82832050496eb17a93069d8e5fc7d5ddaf26b41a017109d4f463cf165"} err="failed to get container status \"2e01a6b82832050496eb17a93069d8e5fc7d5ddaf26b41a017109d4f463cf165\": rpc error: code = NotFound desc = could not find container \"2e01a6b82832050496eb17a93069d8e5fc7d5ddaf26b41a017109d4f463cf165\": container with ID starting with 2e01a6b82832050496eb17a93069d8e5fc7d5ddaf26b41a017109d4f463cf165 not found: ID does not exist"
Mar 20 08:38:26.256786 master-0 kubenswrapper[7476]: I0320 08:38:26.247020 7476 scope.go:117] "RemoveContainer" containerID="05a576b0c597def785feb6536eb1af49c4fecbaf14a1c579b1a7e4fa454af566"
Mar 20 08:38:26.256786 master-0 kubenswrapper[7476]: I0320 08:38:26.247332 7476 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gk7zl"]
Mar 20 08:38:26.284189 master-0 kubenswrapper[7476]: I0320 08:38:26.284152 7476 scope.go:117] "RemoveContainer" containerID="a8f65c5944156fe3d57c8a40bff0f27e03b981c865990c024c56cd98be9080be"
Mar 20 08:38:26.302088 master-0 kubenswrapper[7476]: I0320 08:38:26.302059 7476 scope.go:117] "RemoveContainer" containerID="de289710339938d039ce2491465fd287023cf1577b81fe60b0d4294e17745870"
Mar 20 08:38:26.318662 master-0 kubenswrapper[7476]: I0320 08:38:26.318521 7476 scope.go:117] "RemoveContainer" containerID="05a576b0c597def785feb6536eb1af49c4fecbaf14a1c579b1a7e4fa454af566"
Mar 20 08:38:26.319136 master-0 kubenswrapper[7476]: E0320 08:38:26.319091 7476 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05a576b0c597def785feb6536eb1af49c4fecbaf14a1c579b1a7e4fa454af566\": container with ID starting with 05a576b0c597def785feb6536eb1af49c4fecbaf14a1c579b1a7e4fa454af566 not found: ID does not exist" containerID="05a576b0c597def785feb6536eb1af49c4fecbaf14a1c579b1a7e4fa454af566"
Mar 20 08:38:26.319211 master-0 kubenswrapper[7476]: I0320 08:38:26.319162 7476 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05a576b0c597def785feb6536eb1af49c4fecbaf14a1c579b1a7e4fa454af566"} err="failed to get container status \"05a576b0c597def785feb6536eb1af49c4fecbaf14a1c579b1a7e4fa454af566\": rpc error: code = NotFound desc = could not find container \"05a576b0c597def785feb6536eb1af49c4fecbaf14a1c579b1a7e4fa454af566\": container with ID starting with 05a576b0c597def785feb6536eb1af49c4fecbaf14a1c579b1a7e4fa454af566 not found: ID does not exist"
Mar 20 08:38:26.319249 master-0 kubenswrapper[7476]: I0320 08:38:26.319220 7476 scope.go:117] "RemoveContainer" containerID="a8f65c5944156fe3d57c8a40bff0f27e03b981c865990c024c56cd98be9080be"
Mar 20 08:38:26.319926 master-0 kubenswrapper[7476]: E0320 08:38:26.319803 7476 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8f65c5944156fe3d57c8a40bff0f27e03b981c865990c024c56cd98be9080be\": container with ID starting with a8f65c5944156fe3d57c8a40bff0f27e03b981c865990c024c56cd98be9080be not found: ID does not exist" containerID="a8f65c5944156fe3d57c8a40bff0f27e03b981c865990c024c56cd98be9080be"
Mar 20 08:38:26.319926 master-0 kubenswrapper[7476]: I0320 08:38:26.319835 7476 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8f65c5944156fe3d57c8a40bff0f27e03b981c865990c024c56cd98be9080be"} err="failed to get container status \"a8f65c5944156fe3d57c8a40bff0f27e03b981c865990c024c56cd98be9080be\": rpc error: code = NotFound desc = could not find container \"a8f65c5944156fe3d57c8a40bff0f27e03b981c865990c024c56cd98be9080be\": container with ID starting with a8f65c5944156fe3d57c8a40bff0f27e03b981c865990c024c56cd98be9080be not found: ID does not exist"
Mar 20 08:38:26.319926 master-0 kubenswrapper[7476]: I0320 08:38:26.319862 7476 scope.go:117] "RemoveContainer" containerID="de289710339938d039ce2491465fd287023cf1577b81fe60b0d4294e17745870"
Mar 20 08:38:26.320242 master-0 kubenswrapper[7476]: E0320 08:38:26.320194 7476 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de289710339938d039ce2491465fd287023cf1577b81fe60b0d4294e17745870\": container with ID starting with de289710339938d039ce2491465fd287023cf1577b81fe60b0d4294e17745870 not found: ID does not exist" containerID="de289710339938d039ce2491465fd287023cf1577b81fe60b0d4294e17745870"
Mar 20 08:38:26.320242 master-0 kubenswrapper[7476]: I0320 08:38:26.320221 7476 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de289710339938d039ce2491465fd287023cf1577b81fe60b0d4294e17745870"} err="failed to get container status \"de289710339938d039ce2491465fd287023cf1577b81fe60b0d4294e17745870\": rpc error: code = NotFound desc = could not find container \"de289710339938d039ce2491465fd287023cf1577b81fe60b0d4294e17745870\": container with ID starting with de289710339938d039ce2491465fd287023cf1577b81fe60b0d4294e17745870 not found: ID does not exist"
Mar 20 08:38:26.338719 master-0 kubenswrapper[7476]: I0320 08:38:26.338654 7476 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xn4s4"]
Mar 20 08:38:26.341063 master-0 kubenswrapper[7476]: I0320 08:38:26.341017 7476 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xn4s4"]
Mar 20 08:38:27.154277 master-0 kubenswrapper[7476]: I0320 08:38:27.154180 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b46449cf-9fccc" event={"ID":"c200f016-3922-4e90-9061-92fd8c3fad2b","Type":"ContainerStarted","Data":"b3eca12faf8457705f48e50666d3636087af70a19b92da4128f308d07a0de23b"}
Mar 20 08:38:27.155297 master-0 kubenswrapper[7476]: I0320 08:38:27.154746 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-65b46449cf-9fccc"
Mar 20 08:38:27.159893 master-0 kubenswrapper[7476]: I0320 08:38:27.159850 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-65b46449cf-9fccc"
Mar 20 08:38:27.245298 master-0 kubenswrapper[7476]: I0320 08:38:27.245158 7476 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d524ce06-8969-4b68-b236-9e11af55d854" path="/var/lib/kubelet/pods/d524ce06-8969-4b68-b236-9e11af55d854/volumes"
Mar 20 08:38:27.246234 master-0 kubenswrapper[7476]: I0320 08:38:27.246171 7476 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd53f6c4-da30-4996-8b62-7dd1cd3a3e83" path="/var/lib/kubelet/pods/dd53f6c4-da30-4996-8b62-7dd1cd3a3e83/volumes"
Mar 20 08:38:28.757884 master-0 kubenswrapper[7476]: I0320 08:38:28.757830 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 20 08:38:32.992318 master-0 kubenswrapper[7476]: I0320 08:38:32.992202 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-tf2gj"
Mar 20 08:38:33.246828 master-0 kubenswrapper[7476]: I0320 08:38:33.246517 7476 scope.go:117] "RemoveContainer" containerID="574f438252b4f47fa3b61032cc6a4a935112d82ebdef8b14155e36ebb82ca9af"
Mar 20 08:38:33.491983 master-0 kubenswrapper[7476]: I0320 08:38:33.491869 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-rnnfz"
Mar 20 08:38:34.216530 master-0 kubenswrapper[7476]: I0320 08:38:34.216434 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8c94f4649-w8c24_61ab4d32-c732-4be5-aa85-a2e1dd21cb60/openshift-controller-manager-operator/1.log"
Mar 20 08:38:34.216530 master-0 kubenswrapper[7476]: I0320 08:38:34.216515 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-w8c24" event={"ID":"61ab4d32-c732-4be5-aa85-a2e1dd21cb60","Type":"ContainerStarted","Data":"3621d7f7293e781b2d6fe9b7f21003a8c6b2d5ad582ef3317c995b2d1b65c2ca"}
Mar 20 08:38:35.226445 master-0 kubenswrapper[7476]: I0320 08:38:35.226374 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-gng67_a2a3df6e-e327-4e97-b8f0-f2d6cdd1e5f9/snapshot-controller/0.log"
Mar 20 08:38:35.227240 master-0 kubenswrapper[7476]: I0320 08:38:35.226483 7476 generic.go:334] "Generic (PLEG): container finished" podID="a2a3df6e-e327-4e97-b8f0-f2d6cdd1e5f9" containerID="3165ad3f4e3423cb37420a9aeda1215c8c5bbcc445272eb7b11a146edfa5a4f0" exitCode=1
Mar 20 08:38:35.227240 master-0 kubenswrapper[7476]: I0320 08:38:35.226556 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-gng67" event={"ID":"a2a3df6e-e327-4e97-b8f0-f2d6cdd1e5f9","Type":"ContainerDied","Data":"3165ad3f4e3423cb37420a9aeda1215c8c5bbcc445272eb7b11a146edfa5a4f0"}
Mar 20 08:38:35.227498 master-0 kubenswrapper[7476]: I0320 08:38:35.227338 7476 scope.go:117] "RemoveContainer" containerID="3165ad3f4e3423cb37420a9aeda1215c8c5bbcc445272eb7b11a146edfa5a4f0"
Mar 20 08:38:36.236119 master-0 kubenswrapper[7476]: I0320 08:38:36.236049 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-gng67_a2a3df6e-e327-4e97-b8f0-f2d6cdd1e5f9/snapshot-controller/0.log"
Mar 20 08:38:36.236802 master-0 kubenswrapper[7476]: I0320 08:38:36.236141 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-gng67" event={"ID":"a2a3df6e-e327-4e97-b8f0-f2d6cdd1e5f9","Type":"ContainerStarted","Data":"5997f3136ed0533c039dd0e30c51e5f693f57ff7a1981a4e954ad4ffb2ba2c02"}
Mar 20 08:38:39.302205 master-0 kubenswrapper[7476]: I0320 08:38:39.302107 7476 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-chfj7"]
Mar 20 08:38:39.303235 master-0 kubenswrapper[7476]: E0320 08:38:39.302410 7476 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd53f6c4-da30-4996-8b62-7dd1cd3a3e83" containerName="extract-utilities"
Mar 20 08:38:39.303235 master-0 kubenswrapper[7476]: I0320 08:38:39.302429 7476 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd53f6c4-da30-4996-8b62-7dd1cd3a3e83" containerName="extract-utilities"
Mar 20 08:38:39.303235 master-0 kubenswrapper[7476]: E0320 08:38:39.302445 7476 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccb242ff-347a-4b02-8d9e-ba4dd62a5052" containerName="registry-server"
Mar 20 08:38:39.303235 master-0 kubenswrapper[7476]: I0320 08:38:39.302452 7476 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccb242ff-347a-4b02-8d9e-ba4dd62a5052" containerName="registry-server"
Mar 20 08:38:39.303235 master-0 kubenswrapper[7476]: E0320 08:38:39.302463 7476 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccb242ff-347a-4b02-8d9e-ba4dd62a5052" containerName="extract-content"
Mar 20 08:38:39.303235 master-0 kubenswrapper[7476]: I0320 08:38:39.302471 7476 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccb242ff-347a-4b02-8d9e-ba4dd62a5052" containerName="extract-content"
Mar 20 08:38:39.303235 master-0 kubenswrapper[7476]: E0320 08:38:39.302478 7476 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d524ce06-8969-4b68-b236-9e11af55d854" containerName="extract-content"
Mar 20 08:38:39.303235 master-0 kubenswrapper[7476]: I0320 08:38:39.302487 7476 state_mem.go:107] "Deleted CPUSet assignment" podUID="d524ce06-8969-4b68-b236-9e11af55d854" containerName="extract-content"
Mar 20 08:38:39.303235 master-0 kubenswrapper[7476]: E0320 08:38:39.302497 7476 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b557b11-593d-4886-a9e3-ac4d18f901aa" containerName="extract-content"
Mar 20 08:38:39.303235 master-0 kubenswrapper[7476]: I0320 08:38:39.302505 7476 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b557b11-593d-4886-a9e3-ac4d18f901aa" containerName="extract-content"
Mar 20 08:38:39.303235 master-0 kubenswrapper[7476]: E0320 08:38:39.302516 7476 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd53f6c4-da30-4996-8b62-7dd1cd3a3e83" containerName="registry-server"
Mar 20 08:38:39.303235 master-0 kubenswrapper[7476]: I0320 08:38:39.302523 7476 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd53f6c4-da30-4996-8b62-7dd1cd3a3e83" containerName="registry-server"
Mar 20 08:38:39.303235 master-0 kubenswrapper[7476]: E0320 08:38:39.302532 7476
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cce21ae1-63de-49be-a027-084a101e650b" containerName="installer" Mar 20 08:38:39.303235 master-0 kubenswrapper[7476]: I0320 08:38:39.302539 7476 state_mem.go:107] "Deleted CPUSet assignment" podUID="cce21ae1-63de-49be-a027-084a101e650b" containerName="installer" Mar 20 08:38:39.303235 master-0 kubenswrapper[7476]: E0320 08:38:39.302550 7476 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b557b11-593d-4886-a9e3-ac4d18f901aa" containerName="extract-utilities" Mar 20 08:38:39.303235 master-0 kubenswrapper[7476]: I0320 08:38:39.302557 7476 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b557b11-593d-4886-a9e3-ac4d18f901aa" containerName="extract-utilities" Mar 20 08:38:39.303235 master-0 kubenswrapper[7476]: E0320 08:38:39.302569 7476 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d524ce06-8969-4b68-b236-9e11af55d854" containerName="extract-utilities" Mar 20 08:38:39.303235 master-0 kubenswrapper[7476]: I0320 08:38:39.302577 7476 state_mem.go:107] "Deleted CPUSet assignment" podUID="d524ce06-8969-4b68-b236-9e11af55d854" containerName="extract-utilities" Mar 20 08:38:39.303235 master-0 kubenswrapper[7476]: E0320 08:38:39.302586 7476 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="169353ee-c927-4483-8976-b9ca08b0a6d1" containerName="installer" Mar 20 08:38:39.303235 master-0 kubenswrapper[7476]: I0320 08:38:39.302594 7476 state_mem.go:107] "Deleted CPUSet assignment" podUID="169353ee-c927-4483-8976-b9ca08b0a6d1" containerName="installer" Mar 20 08:38:39.303235 master-0 kubenswrapper[7476]: E0320 08:38:39.302607 7476 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b557b11-593d-4886-a9e3-ac4d18f901aa" containerName="registry-server" Mar 20 08:38:39.303235 master-0 kubenswrapper[7476]: I0320 08:38:39.302617 7476 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b557b11-593d-4886-a9e3-ac4d18f901aa" 
containerName="registry-server" Mar 20 08:38:39.303235 master-0 kubenswrapper[7476]: E0320 08:38:39.302627 7476 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84b1b51a-cbfa-42de-9fb8-315e9cb76b58" containerName="installer" Mar 20 08:38:39.303235 master-0 kubenswrapper[7476]: I0320 08:38:39.302634 7476 state_mem.go:107] "Deleted CPUSet assignment" podUID="84b1b51a-cbfa-42de-9fb8-315e9cb76b58" containerName="installer" Mar 20 08:38:39.303235 master-0 kubenswrapper[7476]: E0320 08:38:39.302643 7476 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d524ce06-8969-4b68-b236-9e11af55d854" containerName="registry-server" Mar 20 08:38:39.303235 master-0 kubenswrapper[7476]: I0320 08:38:39.302653 7476 state_mem.go:107] "Deleted CPUSet assignment" podUID="d524ce06-8969-4b68-b236-9e11af55d854" containerName="registry-server" Mar 20 08:38:39.303235 master-0 kubenswrapper[7476]: E0320 08:38:39.302665 7476 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5aaa6cb1-ce0d-4cd5-a6dc-b27e9f6b13ad" containerName="installer" Mar 20 08:38:39.303235 master-0 kubenswrapper[7476]: I0320 08:38:39.302675 7476 state_mem.go:107] "Deleted CPUSet assignment" podUID="5aaa6cb1-ce0d-4cd5-a6dc-b27e9f6b13ad" containerName="installer" Mar 20 08:38:39.303235 master-0 kubenswrapper[7476]: E0320 08:38:39.302686 7476 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccb242ff-347a-4b02-8d9e-ba4dd62a5052" containerName="extract-utilities" Mar 20 08:38:39.303235 master-0 kubenswrapper[7476]: I0320 08:38:39.302693 7476 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccb242ff-347a-4b02-8d9e-ba4dd62a5052" containerName="extract-utilities" Mar 20 08:38:39.303235 master-0 kubenswrapper[7476]: E0320 08:38:39.302707 7476 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd53f6c4-da30-4996-8b62-7dd1cd3a3e83" containerName="extract-content" Mar 20 08:38:39.303235 master-0 kubenswrapper[7476]: I0320 08:38:39.302714 7476 
state_mem.go:107] "Deleted CPUSet assignment" podUID="dd53f6c4-da30-4996-8b62-7dd1cd3a3e83" containerName="extract-content" Mar 20 08:38:39.303235 master-0 kubenswrapper[7476]: E0320 08:38:39.302726 7476 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c80fea3f-9ac4-4060-bb90-19f9de724299" containerName="installer" Mar 20 08:38:39.303235 master-0 kubenswrapper[7476]: I0320 08:38:39.302734 7476 state_mem.go:107] "Deleted CPUSet assignment" podUID="c80fea3f-9ac4-4060-bb90-19f9de724299" containerName="installer" Mar 20 08:38:39.303235 master-0 kubenswrapper[7476]: I0320 08:38:39.302833 7476 memory_manager.go:354] "RemoveStaleState removing state" podUID="84b1b51a-cbfa-42de-9fb8-315e9cb76b58" containerName="installer" Mar 20 08:38:39.303235 master-0 kubenswrapper[7476]: I0320 08:38:39.302854 7476 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd53f6c4-da30-4996-8b62-7dd1cd3a3e83" containerName="registry-server" Mar 20 08:38:39.303235 master-0 kubenswrapper[7476]: I0320 08:38:39.302864 7476 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b557b11-593d-4886-a9e3-ac4d18f901aa" containerName="registry-server" Mar 20 08:38:39.303235 master-0 kubenswrapper[7476]: I0320 08:38:39.302877 7476 memory_manager.go:354] "RemoveStaleState removing state" podUID="d524ce06-8969-4b68-b236-9e11af55d854" containerName="registry-server" Mar 20 08:38:39.303235 master-0 kubenswrapper[7476]: I0320 08:38:39.302889 7476 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccb242ff-347a-4b02-8d9e-ba4dd62a5052" containerName="registry-server" Mar 20 08:38:39.303235 master-0 kubenswrapper[7476]: I0320 08:38:39.302898 7476 memory_manager.go:354] "RemoveStaleState removing state" podUID="cce21ae1-63de-49be-a027-084a101e650b" containerName="installer" Mar 20 08:38:39.303235 master-0 kubenswrapper[7476]: I0320 08:38:39.302908 7476 memory_manager.go:354] "RemoveStaleState removing state" podUID="169353ee-c927-4483-8976-b9ca08b0a6d1" 
containerName="installer" Mar 20 08:38:39.303235 master-0 kubenswrapper[7476]: I0320 08:38:39.302919 7476 memory_manager.go:354] "RemoveStaleState removing state" podUID="5aaa6cb1-ce0d-4cd5-a6dc-b27e9f6b13ad" containerName="installer" Mar 20 08:38:39.303235 master-0 kubenswrapper[7476]: I0320 08:38:39.302927 7476 memory_manager.go:354] "RemoveStaleState removing state" podUID="c80fea3f-9ac4-4060-bb90-19f9de724299" containerName="installer" Mar 20 08:38:39.307190 master-0 kubenswrapper[7476]: I0320 08:38:39.303766 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-chfj7" Mar 20 08:38:39.307190 master-0 kubenswrapper[7476]: I0320 08:38:39.305058 7476 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hj5tl"] Mar 20 08:38:39.312119 master-0 kubenswrapper[7476]: I0320 08:38:39.307409 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hj5tl" Mar 20 08:38:39.312119 master-0 kubenswrapper[7476]: I0320 08:38:39.309413 7476 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-clrp2"] Mar 20 08:38:39.312119 master-0 kubenswrapper[7476]: I0320 08:38:39.309714 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-wsbtn" Mar 20 08:38:39.312119 master-0 kubenswrapper[7476]: I0320 08:38:39.311021 7476 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-clrp2" Mar 20 08:38:39.312697 master-0 kubenswrapper[7476]: I0320 08:38:39.312649 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-2vc5h" Mar 20 08:38:39.312783 master-0 kubenswrapper[7476]: I0320 08:38:39.312708 7476 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bt7wn"] Mar 20 08:38:39.314304 master-0 kubenswrapper[7476]: I0320 08:38:39.314168 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-svsqb" Mar 20 08:38:39.315712 master-0 kubenswrapper[7476]: I0320 08:38:39.314536 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bt7wn" Mar 20 08:38:39.317098 master-0 kubenswrapper[7476]: I0320 08:38:39.317045 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-qgtl7" Mar 20 08:38:39.329834 master-0 kubenswrapper[7476]: I0320 08:38:39.327161 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-chfj7"] Mar 20 08:38:39.357403 master-0 kubenswrapper[7476]: I0320 08:38:39.338716 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hj5tl"] Mar 20 08:38:39.357403 master-0 kubenswrapper[7476]: I0320 08:38:39.348058 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bt7wn"] Mar 20 08:38:39.357403 master-0 kubenswrapper[7476]: I0320 08:38:39.348714 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-clrp2"] Mar 20 08:38:39.433402 master-0 kubenswrapper[7476]: I0320 08:38:39.433361 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-mm9l9\" (UniqueName: \"kubernetes.io/projected/4f6c819a-5074-4d29-84c8-e187528ad757-kube-api-access-mm9l9\") pod \"certified-operators-clrp2\" (UID: \"4f6c819a-5074-4d29-84c8-e187528ad757\") " pod="openshift-marketplace/certified-operators-clrp2" Mar 20 08:38:39.433684 master-0 kubenswrapper[7476]: I0320 08:38:39.433661 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9635cdae-0983-4c97-b3ed-dc7a785b1bb6-utilities\") pod \"redhat-operators-bt7wn\" (UID: \"9635cdae-0983-4c97-b3ed-dc7a785b1bb6\") " pod="openshift-marketplace/redhat-operators-bt7wn" Mar 20 08:38:39.433917 master-0 kubenswrapper[7476]: I0320 08:38:39.433871 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b43744de-5bc3-4f1d-91a4-c54e2a3a7ffc-catalog-content\") pod \"community-operators-chfj7\" (UID: \"b43744de-5bc3-4f1d-91a4-c54e2a3a7ffc\") " pod="openshift-marketplace/community-operators-chfj7" Mar 20 08:38:39.433986 master-0 kubenswrapper[7476]: I0320 08:38:39.433933 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b43744de-5bc3-4f1d-91a4-c54e2a3a7ffc-utilities\") pod \"community-operators-chfj7\" (UID: \"b43744de-5bc3-4f1d-91a4-c54e2a3a7ffc\") " pod="openshift-marketplace/community-operators-chfj7" Mar 20 08:38:39.434060 master-0 kubenswrapper[7476]: I0320 08:38:39.434036 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64d09f81-5fb6-462a-a736-5649779a6b1a-catalog-content\") pod \"redhat-marketplace-hj5tl\" (UID: \"64d09f81-5fb6-462a-a736-5649779a6b1a\") " pod="openshift-marketplace/redhat-marketplace-hj5tl" Mar 20 08:38:39.434131 master-0 kubenswrapper[7476]: 
I0320 08:38:39.434076 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vm9c\" (UniqueName: \"kubernetes.io/projected/b43744de-5bc3-4f1d-91a4-c54e2a3a7ffc-kube-api-access-4vm9c\") pod \"community-operators-chfj7\" (UID: \"b43744de-5bc3-4f1d-91a4-c54e2a3a7ffc\") " pod="openshift-marketplace/community-operators-chfj7" Mar 20 08:38:39.434178 master-0 kubenswrapper[7476]: I0320 08:38:39.434132 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7w8xs\" (UniqueName: \"kubernetes.io/projected/64d09f81-5fb6-462a-a736-5649779a6b1a-kube-api-access-7w8xs\") pod \"redhat-marketplace-hj5tl\" (UID: \"64d09f81-5fb6-462a-a736-5649779a6b1a\") " pod="openshift-marketplace/redhat-marketplace-hj5tl" Mar 20 08:38:39.434178 master-0 kubenswrapper[7476]: I0320 08:38:39.434153 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64d09f81-5fb6-462a-a736-5649779a6b1a-utilities\") pod \"redhat-marketplace-hj5tl\" (UID: \"64d09f81-5fb6-462a-a736-5649779a6b1a\") " pod="openshift-marketplace/redhat-marketplace-hj5tl" Mar 20 08:38:39.434178 master-0 kubenswrapper[7476]: I0320 08:38:39.434171 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmssd\" (UniqueName: \"kubernetes.io/projected/9635cdae-0983-4c97-b3ed-dc7a785b1bb6-kube-api-access-zmssd\") pod \"redhat-operators-bt7wn\" (UID: \"9635cdae-0983-4c97-b3ed-dc7a785b1bb6\") " pod="openshift-marketplace/redhat-operators-bt7wn" Mar 20 08:38:39.434312 master-0 kubenswrapper[7476]: I0320 08:38:39.434255 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9635cdae-0983-4c97-b3ed-dc7a785b1bb6-catalog-content\") pod \"redhat-operators-bt7wn\" (UID: 
\"9635cdae-0983-4c97-b3ed-dc7a785b1bb6\") " pod="openshift-marketplace/redhat-operators-bt7wn" Mar 20 08:38:39.434362 master-0 kubenswrapper[7476]: I0320 08:38:39.434318 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f6c819a-5074-4d29-84c8-e187528ad757-utilities\") pod \"certified-operators-clrp2\" (UID: \"4f6c819a-5074-4d29-84c8-e187528ad757\") " pod="openshift-marketplace/certified-operators-clrp2" Mar 20 08:38:39.434362 master-0 kubenswrapper[7476]: I0320 08:38:39.434346 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f6c819a-5074-4d29-84c8-e187528ad757-catalog-content\") pod \"certified-operators-clrp2\" (UID: \"4f6c819a-5074-4d29-84c8-e187528ad757\") " pod="openshift-marketplace/certified-operators-clrp2" Mar 20 08:38:39.535792 master-0 kubenswrapper[7476]: I0320 08:38:39.535723 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f6c819a-5074-4d29-84c8-e187528ad757-catalog-content\") pod \"certified-operators-clrp2\" (UID: \"4f6c819a-5074-4d29-84c8-e187528ad757\") " pod="openshift-marketplace/certified-operators-clrp2" Mar 20 08:38:39.535792 master-0 kubenswrapper[7476]: I0320 08:38:39.535796 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mm9l9\" (UniqueName: \"kubernetes.io/projected/4f6c819a-5074-4d29-84c8-e187528ad757-kube-api-access-mm9l9\") pod \"certified-operators-clrp2\" (UID: \"4f6c819a-5074-4d29-84c8-e187528ad757\") " pod="openshift-marketplace/certified-operators-clrp2" Mar 20 08:38:39.536168 master-0 kubenswrapper[7476]: I0320 08:38:39.535843 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/9635cdae-0983-4c97-b3ed-dc7a785b1bb6-utilities\") pod \"redhat-operators-bt7wn\" (UID: \"9635cdae-0983-4c97-b3ed-dc7a785b1bb6\") " pod="openshift-marketplace/redhat-operators-bt7wn" Mar 20 08:38:39.536168 master-0 kubenswrapper[7476]: I0320 08:38:39.535869 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b43744de-5bc3-4f1d-91a4-c54e2a3a7ffc-catalog-content\") pod \"community-operators-chfj7\" (UID: \"b43744de-5bc3-4f1d-91a4-c54e2a3a7ffc\") " pod="openshift-marketplace/community-operators-chfj7" Mar 20 08:38:39.536168 master-0 kubenswrapper[7476]: I0320 08:38:39.535892 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b43744de-5bc3-4f1d-91a4-c54e2a3a7ffc-utilities\") pod \"community-operators-chfj7\" (UID: \"b43744de-5bc3-4f1d-91a4-c54e2a3a7ffc\") " pod="openshift-marketplace/community-operators-chfj7" Mar 20 08:38:39.536168 master-0 kubenswrapper[7476]: I0320 08:38:39.535925 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64d09f81-5fb6-462a-a736-5649779a6b1a-catalog-content\") pod \"redhat-marketplace-hj5tl\" (UID: \"64d09f81-5fb6-462a-a736-5649779a6b1a\") " pod="openshift-marketplace/redhat-marketplace-hj5tl" Mar 20 08:38:39.536168 master-0 kubenswrapper[7476]: I0320 08:38:39.535949 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vm9c\" (UniqueName: \"kubernetes.io/projected/b43744de-5bc3-4f1d-91a4-c54e2a3a7ffc-kube-api-access-4vm9c\") pod \"community-operators-chfj7\" (UID: \"b43744de-5bc3-4f1d-91a4-c54e2a3a7ffc\") " pod="openshift-marketplace/community-operators-chfj7" Mar 20 08:38:39.536168 master-0 kubenswrapper[7476]: I0320 08:38:39.535977 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-7w8xs\" (UniqueName: \"kubernetes.io/projected/64d09f81-5fb6-462a-a736-5649779a6b1a-kube-api-access-7w8xs\") pod \"redhat-marketplace-hj5tl\" (UID: \"64d09f81-5fb6-462a-a736-5649779a6b1a\") " pod="openshift-marketplace/redhat-marketplace-hj5tl" Mar 20 08:38:39.536168 master-0 kubenswrapper[7476]: I0320 08:38:39.536002 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64d09f81-5fb6-462a-a736-5649779a6b1a-utilities\") pod \"redhat-marketplace-hj5tl\" (UID: \"64d09f81-5fb6-462a-a736-5649779a6b1a\") " pod="openshift-marketplace/redhat-marketplace-hj5tl" Mar 20 08:38:39.536168 master-0 kubenswrapper[7476]: I0320 08:38:39.536033 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmssd\" (UniqueName: \"kubernetes.io/projected/9635cdae-0983-4c97-b3ed-dc7a785b1bb6-kube-api-access-zmssd\") pod \"redhat-operators-bt7wn\" (UID: \"9635cdae-0983-4c97-b3ed-dc7a785b1bb6\") " pod="openshift-marketplace/redhat-operators-bt7wn" Mar 20 08:38:39.536168 master-0 kubenswrapper[7476]: I0320 08:38:39.536066 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9635cdae-0983-4c97-b3ed-dc7a785b1bb6-catalog-content\") pod \"redhat-operators-bt7wn\" (UID: \"9635cdae-0983-4c97-b3ed-dc7a785b1bb6\") " pod="openshift-marketplace/redhat-operators-bt7wn" Mar 20 08:38:39.536168 master-0 kubenswrapper[7476]: I0320 08:38:39.536095 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f6c819a-5074-4d29-84c8-e187528ad757-utilities\") pod \"certified-operators-clrp2\" (UID: \"4f6c819a-5074-4d29-84c8-e187528ad757\") " pod="openshift-marketplace/certified-operators-clrp2" Mar 20 08:38:39.536625 master-0 kubenswrapper[7476]: I0320 08:38:39.536597 7476 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f6c819a-5074-4d29-84c8-e187528ad757-utilities\") pod \"certified-operators-clrp2\" (UID: \"4f6c819a-5074-4d29-84c8-e187528ad757\") " pod="openshift-marketplace/certified-operators-clrp2" Mar 20 08:38:39.536879 master-0 kubenswrapper[7476]: I0320 08:38:39.536848 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f6c819a-5074-4d29-84c8-e187528ad757-catalog-content\") pod \"certified-operators-clrp2\" (UID: \"4f6c819a-5074-4d29-84c8-e187528ad757\") " pod="openshift-marketplace/certified-operators-clrp2" Mar 20 08:38:39.537649 master-0 kubenswrapper[7476]: I0320 08:38:39.537621 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9635cdae-0983-4c97-b3ed-dc7a785b1bb6-catalog-content\") pod \"redhat-operators-bt7wn\" (UID: \"9635cdae-0983-4c97-b3ed-dc7a785b1bb6\") " pod="openshift-marketplace/redhat-operators-bt7wn" Mar 20 08:38:39.537710 master-0 kubenswrapper[7476]: I0320 08:38:39.537619 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64d09f81-5fb6-462a-a736-5649779a6b1a-utilities\") pod \"redhat-marketplace-hj5tl\" (UID: \"64d09f81-5fb6-462a-a736-5649779a6b1a\") " pod="openshift-marketplace/redhat-marketplace-hj5tl" Mar 20 08:38:39.538358 master-0 kubenswrapper[7476]: I0320 08:38:39.538332 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b43744de-5bc3-4f1d-91a4-c54e2a3a7ffc-utilities\") pod \"community-operators-chfj7\" (UID: \"b43744de-5bc3-4f1d-91a4-c54e2a3a7ffc\") " pod="openshift-marketplace/community-operators-chfj7" Mar 20 08:38:39.538444 master-0 kubenswrapper[7476]: I0320 08:38:39.538350 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b43744de-5bc3-4f1d-91a4-c54e2a3a7ffc-catalog-content\") pod \"community-operators-chfj7\" (UID: \"b43744de-5bc3-4f1d-91a4-c54e2a3a7ffc\") " pod="openshift-marketplace/community-operators-chfj7" Mar 20 08:38:39.538547 master-0 kubenswrapper[7476]: I0320 08:38:39.538508 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9635cdae-0983-4c97-b3ed-dc7a785b1bb6-utilities\") pod \"redhat-operators-bt7wn\" (UID: \"9635cdae-0983-4c97-b3ed-dc7a785b1bb6\") " pod="openshift-marketplace/redhat-operators-bt7wn" Mar 20 08:38:39.540579 master-0 kubenswrapper[7476]: I0320 08:38:39.540514 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64d09f81-5fb6-462a-a736-5649779a6b1a-catalog-content\") pod \"redhat-marketplace-hj5tl\" (UID: \"64d09f81-5fb6-462a-a736-5649779a6b1a\") " pod="openshift-marketplace/redhat-marketplace-hj5tl" Mar 20 08:38:39.556003 master-0 kubenswrapper[7476]: I0320 08:38:39.555902 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmssd\" (UniqueName: \"kubernetes.io/projected/9635cdae-0983-4c97-b3ed-dc7a785b1bb6-kube-api-access-zmssd\") pod \"redhat-operators-bt7wn\" (UID: \"9635cdae-0983-4c97-b3ed-dc7a785b1bb6\") " pod="openshift-marketplace/redhat-operators-bt7wn" Mar 20 08:38:39.559706 master-0 kubenswrapper[7476]: I0320 08:38:39.559645 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mm9l9\" (UniqueName: \"kubernetes.io/projected/4f6c819a-5074-4d29-84c8-e187528ad757-kube-api-access-mm9l9\") pod \"certified-operators-clrp2\" (UID: \"4f6c819a-5074-4d29-84c8-e187528ad757\") " pod="openshift-marketplace/certified-operators-clrp2" Mar 20 08:38:39.566668 master-0 kubenswrapper[7476]: I0320 08:38:39.566624 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-7w8xs\" (UniqueName: \"kubernetes.io/projected/64d09f81-5fb6-462a-a736-5649779a6b1a-kube-api-access-7w8xs\") pod \"redhat-marketplace-hj5tl\" (UID: \"64d09f81-5fb6-462a-a736-5649779a6b1a\") " pod="openshift-marketplace/redhat-marketplace-hj5tl" Mar 20 08:38:39.570192 master-0 kubenswrapper[7476]: I0320 08:38:39.570147 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vm9c\" (UniqueName: \"kubernetes.io/projected/b43744de-5bc3-4f1d-91a4-c54e2a3a7ffc-kube-api-access-4vm9c\") pod \"community-operators-chfj7\" (UID: \"b43744de-5bc3-4f1d-91a4-c54e2a3a7ffc\") " pod="openshift-marketplace/community-operators-chfj7" Mar 20 08:38:39.663458 master-0 kubenswrapper[7476]: I0320 08:38:39.663378 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-chfj7" Mar 20 08:38:39.687859 master-0 kubenswrapper[7476]: I0320 08:38:39.687797 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hj5tl" Mar 20 08:38:39.712365 master-0 kubenswrapper[7476]: I0320 08:38:39.711367 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-clrp2" Mar 20 08:38:39.724906 master-0 kubenswrapper[7476]: I0320 08:38:39.724826 7476 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-bt7wn" Mar 20 08:38:40.149653 master-0 kubenswrapper[7476]: I0320 08:38:40.148057 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-chfj7"] Mar 20 08:38:40.152306 master-0 kubenswrapper[7476]: I0320 08:38:40.152235 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hj5tl"] Mar 20 08:38:40.157927 master-0 kubenswrapper[7476]: W0320 08:38:40.157866 7476 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb43744de_5bc3_4f1d_91a4_c54e2a3a7ffc.slice/crio-e9929238f90c11cab18d39ce438681158ac972414d58be4a31cbc595b70dfab3 WatchSource:0}: Error finding container e9929238f90c11cab18d39ce438681158ac972414d58be4a31cbc595b70dfab3: Status 404 returned error can't find the container with id e9929238f90c11cab18d39ce438681158ac972414d58be4a31cbc595b70dfab3 Mar 20 08:38:40.160297 master-0 kubenswrapper[7476]: W0320 08:38:40.160233 7476 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64d09f81_5fb6_462a_a736_5649779a6b1a.slice/crio-d80ff220fd3e8f28273c0ca55518a106e85c715a741683e145d0d50f2a0d250e WatchSource:0}: Error finding container d80ff220fd3e8f28273c0ca55518a106e85c715a741683e145d0d50f2a0d250e: Status 404 returned error can't find the container with id d80ff220fd3e8f28273c0ca55518a106e85c715a741683e145d0d50f2a0d250e Mar 20 08:38:40.261492 master-0 kubenswrapper[7476]: I0320 08:38:40.261447 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bt7wn"] Mar 20 08:38:40.269831 master-0 kubenswrapper[7476]: I0320 08:38:40.269644 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-clrp2"] Mar 20 08:38:40.270304 master-0 kubenswrapper[7476]: I0320 08:38:40.270193 7476 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-chfj7" event={"ID":"b43744de-5bc3-4f1d-91a4-c54e2a3a7ffc","Type":"ContainerStarted","Data":"e9929238f90c11cab18d39ce438681158ac972414d58be4a31cbc595b70dfab3"}
Mar 20 08:38:40.271914 master-0 kubenswrapper[7476]: I0320 08:38:40.271854 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hj5tl" event={"ID":"64d09f81-5fb6-462a-a736-5649779a6b1a","Type":"ContainerStarted","Data":"d80ff220fd3e8f28273c0ca55518a106e85c715a741683e145d0d50f2a0d250e"}
Mar 20 08:38:40.289014 master-0 kubenswrapper[7476]: W0320 08:38:40.288968 7476 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4f6c819a_5074_4d29_84c8_e187528ad757.slice/crio-91fac3ac168ae944870f9f36626feeac950c7dd66eb021a2c366427ace9d7f09 WatchSource:0}: Error finding container 91fac3ac168ae944870f9f36626feeac950c7dd66eb021a2c366427ace9d7f09: Status 404 returned error can't find the container with id 91fac3ac168ae944870f9f36626feeac950c7dd66eb021a2c366427ace9d7f09
Mar 20 08:38:41.169295 master-0 kubenswrapper[7476]: I0320 08:38:41.168317 7476 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-f6rv2"]
Mar 20 08:38:41.169295 master-0 kubenswrapper[7476]: I0320 08:38:41.169147 7476 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-26fw9"]
Mar 20 08:38:41.169987 master-0 kubenswrapper[7476]: I0320 08:38:41.169606 7476 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-6cb57bb5db-r425t"]
Mar 20 08:38:41.173283 master-0 kubenswrapper[7476]: I0320 08:38:41.170147 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-r425t"
Mar 20 08:38:41.173283 master-0 kubenswrapper[7476]: I0320 08:38:41.170534 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-f6rv2"
Mar 20 08:38:41.173283 master-0 kubenswrapper[7476]: I0320 08:38:41.170822 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-26fw9"
Mar 20 08:38:41.173283 master-0 kubenswrapper[7476]: I0320 08:38:41.170981 7476 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-6mrwl"]
Mar 20 08:38:41.173283 master-0 kubenswrapper[7476]: I0320 08:38:41.171617 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-6mrwl"
Mar 20 08:38:41.186513 master-0 kubenswrapper[7476]: I0320 08:38:41.181472 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt"
Mar 20 08:38:41.186513 master-0 kubenswrapper[7476]: I0320 08:38:41.181654 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Mar 20 08:38:41.190867 master-0 kubenswrapper[7476]: I0320 08:38:41.188833 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Mar 20 08:38:41.190867 master-0 kubenswrapper[7476]: I0320 08:38:41.189060 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-rqpg6"
Mar 20 08:38:41.190867 master-0 kubenswrapper[7476]: I0320 08:38:41.189171 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-8kh8p"
Mar 20 08:38:41.190867 master-0 kubenswrapper[7476]: I0320 08:38:41.189303 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy"
Mar 20 08:38:41.190867 master-0 kubenswrapper[7476]: I0320 08:38:41.189422 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Mar 20 08:38:41.190867 master-0 kubenswrapper[7476]: I0320 08:38:41.189520 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Mar 20 08:38:41.190867 master-0 kubenswrapper[7476]: I0320 08:38:41.189722 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-fxvgv"
Mar 20 08:38:41.190867 master-0 kubenswrapper[7476]: I0320 08:38:41.189875 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt"
Mar 20 08:38:41.190867 master-0 kubenswrapper[7476]: I0320 08:38:41.189991 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images"
Mar 20 08:38:41.190867 master-0 kubenswrapper[7476]: I0320 08:38:41.190092 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-hcrbd"
Mar 20 08:38:41.190867 master-0 kubenswrapper[7476]: I0320 08:38:41.190247 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca"
Mar 20 08:38:41.190867 master-0 kubenswrapper[7476]: I0320 08:38:41.190359 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Mar 20 08:38:41.190867 master-0 kubenswrapper[7476]: I0320 08:38:41.190466 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Mar 20 08:38:41.190867 master-0 kubenswrapper[7476]: I0320 08:38:41.190568 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls"
Mar 20 08:38:41.190867 master-0 kubenswrapper[7476]: I0320 08:38:41.190666 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt"
Mar 20 08:38:41.190867 master-0 kubenswrapper[7476]: I0320 08:38:41.190757 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert"
Mar 20 08:38:41.190867 master-0 kubenswrapper[7476]: I0320 08:38:41.190852 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt"
Mar 20 08:38:41.191439 master-0 kubenswrapper[7476]: I0320 08:38:41.190940 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Mar 20 08:38:41.191439 master-0 kubenswrapper[7476]: I0320 08:38:41.191026 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Mar 20 08:38:41.199025 master-0 kubenswrapper[7476]: I0320 08:38:41.197397 7476 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-848gc"]
Mar 20 08:38:41.199025 master-0 kubenswrapper[7476]: I0320 08:38:41.198101 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-848gc"
Mar 20 08:38:41.199025 master-0 kubenswrapper[7476]: I0320 08:38:41.198831 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-26fw9"]
Mar 20 08:38:41.203348 master-0 kubenswrapper[7476]: I0320 08:38:41.200673 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-6mrwl"]
Mar 20 08:38:41.208284 master-0 kubenswrapper[7476]: I0320 08:38:41.205110 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert"
Mar 20 08:38:41.208284 master-0 kubenswrapper[7476]: I0320 08:38:41.205345 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-tp6tv"
Mar 20 08:38:41.253350 master-0 kubenswrapper[7476]: I0320 08:38:41.253300 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-848gc"]
Mar 20 08:38:41.270285 master-0 kubenswrapper[7476]: I0320 08:38:41.269244 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/581a8be2-d16c-4fd8-b051-214bd60a2a91-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-744f9dbf77-6mrwl\" (UID: \"581a8be2-d16c-4fd8-b051-214bd60a2a91\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-6mrwl"
Mar 20 08:38:41.270285 master-0 kubenswrapper[7476]: I0320 08:38:41.269311 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fbde65eb-24f2-47f2-bfcf-bfe3c68450bd-auth-proxy-config\") pod \"machine-approver-6cb57bb5db-r425t\" (UID: \"fbde65eb-24f2-47f2-bfcf-bfe3c68450bd\") " pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-r425t"
Mar 20 08:38:41.270285 master-0 kubenswrapper[7476]: I0320 08:38:41.269343 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fbde65eb-24f2-47f2-bfcf-bfe3c68450bd-machine-approver-tls\") pod \"machine-approver-6cb57bb5db-r425t\" (UID: \"fbde65eb-24f2-47f2-bfcf-bfe3c68450bd\") " pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-r425t"
Mar 20 08:38:41.270285 master-0 kubenswrapper[7476]: I0320 08:38:41.269361 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/581a8be2-d16c-4fd8-b051-214bd60a2a91-cco-trusted-ca\") pod \"cloud-credential-operator-744f9dbf77-6mrwl\" (UID: \"581a8be2-d16c-4fd8-b051-214bd60a2a91\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-6mrwl"
Mar 20 08:38:41.270285 master-0 kubenswrapper[7476]: I0320 08:38:41.269385 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbde65eb-24f2-47f2-bfcf-bfe3c68450bd-config\") pod \"machine-approver-6cb57bb5db-r425t\" (UID: \"fbde65eb-24f2-47f2-bfcf-bfe3c68450bd\") " pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-r425t"
Mar 20 08:38:41.270285 master-0 kubenswrapper[7476]: I0320 08:38:41.269406 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szz6r\" (UniqueName: \"kubernetes.io/projected/fbde65eb-24f2-47f2-bfcf-bfe3c68450bd-kube-api-access-szz6r\") pod \"machine-approver-6cb57bb5db-r425t\" (UID: \"fbde65eb-24f2-47f2-bfcf-bfe3c68450bd\") " pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-r425t"
Mar 20 08:38:41.281537 master-0 kubenswrapper[7476]: I0320 08:38:41.280787 7476 generic.go:334] "Generic (PLEG): container finished" podID="64d09f81-5fb6-462a-a736-5649779a6b1a" containerID="91b43fcbdd5ca279c1c93dfa907a3ddb56ecb16c22e3a3346f458ab45ff2c368" exitCode=0
Mar 20 08:38:41.281537 master-0 kubenswrapper[7476]: I0320 08:38:41.280890 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hj5tl" event={"ID":"64d09f81-5fb6-462a-a736-5649779a6b1a","Type":"ContainerDied","Data":"91b43fcbdd5ca279c1c93dfa907a3ddb56ecb16c22e3a3346f458ab45ff2c368"}
Mar 20 08:38:41.300231 master-0 kubenswrapper[7476]: I0320 08:38:41.299129 7476 generic.go:334] "Generic (PLEG): container finished" podID="9635cdae-0983-4c97-b3ed-dc7a785b1bb6" containerID="c9a695d4652da7db7f3ebcef0da143cf28a9dbbbb25aee4013a1e44bb00f1e39" exitCode=0
Mar 20 08:38:41.300231 master-0 kubenswrapper[7476]: I0320 08:38:41.299323 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bt7wn" event={"ID":"9635cdae-0983-4c97-b3ed-dc7a785b1bb6","Type":"ContainerDied","Data":"c9a695d4652da7db7f3ebcef0da143cf28a9dbbbb25aee4013a1e44bb00f1e39"}
Mar 20 08:38:41.300231 master-0 kubenswrapper[7476]: I0320 08:38:41.299398 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bt7wn" event={"ID":"9635cdae-0983-4c97-b3ed-dc7a785b1bb6","Type":"ContainerStarted","Data":"6b78ee1b02c98b4ad9c3b944fdd43e9881371557e0d7b10564d5be8bd02396af"}
Mar 20 08:38:41.301794 master-0 kubenswrapper[7476]: I0320 08:38:41.301387 7476 generic.go:334] "Generic (PLEG): container finished" podID="b43744de-5bc3-4f1d-91a4-c54e2a3a7ffc" containerID="31eea8f8908cce83a9e43c16d0440c72175117897d7cc72e9c66a228fb48965a" exitCode=0
Mar 20 08:38:41.301794 master-0 kubenswrapper[7476]: I0320 08:38:41.301474 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-chfj7" event={"ID":"b43744de-5bc3-4f1d-91a4-c54e2a3a7ffc","Type":"ContainerDied","Data":"31eea8f8908cce83a9e43c16d0440c72175117897d7cc72e9c66a228fb48965a"}
Mar 20 08:38:41.302420 master-0 kubenswrapper[7476]: I0320 08:38:41.302014 7476 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-6fbb6cf6f9-lr7tb"]
Mar 20 08:38:41.303324 master-0 kubenswrapper[7476]: I0320 08:38:41.303299 7476 generic.go:334] "Generic (PLEG): container finished" podID="4f6c819a-5074-4d29-84c8-e187528ad757" containerID="b058c3dbb12dfe93f678a1cd234084a98f5f906462ebd3bf89f71382d647769f" exitCode=0
Mar 20 08:38:41.303388 master-0 kubenswrapper[7476]: I0320 08:38:41.303326 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-lr7tb"
Mar 20 08:38:41.303466 master-0 kubenswrapper[7476]: I0320 08:38:41.303329 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-clrp2" event={"ID":"4f6c819a-5074-4d29-84c8-e187528ad757","Type":"ContainerDied","Data":"b058c3dbb12dfe93f678a1cd234084a98f5f906462ebd3bf89f71382d647769f"}
Mar 20 08:38:41.303514 master-0 kubenswrapper[7476]: I0320 08:38:41.303489 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-clrp2" event={"ID":"4f6c819a-5074-4d29-84c8-e187528ad757","Type":"ContainerStarted","Data":"91fac3ac168ae944870f9f36626feeac950c7dd66eb021a2c366427ace9d7f09"}
Mar 20 08:38:41.305627 master-0 kubenswrapper[7476]: I0320 08:38:41.305214 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Mar 20 08:38:41.305627 master-0 kubenswrapper[7476]: I0320 08:38:41.305221 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Mar 20 08:38:41.305627 master-0 kubenswrapper[7476]: I0320 08:38:41.305254 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-t5n84"
Mar 20 08:38:41.305627 master-0 kubenswrapper[7476]: I0320 08:38:41.305417 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Mar 20 08:38:41.317379 master-0 kubenswrapper[7476]: I0320 08:38:41.317317 7476 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-866dc4744-626qm"]
Mar 20 08:38:41.318311 master-0 kubenswrapper[7476]: I0320 08:38:41.318220 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-626qm"
Mar 20 08:38:41.319630 master-0 kubenswrapper[7476]: I0320 08:38:41.319530 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-2zrgl"
Mar 20 08:38:41.320065 master-0 kubenswrapper[7476]: I0320 08:38:41.319536 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator"
Mar 20 08:38:41.326442 master-0 kubenswrapper[7476]: I0320 08:38:41.320241 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert"
Mar 20 08:38:41.343419 master-0 kubenswrapper[7476]: I0320 08:38:41.340556 7476 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-tkwh6"]
Mar 20 08:38:41.343419 master-0 kubenswrapper[7476]: I0320 08:38:41.341662 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-tkwh6"
Mar 20 08:38:41.352506 master-0 kubenswrapper[7476]: I0320 08:38:41.350703 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Mar 20 08:38:41.352506 master-0 kubenswrapper[7476]: I0320 08:38:41.351093 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-w58m2"
Mar 20 08:38:41.354630 master-0 kubenswrapper[7476]: I0320 08:38:41.353700 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-6fbb6cf6f9-lr7tb"]
Mar 20 08:38:41.367377 master-0 kubenswrapper[7476]: I0320 08:38:41.367312 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-866dc4744-626qm"]
Mar 20 08:38:41.370244 master-0 kubenswrapper[7476]: I0320 08:38:41.370202 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbde65eb-24f2-47f2-bfcf-bfe3c68450bd-config\") pod \"machine-approver-6cb57bb5db-r425t\" (UID: \"fbde65eb-24f2-47f2-bfcf-bfe3c68450bd\") " pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-r425t"
Mar 20 08:38:41.370546 master-0 kubenswrapper[7476]: I0320 08:38:41.370257 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssmph\" (UniqueName: \"kubernetes.io/projected/581a8be2-d16c-4fd8-b051-214bd60a2a91-kube-api-access-ssmph\") pod \"cloud-credential-operator-744f9dbf77-6mrwl\" (UID: \"581a8be2-d16c-4fd8-b051-214bd60a2a91\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-6mrwl"
Mar 20 08:38:41.370546 master-0 kubenswrapper[7476]: I0320 08:38:41.370334 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szz6r\" (UniqueName: \"kubernetes.io/projected/fbde65eb-24f2-47f2-bfcf-bfe3c68450bd-kube-api-access-szz6r\") pod \"machine-approver-6cb57bb5db-r425t\" (UID: \"fbde65eb-24f2-47f2-bfcf-bfe3c68450bd\") " pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-r425t"
Mar 20 08:38:41.370546 master-0 kubenswrapper[7476]: I0320 08:38:41.370362 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/e9c0293a-5340-4ebe-bc8f-43e78ba9f280-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-7d87854d6-848gc\" (UID: \"e9c0293a-5340-4ebe-bc8f-43e78ba9f280\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-848gc"
Mar 20 08:38:41.370546 master-0 kubenswrapper[7476]: I0320 08:38:41.370387 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/56970553-2ac8-4cb5-a12a-b7c1e777c587-samples-operator-tls\") pod \"cluster-samples-operator-85f7577d78-26fw9\" (UID: \"56970553-2ac8-4cb5-a12a-b7c1e777c587\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-26fw9"
Mar 20 08:38:41.370546 master-0 kubenswrapper[7476]: I0320 08:38:41.370409 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/704f15dd-f0d9-40a7-8918-15b7568a9df6-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-f6rv2\" (UID: \"704f15dd-f0d9-40a7-8918-15b7568a9df6\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-f6rv2"
Mar 20 08:38:41.370546 master-0 kubenswrapper[7476]: I0320 08:38:41.370433 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79xs6\" (UniqueName: \"kubernetes.io/projected/704f15dd-f0d9-40a7-8918-15b7568a9df6-kube-api-access-79xs6\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-f6rv2\" (UID: \"704f15dd-f0d9-40a7-8918-15b7568a9df6\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-f6rv2"
Mar 20 08:38:41.370546 master-0 kubenswrapper[7476]: I0320 08:38:41.370452 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/704f15dd-f0d9-40a7-8918-15b7568a9df6-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-f6rv2\" (UID: \"704f15dd-f0d9-40a7-8918-15b7568a9df6\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-f6rv2"
Mar 20 08:38:41.370546 master-0 kubenswrapper[7476]: I0320 08:38:41.370490 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/581a8be2-d16c-4fd8-b051-214bd60a2a91-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-744f9dbf77-6mrwl\" (UID: \"581a8be2-d16c-4fd8-b051-214bd60a2a91\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-6mrwl"
Mar 20 08:38:41.370546 master-0 kubenswrapper[7476]: I0320 08:38:41.370513 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/704f15dd-f0d9-40a7-8918-15b7568a9df6-images\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-f6rv2\" (UID: \"704f15dd-f0d9-40a7-8918-15b7568a9df6\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-f6rv2"
Mar 20 08:38:41.370546 master-0 kubenswrapper[7476]: I0320 08:38:41.370530 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fbde65eb-24f2-47f2-bfcf-bfe3c68450bd-auth-proxy-config\") pod \"machine-approver-6cb57bb5db-r425t\" (UID: \"fbde65eb-24f2-47f2-bfcf-bfe3c68450bd\") " pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-r425t"
Mar 20 08:38:41.370546 master-0 kubenswrapper[7476]: I0320 08:38:41.370549 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/704f15dd-f0d9-40a7-8918-15b7568a9df6-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-f6rv2\" (UID: \"704f15dd-f0d9-40a7-8918-15b7568a9df6\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-f6rv2"
Mar 20 08:38:41.370861 master-0 kubenswrapper[7476]: I0320 08:38:41.370578 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fbde65eb-24f2-47f2-bfcf-bfe3c68450bd-machine-approver-tls\") pod \"machine-approver-6cb57bb5db-r425t\" (UID: \"fbde65eb-24f2-47f2-bfcf-bfe3c68450bd\") " pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-r425t"
Mar 20 08:38:41.370861 master-0 kubenswrapper[7476]: I0320 08:38:41.370596 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/581a8be2-d16c-4fd8-b051-214bd60a2a91-cco-trusted-ca\") pod \"cloud-credential-operator-744f9dbf77-6mrwl\" (UID: \"581a8be2-d16c-4fd8-b051-214bd60a2a91\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-6mrwl"
Mar 20 08:38:41.370861 master-0 kubenswrapper[7476]: I0320 08:38:41.370613 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ns97v\" (UniqueName: \"kubernetes.io/projected/e9c0293a-5340-4ebe-bc8f-43e78ba9f280-kube-api-access-ns97v\") pod \"cluster-storage-operator-7d87854d6-848gc\" (UID: \"e9c0293a-5340-4ebe-bc8f-43e78ba9f280\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-848gc"
Mar 20 08:38:41.370861 master-0 kubenswrapper[7476]: I0320 08:38:41.370630 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrbnx\" (UniqueName: \"kubernetes.io/projected/56970553-2ac8-4cb5-a12a-b7c1e777c587-kube-api-access-zrbnx\") pod \"cluster-samples-operator-85f7577d78-26fw9\" (UID: \"56970553-2ac8-4cb5-a12a-b7c1e777c587\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-26fw9"
Mar 20 08:38:41.370861 master-0 kubenswrapper[7476]: I0320 08:38:41.370831 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbde65eb-24f2-47f2-bfcf-bfe3c68450bd-config\") pod \"machine-approver-6cb57bb5db-r425t\" (UID: \"fbde65eb-24f2-47f2-bfcf-bfe3c68450bd\") " pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-r425t"
Mar 20 08:38:41.374397 master-0 kubenswrapper[7476]: I0320 08:38:41.371541 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fbde65eb-24f2-47f2-bfcf-bfe3c68450bd-auth-proxy-config\") pod \"machine-approver-6cb57bb5db-r425t\" (UID: \"fbde65eb-24f2-47f2-bfcf-bfe3c68450bd\") " pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-r425t"
Mar 20 08:38:41.374397 master-0 kubenswrapper[7476]: I0320 08:38:41.371945 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-tkwh6"]
Mar 20 08:38:41.374397 master-0 kubenswrapper[7476]: I0320 08:38:41.372566 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/581a8be2-d16c-4fd8-b051-214bd60a2a91-cco-trusted-ca\") pod \"cloud-credential-operator-744f9dbf77-6mrwl\" (UID: \"581a8be2-d16c-4fd8-b051-214bd60a2a91\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-6mrwl"
Mar 20 08:38:41.379559 master-0 kubenswrapper[7476]: I0320 08:38:41.379376 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/581a8be2-d16c-4fd8-b051-214bd60a2a91-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-744f9dbf77-6mrwl\" (UID: \"581a8be2-d16c-4fd8-b051-214bd60a2a91\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-6mrwl"
Mar 20 08:38:41.380438 master-0 kubenswrapper[7476]: I0320 08:38:41.379877 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fbde65eb-24f2-47f2-bfcf-bfe3c68450bd-machine-approver-tls\") pod \"machine-approver-6cb57bb5db-r425t\" (UID: \"fbde65eb-24f2-47f2-bfcf-bfe3c68450bd\") " pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-r425t"
Mar 20 08:38:41.414436 master-0 kubenswrapper[7476]: I0320 08:38:41.412301 7476 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-84d549f6d5-pkjcg"]
Mar 20 08:38:41.414436 master-0 kubenswrapper[7476]: I0320 08:38:41.413260 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-pkjcg"
Mar 20 08:38:41.419058 master-0 kubenswrapper[7476]: I0320 08:38:41.418567 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-mqln7"
Mar 20 08:38:41.419058 master-0 kubenswrapper[7476]: I0320 08:38:41.418739 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Mar 20 08:38:41.419058 master-0 kubenswrapper[7476]: I0320 08:38:41.418863 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Mar 20 08:38:41.419058 master-0 kubenswrapper[7476]: I0320 08:38:41.418985 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Mar 20 08:38:41.419285 master-0 kubenswrapper[7476]: I0320 08:38:41.419085 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Mar 20 08:38:41.426241 master-0 kubenswrapper[7476]: I0320 08:38:41.419391 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Mar 20 08:38:41.426769 master-0 kubenswrapper[7476]: I0320 08:38:41.426736 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szz6r\" (UniqueName: \"kubernetes.io/projected/fbde65eb-24f2-47f2-bfcf-bfe3c68450bd-kube-api-access-szz6r\") pod \"machine-approver-6cb57bb5db-r425t\" (UID: \"fbde65eb-24f2-47f2-bfcf-bfe3c68450bd\") " pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-r425t"
Mar 20 08:38:41.432305 master-0 kubenswrapper[7476]: I0320 08:38:41.430077 7476 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-insights/insights-operator-68bf6ff9d6-c7zf4"]
Mar 20 08:38:41.432305 master-0 kubenswrapper[7476]: I0320 08:38:41.432200 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-68bf6ff9d6-c7zf4"
Mar 20 08:38:41.434760 master-0 kubenswrapper[7476]: I0320 08:38:41.434729 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle"
Mar 20 08:38:41.434850 master-0 kubenswrapper[7476]: I0320 08:38:41.434764 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-zs2v5"
Mar 20 08:38:41.434902 master-0 kubenswrapper[7476]: I0320 08:38:41.434851 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle"
Mar 20 08:38:41.434902 master-0 kubenswrapper[7476]: I0320 08:38:41.434879 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt"
Mar 20 08:38:41.435237 master-0 kubenswrapper[7476]: I0320 08:38:41.435205 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert"
Mar 20 08:38:41.436631 master-0 kubenswrapper[7476]: I0320 08:38:41.436599 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt"
Mar 20 08:38:41.467156 master-0 kubenswrapper[7476]: I0320 08:38:41.466368 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-operator-68bf6ff9d6-c7zf4"]
Mar 20 08:38:41.467354 master-0 kubenswrapper[7476]: I0320 08:38:41.467212 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-84d549f6d5-pkjcg"]
Mar 20 08:38:41.472048 master-0 kubenswrapper[7476]: I0320 08:38:41.472008 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79xs6\" (UniqueName: \"kubernetes.io/projected/704f15dd-f0d9-40a7-8918-15b7568a9df6-kube-api-access-79xs6\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-f6rv2\" (UID: \"704f15dd-f0d9-40a7-8918-15b7568a9df6\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-f6rv2"
Mar 20 08:38:41.472125 master-0 kubenswrapper[7476]: I0320 08:38:41.472066 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/704f15dd-f0d9-40a7-8918-15b7568a9df6-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-f6rv2\" (UID: \"704f15dd-f0d9-40a7-8918-15b7568a9df6\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-f6rv2"
Mar 20 08:38:41.472125 master-0 kubenswrapper[7476]: I0320 08:38:41.472104 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/80ddf0a4-e853-4de0-b540-81144dfdd31d-images\") pod \"machine-api-operator-6fbb6cf6f9-lr7tb\" (UID: \"80ddf0a4-e853-4de0-b540-81144dfdd31d\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-lr7tb"
Mar 20 08:38:41.472202 master-0 kubenswrapper[7476]: I0320 08:38:41.472131 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2d125bc5-08ce-434a-bde7-0ba8fc0169ea-auth-proxy-config\") pod \"cluster-autoscaler-operator-866dc4744-626qm\" (UID: \"2d125bc5-08ce-434a-bde7-0ba8fc0169ea\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-626qm"
Mar 20 08:38:41.472202 master-0 kubenswrapper[7476]: I0320 08:38:41.472161 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x82xz\" (UniqueName: \"kubernetes.io/projected/a86af6a2-55a9-4c4e-8caf-1f51fedb23f5-kube-api-access-x82xz\") pod \"control-plane-machine-set-operator-6f97756bc8-tkwh6\" (UID: \"a86af6a2-55a9-4c4e-8caf-1f51fedb23f5\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-tkwh6"
Mar 20 08:38:41.472202 master-0 kubenswrapper[7476]: I0320 08:38:41.472197 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/704f15dd-f0d9-40a7-8918-15b7568a9df6-images\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-f6rv2\" (UID: \"704f15dd-f0d9-40a7-8918-15b7568a9df6\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-f6rv2"
Mar 20 08:38:41.472322 master-0 kubenswrapper[7476]: I0320 08:38:41.472202 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/704f15dd-f0d9-40a7-8918-15b7568a9df6-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-f6rv2\" (UID: \"704f15dd-f0d9-40a7-8918-15b7568a9df6\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-f6rv2"
Mar 20 08:38:41.472322 master-0 kubenswrapper[7476]: I0320 08:38:41.472225 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a86af6a2-55a9-4c4e-8caf-1f51fedb23f5-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6f97756bc8-tkwh6\" (UID: \"a86af6a2-55a9-4c4e-8caf-1f51fedb23f5\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-tkwh6"
Mar 20 08:38:41.472322 master-0 kubenswrapper[7476]: I0320 08:38:41.472250 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/704f15dd-f0d9-40a7-8918-15b7568a9df6-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-f6rv2\" (UID: \"704f15dd-f0d9-40a7-8918-15b7568a9df6\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-f6rv2"
Mar 20 08:38:41.472322 master-0 kubenswrapper[7476]: I0320 08:38:41.472303 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/80ddf0a4-e853-4de0-b540-81144dfdd31d-machine-api-operator-tls\") pod \"machine-api-operator-6fbb6cf6f9-lr7tb\" (UID: \"80ddf0a4-e853-4de0-b540-81144dfdd31d\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-lr7tb"
Mar 20 08:38:41.472440 master-0 kubenswrapper[7476]: I0320 08:38:41.472334 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmb9v\" (UniqueName: \"kubernetes.io/projected/2d125bc5-08ce-434a-bde7-0ba8fc0169ea-kube-api-access-hmb9v\") pod \"cluster-autoscaler-operator-866dc4744-626qm\" (UID: \"2d125bc5-08ce-434a-bde7-0ba8fc0169ea\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-626qm"
Mar 20 08:38:41.472440 master-0 kubenswrapper[7476]: I0320 08:38:41.472368 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2d125bc5-08ce-434a-bde7-0ba8fc0169ea-cert\") pod \"cluster-autoscaler-operator-866dc4744-626qm\" (UID: \"2d125bc5-08ce-434a-bde7-0ba8fc0169ea\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-626qm"
Mar 20 08:38:41.472440 master-0 kubenswrapper[7476]: I0320 08:38:41.472393 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80ddf0a4-e853-4de0-b540-81144dfdd31d-config\") pod \"machine-api-operator-6fbb6cf6f9-lr7tb\" (UID: \"80ddf0a4-e853-4de0-b540-81144dfdd31d\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-lr7tb"
Mar 20 08:38:41.472440 master-0 kubenswrapper[7476]: I0320 08:38:41.472416 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ns97v\" (UniqueName: \"kubernetes.io/projected/e9c0293a-5340-4ebe-bc8f-43e78ba9f280-kube-api-access-ns97v\") pod \"cluster-storage-operator-7d87854d6-848gc\" (UID: \"e9c0293a-5340-4ebe-bc8f-43e78ba9f280\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-848gc"
Mar 20 08:38:41.472555 master-0 kubenswrapper[7476]: I0320 08:38:41.472440 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrbnx\" (UniqueName: \"kubernetes.io/projected/56970553-2ac8-4cb5-a12a-b7c1e777c587-kube-api-access-zrbnx\") pod \"cluster-samples-operator-85f7577d78-26fw9\" (UID: \"56970553-2ac8-4cb5-a12a-b7c1e777c587\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-26fw9"
Mar 20 08:38:41.472555 master-0 kubenswrapper[7476]: I0320 08:38:41.472490 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssmph\" (UniqueName: \"kubernetes.io/projected/581a8be2-d16c-4fd8-b051-214bd60a2a91-kube-api-access-ssmph\") pod \"cloud-credential-operator-744f9dbf77-6mrwl\" (UID: \"581a8be2-d16c-4fd8-b051-214bd60a2a91\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-6mrwl"
Mar 20 08:38:41.472555 master-0 kubenswrapper[7476]: I0320 08:38:41.472518 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/e9c0293a-5340-4ebe-bc8f-43e78ba9f280-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-7d87854d6-848gc\" (UID: \"e9c0293a-5340-4ebe-bc8f-43e78ba9f280\") " 
pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-848gc" Mar 20 08:38:41.472640 master-0 kubenswrapper[7476]: I0320 08:38:41.472558 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgffp\" (UniqueName: \"kubernetes.io/projected/80ddf0a4-e853-4de0-b540-81144dfdd31d-kube-api-access-pgffp\") pod \"machine-api-operator-6fbb6cf6f9-lr7tb\" (UID: \"80ddf0a4-e853-4de0-b540-81144dfdd31d\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-lr7tb" Mar 20 08:38:41.472640 master-0 kubenswrapper[7476]: I0320 08:38:41.472586 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/56970553-2ac8-4cb5-a12a-b7c1e777c587-samples-operator-tls\") pod \"cluster-samples-operator-85f7577d78-26fw9\" (UID: \"56970553-2ac8-4cb5-a12a-b7c1e777c587\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-26fw9" Mar 20 08:38:41.472640 master-0 kubenswrapper[7476]: I0320 08:38:41.472617 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/704f15dd-f0d9-40a7-8918-15b7568a9df6-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-f6rv2\" (UID: \"704f15dd-f0d9-40a7-8918-15b7568a9df6\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-f6rv2" Mar 20 08:38:41.473200 master-0 kubenswrapper[7476]: I0320 08:38:41.472866 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/704f15dd-f0d9-40a7-8918-15b7568a9df6-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-f6rv2\" (UID: \"704f15dd-f0d9-40a7-8918-15b7568a9df6\") " 
pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-f6rv2" Mar 20 08:38:41.473200 master-0 kubenswrapper[7476]: I0320 08:38:41.472912 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/704f15dd-f0d9-40a7-8918-15b7568a9df6-images\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-f6rv2\" (UID: \"704f15dd-f0d9-40a7-8918-15b7568a9df6\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-f6rv2" Mar 20 08:38:41.475117 master-0 kubenswrapper[7476]: I0320 08:38:41.475088 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/704f15dd-f0d9-40a7-8918-15b7568a9df6-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-f6rv2\" (UID: \"704f15dd-f0d9-40a7-8918-15b7568a9df6\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-f6rv2" Mar 20 08:38:41.475212 master-0 kubenswrapper[7476]: I0320 08:38:41.475089 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/e9c0293a-5340-4ebe-bc8f-43e78ba9f280-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-7d87854d6-848gc\" (UID: \"e9c0293a-5340-4ebe-bc8f-43e78ba9f280\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-848gc" Mar 20 08:38:41.475655 master-0 kubenswrapper[7476]: I0320 08:38:41.475627 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/56970553-2ac8-4cb5-a12a-b7c1e777c587-samples-operator-tls\") pod \"cluster-samples-operator-85f7577d78-26fw9\" (UID: \"56970553-2ac8-4cb5-a12a-b7c1e777c587\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-26fw9" Mar 20 08:38:41.505314 master-0 kubenswrapper[7476]: I0320 08:38:41.505209 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79xs6\" (UniqueName: \"kubernetes.io/projected/704f15dd-f0d9-40a7-8918-15b7568a9df6-kube-api-access-79xs6\") pod \"cluster-cloud-controller-manager-operator-7559f7c68c-f6rv2\" (UID: \"704f15dd-f0d9-40a7-8918-15b7568a9df6\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-f6rv2" Mar 20 08:38:41.508402 master-0 kubenswrapper[7476]: I0320 08:38:41.508377 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssmph\" (UniqueName: \"kubernetes.io/projected/581a8be2-d16c-4fd8-b051-214bd60a2a91-kube-api-access-ssmph\") pod \"cloud-credential-operator-744f9dbf77-6mrwl\" (UID: \"581a8be2-d16c-4fd8-b051-214bd60a2a91\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-6mrwl" Mar 20 08:38:41.517171 master-0 kubenswrapper[7476]: I0320 08:38:41.517127 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ns97v\" (UniqueName: \"kubernetes.io/projected/e9c0293a-5340-4ebe-bc8f-43e78ba9f280-kube-api-access-ns97v\") pod \"cluster-storage-operator-7d87854d6-848gc\" (UID: \"e9c0293a-5340-4ebe-bc8f-43e78ba9f280\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-848gc" Mar 20 08:38:41.517980 master-0 kubenswrapper[7476]: I0320 08:38:41.517946 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrbnx\" (UniqueName: \"kubernetes.io/projected/56970553-2ac8-4cb5-a12a-b7c1e777c587-kube-api-access-zrbnx\") pod \"cluster-samples-operator-85f7577d78-26fw9\" (UID: \"56970553-2ac8-4cb5-a12a-b7c1e777c587\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-26fw9" Mar 20 08:38:41.551372 master-0 
kubenswrapper[7476]: I0320 08:38:41.540015 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-r425t" Mar 20 08:38:41.561569 master-0 kubenswrapper[7476]: I0320 08:38:41.561514 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-f6rv2" Mar 20 08:38:41.570122 master-0 kubenswrapper[7476]: W0320 08:38:41.570060 7476 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfbde65eb_24f2_47f2_bfcf_bfe3c68450bd.slice/crio-4376ae2020b101775c7b2a911d516a84b1826041ed1d56a3aeffca67bb528aec WatchSource:0}: Error finding container 4376ae2020b101775c7b2a911d516a84b1826041ed1d56a3aeffca67bb528aec: Status 404 returned error can't find the container with id 4376ae2020b101775c7b2a911d516a84b1826041ed1d56a3aeffca67bb528aec Mar 20 08:38:41.573294 master-0 kubenswrapper[7476]: I0320 08:38:41.573240 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80ddf0a4-e853-4de0-b540-81144dfdd31d-config\") pod \"machine-api-operator-6fbb6cf6f9-lr7tb\" (UID: \"80ddf0a4-e853-4de0-b540-81144dfdd31d\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-lr7tb" Mar 20 08:38:41.573546 master-0 kubenswrapper[7476]: I0320 08:38:41.573449 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b6610936-e14a-4532-955c-ea1ee4222259-images\") pod \"machine-config-operator-84d549f6d5-pkjcg\" (UID: \"b6610936-e14a-4532-955c-ea1ee4222259\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-pkjcg" Mar 20 08:38:41.573546 master-0 kubenswrapper[7476]: I0320 08:38:41.573492 7476 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/6d62448d-55f1-4bdc-85aa-09e7bdf766cc-snapshots\") pod \"insights-operator-68bf6ff9d6-c7zf4\" (UID: \"6d62448d-55f1-4bdc-85aa-09e7bdf766cc\") " pod="openshift-insights/insights-operator-68bf6ff9d6-c7zf4" Mar 20 08:38:41.573546 master-0 kubenswrapper[7476]: I0320 08:38:41.573510 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d62448d-55f1-4bdc-85aa-09e7bdf766cc-trusted-ca-bundle\") pod \"insights-operator-68bf6ff9d6-c7zf4\" (UID: \"6d62448d-55f1-4bdc-85aa-09e7bdf766cc\") " pod="openshift-insights/insights-operator-68bf6ff9d6-c7zf4" Mar 20 08:38:41.573546 master-0 kubenswrapper[7476]: I0320 08:38:41.573543 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgffp\" (UniqueName: \"kubernetes.io/projected/80ddf0a4-e853-4de0-b540-81144dfdd31d-kube-api-access-pgffp\") pod \"machine-api-operator-6fbb6cf6f9-lr7tb\" (UID: \"80ddf0a4-e853-4de0-b540-81144dfdd31d\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-lr7tb" Mar 20 08:38:41.573709 master-0 kubenswrapper[7476]: I0320 08:38:41.573630 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b6610936-e14a-4532-955c-ea1ee4222259-proxy-tls\") pod \"machine-config-operator-84d549f6d5-pkjcg\" (UID: \"b6610936-e14a-4532-955c-ea1ee4222259\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-pkjcg" Mar 20 08:38:41.573709 master-0 kubenswrapper[7476]: I0320 08:38:41.573674 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/80ddf0a4-e853-4de0-b540-81144dfdd31d-images\") pod \"machine-api-operator-6fbb6cf6f9-lr7tb\" (UID: 
\"80ddf0a4-e853-4de0-b540-81144dfdd31d\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-lr7tb" Mar 20 08:38:41.573709 master-0 kubenswrapper[7476]: I0320 08:38:41.573702 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2d125bc5-08ce-434a-bde7-0ba8fc0169ea-auth-proxy-config\") pod \"cluster-autoscaler-operator-866dc4744-626qm\" (UID: \"2d125bc5-08ce-434a-bde7-0ba8fc0169ea\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-626qm" Mar 20 08:38:41.573820 master-0 kubenswrapper[7476]: I0320 08:38:41.573729 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d62448d-55f1-4bdc-85aa-09e7bdf766cc-service-ca-bundle\") pod \"insights-operator-68bf6ff9d6-c7zf4\" (UID: \"6d62448d-55f1-4bdc-85aa-09e7bdf766cc\") " pod="openshift-insights/insights-operator-68bf6ff9d6-c7zf4" Mar 20 08:38:41.573820 master-0 kubenswrapper[7476]: I0320 08:38:41.573759 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x82xz\" (UniqueName: \"kubernetes.io/projected/a86af6a2-55a9-4c4e-8caf-1f51fedb23f5-kube-api-access-x82xz\") pod \"control-plane-machine-set-operator-6f97756bc8-tkwh6\" (UID: \"a86af6a2-55a9-4c4e-8caf-1f51fedb23f5\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-tkwh6" Mar 20 08:38:41.573820 master-0 kubenswrapper[7476]: I0320 08:38:41.573788 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8plf\" (UniqueName: \"kubernetes.io/projected/b6610936-e14a-4532-955c-ea1ee4222259-kube-api-access-v8plf\") pod \"machine-config-operator-84d549f6d5-pkjcg\" (UID: \"b6610936-e14a-4532-955c-ea1ee4222259\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-pkjcg" Mar 20 08:38:41.573820 master-0 
kubenswrapper[7476]: I0320 08:38:41.573815 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9mbs\" (UniqueName: \"kubernetes.io/projected/6d62448d-55f1-4bdc-85aa-09e7bdf766cc-kube-api-access-n9mbs\") pod \"insights-operator-68bf6ff9d6-c7zf4\" (UID: \"6d62448d-55f1-4bdc-85aa-09e7bdf766cc\") " pod="openshift-insights/insights-operator-68bf6ff9d6-c7zf4" Mar 20 08:38:41.574014 master-0 kubenswrapper[7476]: I0320 08:38:41.573835 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d62448d-55f1-4bdc-85aa-09e7bdf766cc-serving-cert\") pod \"insights-operator-68bf6ff9d6-c7zf4\" (UID: \"6d62448d-55f1-4bdc-85aa-09e7bdf766cc\") " pod="openshift-insights/insights-operator-68bf6ff9d6-c7zf4" Mar 20 08:38:41.574014 master-0 kubenswrapper[7476]: I0320 08:38:41.573864 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a86af6a2-55a9-4c4e-8caf-1f51fedb23f5-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6f97756bc8-tkwh6\" (UID: \"a86af6a2-55a9-4c4e-8caf-1f51fedb23f5\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-tkwh6" Mar 20 08:38:41.574014 master-0 kubenswrapper[7476]: I0320 08:38:41.573870 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80ddf0a4-e853-4de0-b540-81144dfdd31d-config\") pod \"machine-api-operator-6fbb6cf6f9-lr7tb\" (UID: \"80ddf0a4-e853-4de0-b540-81144dfdd31d\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-lr7tb" Mar 20 08:38:41.574014 master-0 kubenswrapper[7476]: I0320 08:38:41.573898 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/b6610936-e14a-4532-955c-ea1ee4222259-auth-proxy-config\") pod \"machine-config-operator-84d549f6d5-pkjcg\" (UID: \"b6610936-e14a-4532-955c-ea1ee4222259\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-pkjcg" Mar 20 08:38:41.574014 master-0 kubenswrapper[7476]: I0320 08:38:41.573960 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/80ddf0a4-e853-4de0-b540-81144dfdd31d-machine-api-operator-tls\") pod \"machine-api-operator-6fbb6cf6f9-lr7tb\" (UID: \"80ddf0a4-e853-4de0-b540-81144dfdd31d\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-lr7tb" Mar 20 08:38:41.574014 master-0 kubenswrapper[7476]: I0320 08:38:41.573995 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmb9v\" (UniqueName: \"kubernetes.io/projected/2d125bc5-08ce-434a-bde7-0ba8fc0169ea-kube-api-access-hmb9v\") pod \"cluster-autoscaler-operator-866dc4744-626qm\" (UID: \"2d125bc5-08ce-434a-bde7-0ba8fc0169ea\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-626qm" Mar 20 08:38:41.574236 master-0 kubenswrapper[7476]: I0320 08:38:41.574027 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2d125bc5-08ce-434a-bde7-0ba8fc0169ea-cert\") pod \"cluster-autoscaler-operator-866dc4744-626qm\" (UID: \"2d125bc5-08ce-434a-bde7-0ba8fc0169ea\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-626qm" Mar 20 08:38:41.575336 master-0 kubenswrapper[7476]: I0320 08:38:41.575247 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2d125bc5-08ce-434a-bde7-0ba8fc0169ea-auth-proxy-config\") pod \"cluster-autoscaler-operator-866dc4744-626qm\" (UID: \"2d125bc5-08ce-434a-bde7-0ba8fc0169ea\") " 
pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-626qm" Mar 20 08:38:41.575559 master-0 kubenswrapper[7476]: I0320 08:38:41.575528 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/80ddf0a4-e853-4de0-b540-81144dfdd31d-images\") pod \"machine-api-operator-6fbb6cf6f9-lr7tb\" (UID: \"80ddf0a4-e853-4de0-b540-81144dfdd31d\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-lr7tb" Mar 20 08:38:41.576115 master-0 kubenswrapper[7476]: W0320 08:38:41.576081 7476 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod704f15dd_f0d9_40a7_8918_15b7568a9df6.slice/crio-b87c0cbc890628e849ef78e044896edb129ed90926645e74ee377de7d85abcd2 WatchSource:0}: Error finding container b87c0cbc890628e849ef78e044896edb129ed90926645e74ee377de7d85abcd2: Status 404 returned error can't find the container with id b87c0cbc890628e849ef78e044896edb129ed90926645e74ee377de7d85abcd2 Mar 20 08:38:41.577176 master-0 kubenswrapper[7476]: I0320 08:38:41.577141 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a86af6a2-55a9-4c4e-8caf-1f51fedb23f5-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6f97756bc8-tkwh6\" (UID: \"a86af6a2-55a9-4c4e-8caf-1f51fedb23f5\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-tkwh6" Mar 20 08:38:41.581726 master-0 kubenswrapper[7476]: I0320 08:38:41.581705 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/80ddf0a4-e853-4de0-b540-81144dfdd31d-machine-api-operator-tls\") pod \"machine-api-operator-6fbb6cf6f9-lr7tb\" (UID: \"80ddf0a4-e853-4de0-b540-81144dfdd31d\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-lr7tb" Mar 20 08:38:41.581837 master-0 
kubenswrapper[7476]: I0320 08:38:41.581711 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2d125bc5-08ce-434a-bde7-0ba8fc0169ea-cert\") pod \"cluster-autoscaler-operator-866dc4744-626qm\" (UID: \"2d125bc5-08ce-434a-bde7-0ba8fc0169ea\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-626qm" Mar 20 08:38:41.590952 master-0 kubenswrapper[7476]: I0320 08:38:41.590906 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-26fw9" Mar 20 08:38:41.613177 master-0 kubenswrapper[7476]: I0320 08:38:41.613133 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-6mrwl" Mar 20 08:38:41.619972 master-0 kubenswrapper[7476]: I0320 08:38:41.619921 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x82xz\" (UniqueName: \"kubernetes.io/projected/a86af6a2-55a9-4c4e-8caf-1f51fedb23f5-kube-api-access-x82xz\") pod \"control-plane-machine-set-operator-6f97756bc8-tkwh6\" (UID: \"a86af6a2-55a9-4c4e-8caf-1f51fedb23f5\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-tkwh6" Mar 20 08:38:41.620673 master-0 kubenswrapper[7476]: I0320 08:38:41.620641 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgffp\" (UniqueName: \"kubernetes.io/projected/80ddf0a4-e853-4de0-b540-81144dfdd31d-kube-api-access-pgffp\") pod \"machine-api-operator-6fbb6cf6f9-lr7tb\" (UID: \"80ddf0a4-e853-4de0-b540-81144dfdd31d\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-lr7tb" Mar 20 08:38:41.625072 master-0 kubenswrapper[7476]: I0320 08:38:41.625035 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmb9v\" (UniqueName: 
\"kubernetes.io/projected/2d125bc5-08ce-434a-bde7-0ba8fc0169ea-kube-api-access-hmb9v\") pod \"cluster-autoscaler-operator-866dc4744-626qm\" (UID: \"2d125bc5-08ce-434a-bde7-0ba8fc0169ea\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-626qm" Mar 20 08:38:41.635167 master-0 kubenswrapper[7476]: I0320 08:38:41.635120 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-848gc" Mar 20 08:38:41.658288 master-0 kubenswrapper[7476]: I0320 08:38:41.658221 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-lr7tb" Mar 20 08:38:41.675499 master-0 kubenswrapper[7476]: I0320 08:38:41.675410 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9mbs\" (UniqueName: \"kubernetes.io/projected/6d62448d-55f1-4bdc-85aa-09e7bdf766cc-kube-api-access-n9mbs\") pod \"insights-operator-68bf6ff9d6-c7zf4\" (UID: \"6d62448d-55f1-4bdc-85aa-09e7bdf766cc\") " pod="openshift-insights/insights-operator-68bf6ff9d6-c7zf4" Mar 20 08:38:41.675499 master-0 kubenswrapper[7476]: I0320 08:38:41.675454 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d62448d-55f1-4bdc-85aa-09e7bdf766cc-serving-cert\") pod \"insights-operator-68bf6ff9d6-c7zf4\" (UID: \"6d62448d-55f1-4bdc-85aa-09e7bdf766cc\") " pod="openshift-insights/insights-operator-68bf6ff9d6-c7zf4" Mar 20 08:38:41.675499 master-0 kubenswrapper[7476]: I0320 08:38:41.675482 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b6610936-e14a-4532-955c-ea1ee4222259-auth-proxy-config\") pod \"machine-config-operator-84d549f6d5-pkjcg\" (UID: \"b6610936-e14a-4532-955c-ea1ee4222259\") " 
pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-pkjcg" Mar 20 08:38:41.675499 master-0 kubenswrapper[7476]: I0320 08:38:41.675513 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b6610936-e14a-4532-955c-ea1ee4222259-images\") pod \"machine-config-operator-84d549f6d5-pkjcg\" (UID: \"b6610936-e14a-4532-955c-ea1ee4222259\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-pkjcg" Mar 20 08:38:41.675850 master-0 kubenswrapper[7476]: I0320 08:38:41.675536 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/6d62448d-55f1-4bdc-85aa-09e7bdf766cc-snapshots\") pod \"insights-operator-68bf6ff9d6-c7zf4\" (UID: \"6d62448d-55f1-4bdc-85aa-09e7bdf766cc\") " pod="openshift-insights/insights-operator-68bf6ff9d6-c7zf4" Mar 20 08:38:41.675850 master-0 kubenswrapper[7476]: I0320 08:38:41.675556 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d62448d-55f1-4bdc-85aa-09e7bdf766cc-trusted-ca-bundle\") pod \"insights-operator-68bf6ff9d6-c7zf4\" (UID: \"6d62448d-55f1-4bdc-85aa-09e7bdf766cc\") " pod="openshift-insights/insights-operator-68bf6ff9d6-c7zf4" Mar 20 08:38:41.675850 master-0 kubenswrapper[7476]: I0320 08:38:41.675581 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b6610936-e14a-4532-955c-ea1ee4222259-proxy-tls\") pod \"machine-config-operator-84d549f6d5-pkjcg\" (UID: \"b6610936-e14a-4532-955c-ea1ee4222259\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-pkjcg" Mar 20 08:38:41.675850 master-0 kubenswrapper[7476]: I0320 08:38:41.675612 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/6d62448d-55f1-4bdc-85aa-09e7bdf766cc-service-ca-bundle\") pod \"insights-operator-68bf6ff9d6-c7zf4\" (UID: \"6d62448d-55f1-4bdc-85aa-09e7bdf766cc\") " pod="openshift-insights/insights-operator-68bf6ff9d6-c7zf4" Mar 20 08:38:41.675850 master-0 kubenswrapper[7476]: I0320 08:38:41.675630 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8plf\" (UniqueName: \"kubernetes.io/projected/b6610936-e14a-4532-955c-ea1ee4222259-kube-api-access-v8plf\") pod \"machine-config-operator-84d549f6d5-pkjcg\" (UID: \"b6610936-e14a-4532-955c-ea1ee4222259\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-pkjcg" Mar 20 08:38:41.676994 master-0 kubenswrapper[7476]: I0320 08:38:41.676929 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/6d62448d-55f1-4bdc-85aa-09e7bdf766cc-snapshots\") pod \"insights-operator-68bf6ff9d6-c7zf4\" (UID: \"6d62448d-55f1-4bdc-85aa-09e7bdf766cc\") " pod="openshift-insights/insights-operator-68bf6ff9d6-c7zf4" Mar 20 08:38:41.677465 master-0 kubenswrapper[7476]: I0320 08:38:41.677436 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b6610936-e14a-4532-955c-ea1ee4222259-auth-proxy-config\") pod \"machine-config-operator-84d549f6d5-pkjcg\" (UID: \"b6610936-e14a-4532-955c-ea1ee4222259\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-pkjcg" Mar 20 08:38:41.677845 master-0 kubenswrapper[7476]: I0320 08:38:41.677821 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b6610936-e14a-4532-955c-ea1ee4222259-images\") pod \"machine-config-operator-84d549f6d5-pkjcg\" (UID: \"b6610936-e14a-4532-955c-ea1ee4222259\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-pkjcg" Mar 20 08:38:41.677964 
master-0 kubenswrapper[7476]: I0320 08:38:41.677934 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d62448d-55f1-4bdc-85aa-09e7bdf766cc-service-ca-bundle\") pod \"insights-operator-68bf6ff9d6-c7zf4\" (UID: \"6d62448d-55f1-4bdc-85aa-09e7bdf766cc\") " pod="openshift-insights/insights-operator-68bf6ff9d6-c7zf4"
Mar 20 08:38:41.678223 master-0 kubenswrapper[7476]: I0320 08:38:41.678199 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d62448d-55f1-4bdc-85aa-09e7bdf766cc-trusted-ca-bundle\") pod \"insights-operator-68bf6ff9d6-c7zf4\" (UID: \"6d62448d-55f1-4bdc-85aa-09e7bdf766cc\") " pod="openshift-insights/insights-operator-68bf6ff9d6-c7zf4"
Mar 20 08:38:41.682293 master-0 kubenswrapper[7476]: I0320 08:38:41.682221 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d62448d-55f1-4bdc-85aa-09e7bdf766cc-serving-cert\") pod \"insights-operator-68bf6ff9d6-c7zf4\" (UID: \"6d62448d-55f1-4bdc-85aa-09e7bdf766cc\") " pod="openshift-insights/insights-operator-68bf6ff9d6-c7zf4"
Mar 20 08:38:41.685643 master-0 kubenswrapper[7476]: I0320 08:38:41.685606 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b6610936-e14a-4532-955c-ea1ee4222259-proxy-tls\") pod \"machine-config-operator-84d549f6d5-pkjcg\" (UID: \"b6610936-e14a-4532-955c-ea1ee4222259\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-pkjcg"
Mar 20 08:38:41.704905 master-0 kubenswrapper[7476]: I0320 08:38:41.704860 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9mbs\" (UniqueName: \"kubernetes.io/projected/6d62448d-55f1-4bdc-85aa-09e7bdf766cc-kube-api-access-n9mbs\") pod \"insights-operator-68bf6ff9d6-c7zf4\" (UID: \"6d62448d-55f1-4bdc-85aa-09e7bdf766cc\") " pod="openshift-insights/insights-operator-68bf6ff9d6-c7zf4"
Mar 20 08:38:41.705191 master-0 kubenswrapper[7476]: I0320 08:38:41.705152 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8plf\" (UniqueName: \"kubernetes.io/projected/b6610936-e14a-4532-955c-ea1ee4222259-kube-api-access-v8plf\") pod \"machine-config-operator-84d549f6d5-pkjcg\" (UID: \"b6610936-e14a-4532-955c-ea1ee4222259\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-pkjcg"
Mar 20 08:38:41.728522 master-0 kubenswrapper[7476]: I0320 08:38:41.727945 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-626qm"
Mar 20 08:38:41.775427 master-0 kubenswrapper[7476]: I0320 08:38:41.775392 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-tkwh6"
Mar 20 08:38:41.813591 master-0 kubenswrapper[7476]: I0320 08:38:41.813552 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-pkjcg"
Mar 20 08:38:41.827763 master-0 kubenswrapper[7476]: I0320 08:38:41.827706 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-68bf6ff9d6-c7zf4"
Mar 20 08:38:42.331729 master-0 kubenswrapper[7476]: I0320 08:38:42.327519 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-f6rv2" event={"ID":"704f15dd-f0d9-40a7-8918-15b7568a9df6","Type":"ContainerStarted","Data":"b87c0cbc890628e849ef78e044896edb129ed90926645e74ee377de7d85abcd2"}
Mar 20 08:38:42.354049 master-0 kubenswrapper[7476]: I0320 08:38:42.353971 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-26fw9"]
Mar 20 08:38:42.359725 master-0 kubenswrapper[7476]: I0320 08:38:42.359650 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-r425t" event={"ID":"fbde65eb-24f2-47f2-bfcf-bfe3c68450bd","Type":"ContainerStarted","Data":"39e258e292b74677b425c4a3bfa8c610e1c417a6b0a5b3430c92eaab75a1d6c1"}
Mar 20 08:38:42.359801 master-0 kubenswrapper[7476]: I0320 08:38:42.359729 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-r425t" event={"ID":"fbde65eb-24f2-47f2-bfcf-bfe3c68450bd","Type":"ContainerStarted","Data":"4376ae2020b101775c7b2a911d516a84b1826041ed1d56a3aeffca67bb528aec"}
Mar 20 08:38:42.366187 master-0 kubenswrapper[7476]: I0320 08:38:42.366152 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-848gc"]
Mar 20 08:38:42.377431 master-0 kubenswrapper[7476]: I0320 08:38:42.377364 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-6mrwl"]
Mar 20 08:38:42.391963 master-0 kubenswrapper[7476]: I0320 08:38:42.391487 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-84d549f6d5-pkjcg"]
Mar 20 08:38:42.419673 master-0 kubenswrapper[7476]: I0320 08:38:42.418503 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-6fbb6cf6f9-lr7tb"]
Mar 20 08:38:42.422475 master-0 kubenswrapper[7476]: I0320 08:38:42.422440 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-866dc4744-626qm"]
Mar 20 08:38:42.481574 master-0 kubenswrapper[7476]: I0320 08:38:42.480170 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-tkwh6"]
Mar 20 08:38:42.482440 master-0 kubenswrapper[7476]: I0320 08:38:42.482338 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-operator-68bf6ff9d6-c7zf4"]
Mar 20 08:38:43.359030 master-0 kubenswrapper[7476]: I0320 08:38:43.354377 7476 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["kube-system/bootstrap-kube-controller-manager-master-0"]
Mar 20 08:38:43.359030 master-0 kubenswrapper[7476]: I0320 08:38:43.354514 7476 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Mar 20 08:38:43.359030 master-0 kubenswrapper[7476]: E0320 08:38:43.354898 7476 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46f265536aba6292ead501bc9b49f327" containerName="cluster-policy-controller"
Mar 20 08:38:43.359030 master-0 kubenswrapper[7476]: I0320 08:38:43.354925 7476 state_mem.go:107] "Deleted CPUSet assignment" podUID="46f265536aba6292ead501bc9b49f327" containerName="cluster-policy-controller"
Mar 20 08:38:43.359030 master-0 kubenswrapper[7476]: E0320 08:38:43.354944 7476 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager"
Mar 20 08:38:43.359030 master-0 kubenswrapper[7476]: I0320 08:38:43.354953 7476 state_mem.go:107] "Deleted CPUSet assignment" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager"
Mar 20 08:38:43.359030 master-0 kubenswrapper[7476]: E0320 08:38:43.354972 7476 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager"
Mar 20 08:38:43.359030 master-0 kubenswrapper[7476]: I0320 08:38:43.354981 7476 state_mem.go:107] "Deleted CPUSet assignment" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager"
Mar 20 08:38:43.359030 master-0 kubenswrapper[7476]: E0320 08:38:43.354994 7476 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager"
Mar 20 08:38:43.359030 master-0 kubenswrapper[7476]: I0320 08:38:43.355003 7476 state_mem.go:107] "Deleted CPUSet assignment" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager"
Mar 20 08:38:43.359030 master-0 kubenswrapper[7476]: I0320 08:38:43.355160 7476 memory_manager.go:354] "RemoveStaleState removing state" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager"
Mar 20 08:38:43.359030 master-0 kubenswrapper[7476]: I0320 08:38:43.355174 7476 memory_manager.go:354] "RemoveStaleState removing state" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager"
Mar 20 08:38:43.359030 master-0 kubenswrapper[7476]: I0320 08:38:43.355198 7476 memory_manager.go:354] "RemoveStaleState removing state" podUID="46f265536aba6292ead501bc9b49f327" containerName="cluster-policy-controller"
Mar 20 08:38:43.359030 master-0 kubenswrapper[7476]: I0320 08:38:43.355997 7476 memory_manager.go:354] "RemoveStaleState removing state" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager"
Mar 20 08:38:43.359030 master-0 kubenswrapper[7476]: I0320 08:38:43.357572 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 20 08:38:43.359030 master-0 kubenswrapper[7476]: I0320 08:38:43.358414 7476 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" containerName="cluster-policy-controller" containerID="cri-o://cfd277b4fa13917f4d0cc04f7d6bdc6ea5d4df628ab0e4b86103cf26da62a23f" gracePeriod=30
Mar 20 08:38:43.359030 master-0 kubenswrapper[7476]: I0320 08:38:43.358887 7476 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="46f265536aba6292ead501bc9b49f327" containerName="kube-controller-manager" containerID="cri-o://1599b02e47e2ea84fbce4395522bc8e26c32b95a49f745d9bd324ecad71aaa11" gracePeriod=30
Mar 20 08:38:43.432296 master-0 kubenswrapper[7476]: I0320 08:38:43.428818 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8c753d068f364b16e3aeb8396b7d9f33-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"8c753d068f364b16e3aeb8396b7d9f33\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 20 08:38:43.432296 master-0 kubenswrapper[7476]: I0320 08:38:43.428869 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8c753d068f364b16e3aeb8396b7d9f33-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"8c753d068f364b16e3aeb8396b7d9f33\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 20 08:38:43.457490 master-0 kubenswrapper[7476]: I0320 08:38:43.454090 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bt7wn" event={"ID":"9635cdae-0983-4c97-b3ed-dc7a785b1bb6","Type":"ContainerStarted","Data":"c390ada5286d8adbcd2f8c4da2b3fb1c764bd2a56eb30ce5a1fc2fc1a428f30e"}
Mar 20 08:38:43.457490 master-0 kubenswrapper[7476]: I0320 08:38:43.456750 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-26fw9" event={"ID":"56970553-2ac8-4cb5-a12a-b7c1e777c587","Type":"ContainerStarted","Data":"8c5a039db74fb9e788a5aa01defc8a1f9fd1088c2644177e24de4994f3a27cd3"}
Mar 20 08:38:43.460817 master-0 kubenswrapper[7476]: I0320 08:38:43.457985 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-68bf6ff9d6-c7zf4" event={"ID":"6d62448d-55f1-4bdc-85aa-09e7bdf766cc","Type":"ContainerStarted","Data":"e752098827604ca63ef6b84cdd36804c65e5654f7ec3055912844eb8b6ef68db"}
Mar 20 08:38:43.464372 master-0 kubenswrapper[7476]: I0320 08:38:43.462827 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-pkjcg" event={"ID":"b6610936-e14a-4532-955c-ea1ee4222259","Type":"ContainerStarted","Data":"b4ed4cc8ebcde50bb6c3f1d2a2733df8bc54de93fcabc1096f2cb5082755e2e7"}
Mar 20 08:38:43.464372 master-0 kubenswrapper[7476]: I0320 08:38:43.462860 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-pkjcg" event={"ID":"b6610936-e14a-4532-955c-ea1ee4222259","Type":"ContainerStarted","Data":"00f99a8d9909bbe9478bce6dc5de763850cd760e9c80771cbc6b2cedb9160c52"}
Mar 20 08:38:43.464372 master-0 kubenswrapper[7476]: I0320 08:38:43.462869 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-pkjcg" event={"ID":"b6610936-e14a-4532-955c-ea1ee4222259","Type":"ContainerStarted","Data":"b36d4f6b43dcaa09ca3c55b7c20167210b34481854d09dfefb8adca147e001f9"}
Mar 20 08:38:43.469469 master-0 kubenswrapper[7476]: I0320 08:38:43.467682 7476 generic.go:334] "Generic (PLEG): container finished" podID="4f6c819a-5074-4d29-84c8-e187528ad757" containerID="cf84a262e3cc737c426a3ee34816aa6cd8e8defa929f970e838849ec973bd55a" exitCode=0
Mar 20 08:38:43.473921 master-0 kubenswrapper[7476]: I0320 08:38:43.467888 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-clrp2" event={"ID":"4f6c819a-5074-4d29-84c8-e187528ad757","Type":"ContainerDied","Data":"cf84a262e3cc737c426a3ee34816aa6cd8e8defa929f970e838849ec973bd55a"}
Mar 20 08:38:43.480992 master-0 kubenswrapper[7476]: I0320 08:38:43.480951 7476 generic.go:334] "Generic (PLEG): container finished" podID="64d09f81-5fb6-462a-a736-5649779a6b1a" containerID="1e83bbe7ff1cdd771e7b861105c79c9f038ba7c1e62e6423e1143134dfc130c3" exitCode=0
Mar 20 08:38:43.481089 master-0 kubenswrapper[7476]: I0320 08:38:43.481043 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hj5tl" event={"ID":"64d09f81-5fb6-462a-a736-5649779a6b1a","Type":"ContainerDied","Data":"1e83bbe7ff1cdd771e7b861105c79c9f038ba7c1e62e6423e1143134dfc130c3"}
Mar 20 08:38:43.487045 master-0 kubenswrapper[7476]: I0320 08:38:43.486372 7476 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Mar 20 08:38:43.493078 master-0 kubenswrapper[7476]: I0320 08:38:43.493023 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-lr7tb" event={"ID":"80ddf0a4-e853-4de0-b540-81144dfdd31d","Type":"ContainerStarted","Data":"aaee608b72e4a4a07804432864d818368bf78848923bc47049bb495de57ed536"}
Mar 20 08:38:43.493078 master-0 kubenswrapper[7476]: I0320 08:38:43.493077 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-lr7tb" event={"ID":"80ddf0a4-e853-4de0-b540-81144dfdd31d","Type":"ContainerStarted","Data":"32d9278f90869a47d37ec354771e3c987fb65e24d65a9e7aa9b31e8b1fade86f"}
Mar 20 08:38:43.494234 master-0 kubenswrapper[7476]: I0320 08:38:43.494206 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-848gc" event={"ID":"e9c0293a-5340-4ebe-bc8f-43e78ba9f280","Type":"ContainerStarted","Data":"5baf379ef595e5427aa5f7376ffa996583f39c05c81ca9fe28df973ed2c426be"}
Mar 20 08:38:43.495440 master-0 kubenswrapper[7476]: I0320 08:38:43.495406 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-tkwh6" event={"ID":"a86af6a2-55a9-4c4e-8caf-1f51fedb23f5","Type":"ContainerStarted","Data":"66f60747a10071044a32fdd3eb286bdb47b644ac36047fe8a2be062c88967367"}
Mar 20 08:38:43.498009 master-0 kubenswrapper[7476]: I0320 08:38:43.497973 7476 generic.go:334] "Generic (PLEG): container finished" podID="b43744de-5bc3-4f1d-91a4-c54e2a3a7ffc" containerID="0f65346a38596f758067a95721b4b8d598991f6450f547c5688592057337ba23" exitCode=0
Mar 20 08:38:43.498208 master-0 kubenswrapper[7476]: I0320 08:38:43.498019 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-chfj7" event={"ID":"b43744de-5bc3-4f1d-91a4-c54e2a3a7ffc","Type":"ContainerDied","Data":"0f65346a38596f758067a95721b4b8d598991f6450f547c5688592057337ba23"}
Mar 20 08:38:43.503422 master-0 kubenswrapper[7476]: I0320 08:38:43.503366 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-6mrwl" event={"ID":"581a8be2-d16c-4fd8-b051-214bd60a2a91","Type":"ContainerStarted","Data":"cbf54fbc4b42acb493a316042d264353dbe32db5138e1dcba4a3aa56fcc561e7"}
Mar 20 08:38:43.503484 master-0 kubenswrapper[7476]: I0320 08:38:43.503434 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-6mrwl" event={"ID":"581a8be2-d16c-4fd8-b051-214bd60a2a91","Type":"ContainerStarted","Data":"bce60995e913b204c4470a4a4b36d406c096a66e95b110179e1a1c0fbcc39e0a"}
Mar 20 08:38:43.517034 master-0 kubenswrapper[7476]: I0320 08:38:43.516980 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-626qm" event={"ID":"2d125bc5-08ce-434a-bde7-0ba8fc0169ea","Type":"ContainerStarted","Data":"e1023ad8b9dfcd1efdaef7585b1ccb0926083452bae0127b8861f9fbe05f41e3"}
Mar 20 08:38:43.517034 master-0 kubenswrapper[7476]: I0320 08:38:43.517030 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-626qm" event={"ID":"2d125bc5-08ce-434a-bde7-0ba8fc0169ea","Type":"ContainerStarted","Data":"5a96373b7ec998e4c12966e11a5d5e48263b669f4268036f6aff8f1f1199dfa5"}
Mar 20 08:38:43.530051 master-0 kubenswrapper[7476]: I0320 08:38:43.530021 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8c753d068f364b16e3aeb8396b7d9f33-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"8c753d068f364b16e3aeb8396b7d9f33\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 20 08:38:43.530237 master-0 kubenswrapper[7476]: I0320 08:38:43.530061 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8c753d068f364b16e3aeb8396b7d9f33-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"8c753d068f364b16e3aeb8396b7d9f33\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 20 08:38:43.532379 master-0 kubenswrapper[7476]: I0320 08:38:43.531572 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8c753d068f364b16e3aeb8396b7d9f33-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"8c753d068f364b16e3aeb8396b7d9f33\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 20 08:38:43.532379 master-0 kubenswrapper[7476]: I0320 08:38:43.531634 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8c753d068f364b16e3aeb8396b7d9f33-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"8c753d068f364b16e3aeb8396b7d9f33\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 20 08:38:43.532379 master-0 kubenswrapper[7476]: I0320 08:38:43.530825 7476 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-pkjcg" podStartSLOduration=2.530798476 podStartE2EDuration="2.530798476s" podCreationTimestamp="2026-03-20 08:38:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:38:43.523185052 +0000 UTC m=+204.491953578" watchObservedRunningTime="2026-03-20 08:38:43.530798476 +0000 UTC m=+204.499567002"
Mar 20 08:38:43.594171 master-0 kubenswrapper[7476]: I0320 08:38:43.594050 7476 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 20 08:38:43.617483 master-0 kubenswrapper[7476]: I0320 08:38:43.617413 7476 kubelet.go:2706] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="611da918-616e-4414-bcd8-a78a83f69ed1"
Mar 20 08:38:43.631957 master-0 kubenswrapper[7476]: I0320 08:38:43.631850 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-logs\") pod \"46f265536aba6292ead501bc9b49f327\" (UID: \"46f265536aba6292ead501bc9b49f327\") "
Mar 20 08:38:43.631957 master-0 kubenswrapper[7476]: I0320 08:38:43.631938 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-ssl-certs-host\") pod \"46f265536aba6292ead501bc9b49f327\" (UID: \"46f265536aba6292ead501bc9b49f327\") "
Mar 20 08:38:43.632145 master-0 kubenswrapper[7476]: I0320 08:38:43.632009 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-config\") pod \"46f265536aba6292ead501bc9b49f327\" (UID: \"46f265536aba6292ead501bc9b49f327\") "
Mar 20 08:38:43.632145 master-0 kubenswrapper[7476]: I0320 08:38:43.632084 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-etc-kubernetes-cloud\") pod \"46f265536aba6292ead501bc9b49f327\" (UID: \"46f265536aba6292ead501bc9b49f327\") "
Mar 20 08:38:43.632145 master-0 kubenswrapper[7476]: I0320 08:38:43.632097 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-ssl-certs-host" (OuterVolumeSpecName: "ssl-certs-host") pod "46f265536aba6292ead501bc9b49f327" (UID: "46f265536aba6292ead501bc9b49f327"). InnerVolumeSpecName "ssl-certs-host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 20 08:38:43.632145 master-0 kubenswrapper[7476]: I0320 08:38:43.632114 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-secrets\") pod \"46f265536aba6292ead501bc9b49f327\" (UID: \"46f265536aba6292ead501bc9b49f327\") "
Mar 20 08:38:43.632280 master-0 kubenswrapper[7476]: I0320 08:38:43.632155 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-secrets" (OuterVolumeSpecName: "secrets") pod "46f265536aba6292ead501bc9b49f327" (UID: "46f265536aba6292ead501bc9b49f327"). InnerVolumeSpecName "secrets". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 20 08:38:43.632280 master-0 kubenswrapper[7476]: I0320 08:38:43.632186 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-etc-kubernetes-cloud" (OuterVolumeSpecName: "etc-kubernetes-cloud") pod "46f265536aba6292ead501bc9b49f327" (UID: "46f265536aba6292ead501bc9b49f327"). InnerVolumeSpecName "etc-kubernetes-cloud". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 20 08:38:43.632280 master-0 kubenswrapper[7476]: I0320 08:38:43.632208 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-config" (OuterVolumeSpecName: "config") pod "46f265536aba6292ead501bc9b49f327" (UID: "46f265536aba6292ead501bc9b49f327"). InnerVolumeSpecName "config". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 20 08:38:43.632376 master-0 kubenswrapper[7476]: I0320 08:38:43.632244 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-logs" (OuterVolumeSpecName: "logs") pod "46f265536aba6292ead501bc9b49f327" (UID: "46f265536aba6292ead501bc9b49f327"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 20 08:38:43.633183 master-0 kubenswrapper[7476]: I0320 08:38:43.633145 7476 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-config\") on node \"master-0\" DevicePath \"\""
Mar 20 08:38:43.633341 master-0 kubenswrapper[7476]: I0320 08:38:43.633218 7476 reconciler_common.go:293] "Volume detached for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-etc-kubernetes-cloud\") on node \"master-0\" DevicePath \"\""
Mar 20 08:38:43.633341 master-0 kubenswrapper[7476]: I0320 08:38:43.633278 7476 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-secrets\") on node \"master-0\" DevicePath \"\""
Mar 20 08:38:43.633341 master-0 kubenswrapper[7476]: I0320 08:38:43.633291 7476 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-logs\") on node \"master-0\" DevicePath \"\""
Mar 20 08:38:43.633341 master-0 kubenswrapper[7476]: I0320 08:38:43.633309 7476 reconciler_common.go:293] "Volume detached for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/46f265536aba6292ead501bc9b49f327-ssl-certs-host\") on node \"master-0\" DevicePath \"\""
Mar 20 08:38:43.777938 master-0 kubenswrapper[7476]: I0320 08:38:43.777873 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 20 08:38:44.529673 master-0 kubenswrapper[7476]: I0320 08:38:44.529560 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-chfj7" event={"ID":"b43744de-5bc3-4f1d-91a4-c54e2a3a7ffc","Type":"ContainerStarted","Data":"778dc6f6c0022cc9b874b3077c1b8afb784cc22d5931163c42ddb6f97b21e827"}
Mar 20 08:38:44.534050 master-0 kubenswrapper[7476]: I0320 08:38:44.534003 7476 generic.go:334] "Generic (PLEG): container finished" podID="46f265536aba6292ead501bc9b49f327" containerID="1599b02e47e2ea84fbce4395522bc8e26c32b95a49f745d9bd324ecad71aaa11" exitCode=0
Mar 20 08:38:44.534050 master-0 kubenswrapper[7476]: I0320 08:38:44.534026 7476 generic.go:334] "Generic (PLEG): container finished" podID="46f265536aba6292ead501bc9b49f327" containerID="cfd277b4fa13917f4d0cc04f7d6bdc6ea5d4df628ab0e4b86103cf26da62a23f" exitCode=0
Mar 20 08:38:44.534218 master-0 kubenswrapper[7476]: I0320 08:38:44.534051 7476 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1cc2b302ee7f4974624b7ec258eb40f2f3ce6fad71036a03b3d4361e0bca7e50"
Mar 20 08:38:44.534218 master-0 kubenswrapper[7476]: I0320 08:38:44.534072 7476 scope.go:117] "RemoveContainer" containerID="cecb02a386370afc24432a3402093f12ea9e53ed8df8b02259c918fbec5ca271"
Mar 20 08:38:44.534218 master-0 kubenswrapper[7476]: I0320 08:38:44.534075 7476 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 20 08:38:44.537094 master-0 kubenswrapper[7476]: I0320 08:38:44.537062 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-clrp2" event={"ID":"4f6c819a-5074-4d29-84c8-e187528ad757","Type":"ContainerStarted","Data":"a5afc30410e3001b1b13acca3a83e98ed554d83f6974859bb66e210875cbc977"}
Mar 20 08:38:44.539025 master-0 kubenswrapper[7476]: I0320 08:38:44.538994 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"8c753d068f364b16e3aeb8396b7d9f33","Type":"ContainerStarted","Data":"d5209b2ed23676968405f84f9fdd80496d17987f9448303167bee1c204c5000c"}
Mar 20 08:38:44.539025 master-0 kubenswrapper[7476]: I0320 08:38:44.539023 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"8c753d068f364b16e3aeb8396b7d9f33","Type":"ContainerStarted","Data":"c247335e00ed8c2dbcd12d654ce2b480b9f5ed4f4f141fa661641d6980a82c7e"}
Mar 20 08:38:44.539134 master-0 kubenswrapper[7476]: I0320 08:38:44.539035 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"8c753d068f364b16e3aeb8396b7d9f33","Type":"ContainerStarted","Data":"7c09f65ab934e524195086a580c2b9eb85f8f4d50711b33b3da17693c9ad9000"}
Mar 20 08:38:44.540669 master-0 kubenswrapper[7476]: I0320 08:38:44.540637 7476 generic.go:334] "Generic (PLEG): container finished" podID="3ea52b89-46f9-4685-aecd-162ba92baaf5" containerID="5e7daf3466466f866a8a609c3357214ad22e67b72e11f87494389948c897e7d2" exitCode=0
Mar 20 08:38:44.540721 master-0 kubenswrapper[7476]: I0320 08:38:44.540680 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"3ea52b89-46f9-4685-aecd-162ba92baaf5","Type":"ContainerDied","Data":"5e7daf3466466f866a8a609c3357214ad22e67b72e11f87494389948c897e7d2"}
Mar 20 08:38:44.543060 master-0 kubenswrapper[7476]: I0320 08:38:44.542987 7476 generic.go:334] "Generic (PLEG): container finished" podID="9635cdae-0983-4c97-b3ed-dc7a785b1bb6" containerID="c390ada5286d8adbcd2f8c4da2b3fb1c764bd2a56eb30ce5a1fc2fc1a428f30e" exitCode=0
Mar 20 08:38:44.543060 master-0 kubenswrapper[7476]: I0320 08:38:44.543016 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bt7wn" event={"ID":"9635cdae-0983-4c97-b3ed-dc7a785b1bb6","Type":"ContainerDied","Data":"c390ada5286d8adbcd2f8c4da2b3fb1c764bd2a56eb30ce5a1fc2fc1a428f30e"}
Mar 20 08:38:44.553670 master-0 kubenswrapper[7476]: I0320 08:38:44.553584 7476 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-chfj7" podStartSLOduration=24.808550274 podStartE2EDuration="27.553567802s" podCreationTimestamp="2026-03-20 08:38:17 +0000 UTC" firstStartedPulling="2026-03-20 08:38:41.302078325 +0000 UTC m=+202.270846851" lastFinishedPulling="2026-03-20 08:38:44.047095853 +0000 UTC m=+205.015864379" observedRunningTime="2026-03-20 08:38:44.550829795 +0000 UTC m=+205.519598321" watchObservedRunningTime="2026-03-20 08:38:44.553567802 +0000 UTC m=+205.522336328"
Mar 20 08:38:44.595360 master-0 kubenswrapper[7476]: I0320 08:38:44.595303 7476 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-clrp2" podStartSLOduration=16.926225777 podStartE2EDuration="19.595281317s" podCreationTimestamp="2026-03-20 08:38:25 +0000 UTC" firstStartedPulling="2026-03-20 08:38:41.304151772 +0000 UTC m=+202.272920288" lastFinishedPulling="2026-03-20 08:38:43.973207302 +0000 UTC m=+204.941975828" observedRunningTime="2026-03-20 08:38:44.593310012 +0000 UTC m=+205.562078548" watchObservedRunningTime="2026-03-20 08:38:44.595281317 +0000 UTC m=+205.564049843"
Mar 20 08:38:45.244588 master-0 kubenswrapper[7476]: I0320 08:38:45.244344 7476 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46f265536aba6292ead501bc9b49f327" path="/var/lib/kubelet/pods/46f265536aba6292ead501bc9b49f327/volumes"
Mar 20 08:38:45.244774 master-0 kubenswrapper[7476]: I0320 08:38:45.244667 7476 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID=""
Mar 20 08:38:45.260406 master-0 kubenswrapper[7476]: I0320 08:38:45.259801 7476 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"]
Mar 20 08:38:45.260406 master-0 kubenswrapper[7476]: I0320 08:38:45.259836 7476 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="611da918-616e-4414-bcd8-a78a83f69ed1"
Mar 20 08:38:45.262978 master-0 kubenswrapper[7476]: I0320 08:38:45.262944 7476 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"]
Mar 20 08:38:45.263043 master-0 kubenswrapper[7476]: I0320 08:38:45.262982 7476 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="611da918-616e-4414-bcd8-a78a83f69ed1"
Mar 20 08:38:49.664436 master-0 kubenswrapper[7476]: I0320 08:38:49.664378 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-chfj7"
Mar 20 08:38:49.664436 master-0 kubenswrapper[7476]: I0320 08:38:49.664436 7476 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-chfj7"
Mar 20 08:38:49.712501 master-0 kubenswrapper[7476]: I0320 08:38:49.712430 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-clrp2"
Mar 20 08:38:49.712788 master-0 kubenswrapper[7476]: I0320 08:38:49.712700 7476 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-clrp2"
Mar 20 08:38:49.725178 master-0 kubenswrapper[7476]: I0320 08:38:49.725141 7476 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-chfj7"
Mar 20 08:38:49.763041 master-0 kubenswrapper[7476]: I0320 08:38:49.762993 7476 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-clrp2"
Mar 20 08:38:50.223113 master-0 kubenswrapper[7476]: I0320 08:38:50.223059 7476 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0"
Mar 20 08:38:50.241610 master-0 kubenswrapper[7476]: I0320 08:38:50.241521 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3ea52b89-46f9-4685-aecd-162ba92baaf5-var-lock\") pod \"3ea52b89-46f9-4685-aecd-162ba92baaf5\" (UID: \"3ea52b89-46f9-4685-aecd-162ba92baaf5\") "
Mar 20 08:38:50.242440 master-0 kubenswrapper[7476]: I0320 08:38:50.242404 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3ea52b89-46f9-4685-aecd-162ba92baaf5-kube-api-access\") pod \"3ea52b89-46f9-4685-aecd-162ba92baaf5\" (UID: \"3ea52b89-46f9-4685-aecd-162ba92baaf5\") "
Mar 20 08:38:50.242516 master-0 kubenswrapper[7476]: I0320 08:38:50.242471 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3ea52b89-46f9-4685-aecd-162ba92baaf5-kubelet-dir\") pod \"3ea52b89-46f9-4685-aecd-162ba92baaf5\" (UID: \"3ea52b89-46f9-4685-aecd-162ba92baaf5\") "
Mar 20 08:38:50.242872 master-0 kubenswrapper[7476]: I0320 08:38:50.241889 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ea52b89-46f9-4685-aecd-162ba92baaf5-var-lock" (OuterVolumeSpecName: "var-lock") pod "3ea52b89-46f9-4685-aecd-162ba92baaf5" (UID: "3ea52b89-46f9-4685-aecd-162ba92baaf5"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 20 08:38:50.242933 master-0 kubenswrapper[7476]: I0320 08:38:50.242837 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ea52b89-46f9-4685-aecd-162ba92baaf5-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3ea52b89-46f9-4685-aecd-162ba92baaf5" (UID: "3ea52b89-46f9-4685-aecd-162ba92baaf5"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 20 08:38:50.247829 master-0 kubenswrapper[7476]: I0320 08:38:50.247777 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ea52b89-46f9-4685-aecd-162ba92baaf5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3ea52b89-46f9-4685-aecd-162ba92baaf5" (UID: "3ea52b89-46f9-4685-aecd-162ba92baaf5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:38:50.343944 master-0 kubenswrapper[7476]: I0320 08:38:50.343884 7476 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3ea52b89-46f9-4685-aecd-162ba92baaf5-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 20 08:38:50.343944 master-0 kubenswrapper[7476]: I0320 08:38:50.343925 7476 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3ea52b89-46f9-4685-aecd-162ba92baaf5-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 20 08:38:50.343944 master-0 kubenswrapper[7476]: I0320 08:38:50.343938 7476 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/3ea52b89-46f9-4685-aecd-162ba92baaf5-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 20 08:38:50.593663 master-0 kubenswrapper[7476]: I0320 08:38:50.593610 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"3ea52b89-46f9-4685-aecd-162ba92baaf5","Type":"ContainerDied","Data":"97ecc9dbe142a6967704accd994983e2161bceb749ddbf66e1756c81c1a78964"}
Mar 20 08:38:50.593663 master-0 kubenswrapper[7476]: I0320 08:38:50.593661 7476 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97ecc9dbe142a6967704accd994983e2161bceb749ddbf66e1756c81c1a78964"
Mar 20 08:38:50.593917 master-0 kubenswrapper[7476]: I0320 08:38:50.593895 7476 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 20 08:38:50.641709 master-0 kubenswrapper[7476]: I0320 08:38:50.641632 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-chfj7" Mar 20 08:38:50.654542 master-0 kubenswrapper[7476]: I0320 08:38:50.654498 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-clrp2" Mar 20 08:38:58.655698 master-0 kubenswrapper[7476]: I0320 08:38:58.655653 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-6mrwl" event={"ID":"581a8be2-d16c-4fd8-b051-214bd60a2a91","Type":"ContainerStarted","Data":"f6be40a3faedb0919061fcd476f3dc16b4c5b58871784ce038ebfa438e16e89b"} Mar 20 08:38:58.675246 master-0 kubenswrapper[7476]: I0320 08:38:58.675197 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hj5tl" event={"ID":"64d09f81-5fb6-462a-a736-5649779a6b1a","Type":"ContainerStarted","Data":"4a40e6321221f3cde043850333e4ac8f894dd11fc7405d427009dd08b18900f6"} Mar 20 08:38:58.693090 master-0 kubenswrapper[7476]: I0320 08:38:58.692355 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"8c753d068f364b16e3aeb8396b7d9f33","Type":"ContainerStarted","Data":"4d5da7565de19239fcc0c39fa988fe40900703ed15ce4e08aa33c43306bfa249"} Mar 20 08:38:58.702508 master-0 kubenswrapper[7476]: I0320 08:38:58.701714 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bt7wn" event={"ID":"9635cdae-0983-4c97-b3ed-dc7a785b1bb6","Type":"ContainerStarted","Data":"ef990221492ae46a8cd1a26b64819364b3aa46187fde095a3bf3a78349aaa22f"} Mar 20 08:38:58.707621 master-0 kubenswrapper[7476]: I0320 08:38:58.706201 7476 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-6mrwl" podStartSLOduration=2.395580069 podStartE2EDuration="17.706178741s" podCreationTimestamp="2026-03-20 08:38:41 +0000 UTC" firstStartedPulling="2026-03-20 08:38:42.653071123 +0000 UTC m=+203.621839649" lastFinishedPulling="2026-03-20 08:38:57.963669765 +0000 UTC m=+218.932438321" observedRunningTime="2026-03-20 08:38:58.702972381 +0000 UTC m=+219.671740907" watchObservedRunningTime="2026-03-20 08:38:58.706178741 +0000 UTC m=+219.674947277" Mar 20 08:38:58.724678 master-0 kubenswrapper[7476]: I0320 08:38:58.723950 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-848gc" event={"ID":"e9c0293a-5340-4ebe-bc8f-43e78ba9f280","Type":"ContainerStarted","Data":"a7ec1ed13e0a355d823b781b053862cbbd8f7a00b211a40b600daee7dc545186"} Mar 20 08:38:58.731694 master-0 kubenswrapper[7476]: I0320 08:38:58.730999 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-tkwh6" event={"ID":"a86af6a2-55a9-4c4e-8caf-1f51fedb23f5","Type":"ContainerStarted","Data":"3033684921b500c0cdc5a887bfabd7fc5e3c9f8cea2dfed120b0981d20756634"} Mar 20 08:38:58.733766 master-0 kubenswrapper[7476]: I0320 08:38:58.733722 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-r425t" event={"ID":"fbde65eb-24f2-47f2-bfcf-bfe3c68450bd","Type":"ContainerStarted","Data":"07a078092aa2967a4c75af31b0c9f229d7822f256519dfbea6a70ee5b47ac7fa"} Mar 20 08:38:58.736686 master-0 kubenswrapper[7476]: I0320 08:38:58.736072 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-lr7tb" event={"ID":"80ddf0a4-e853-4de0-b540-81144dfdd31d","Type":"ContainerStarted","Data":"462a8070d6a4a84bd5f75252bfbebd1aeca669c870c803cc819af47a7fc47625"} Mar 20 
08:38:58.739388 master-0 kubenswrapper[7476]: I0320 08:38:58.739305 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-626qm" event={"ID":"2d125bc5-08ce-434a-bde7-0ba8fc0169ea","Type":"ContainerStarted","Data":"7c71ba6860012685e763d6be0a28f9f4eedf51541e431293b43883fadda65c94"} Mar 20 08:38:58.741403 master-0 kubenswrapper[7476]: I0320 08:38:58.741349 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-f6rv2" event={"ID":"704f15dd-f0d9-40a7-8918-15b7568a9df6","Type":"ContainerStarted","Data":"1e07a98f16c147dc4322627b1d8ef6ec25b20d01e46c26af995ef141c071b9eb"} Mar 20 08:38:58.743898 master-0 kubenswrapper[7476]: I0320 08:38:58.743311 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-26fw9" event={"ID":"56970553-2ac8-4cb5-a12a-b7c1e777c587","Type":"ContainerStarted","Data":"aefb7263e405401dfc5bdd3b4b914906cd92422736de11f54dfdd5ed1b7c6555"} Mar 20 08:38:58.743898 master-0 kubenswrapper[7476]: I0320 08:38:58.743335 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-26fw9" event={"ID":"56970553-2ac8-4cb5-a12a-b7c1e777c587","Type":"ContainerStarted","Data":"3de8fd7dba2a402c88acfef1b2fb538d0415318dc0e8061e2e031c469a39d9cd"} Mar 20 08:38:58.751868 master-0 kubenswrapper[7476]: I0320 08:38:58.751282 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-68bf6ff9d6-c7zf4" event={"ID":"6d62448d-55f1-4bdc-85aa-09e7bdf766cc","Type":"ContainerStarted","Data":"893d8918886f5436f953dcd40251d5f2e4dbf4607b1d7637a866a6322cb8f13d"} Mar 20 08:38:58.775957 master-0 kubenswrapper[7476]: I0320 08:38:58.775884 7476 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=15.775860613 podStartE2EDuration="15.775860613s" podCreationTimestamp="2026-03-20 08:38:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:38:58.773902368 +0000 UTC m=+219.742670894" watchObservedRunningTime="2026-03-20 08:38:58.775860613 +0000 UTC m=+219.744629149" Mar 20 08:38:58.885612 master-0 kubenswrapper[7476]: I0320 08:38:58.885518 7476 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hj5tl" podStartSLOduration=25.257371662 podStartE2EDuration="41.885488389s" podCreationTimestamp="2026-03-20 08:38:17 +0000 UTC" firstStartedPulling="2026-03-20 08:38:41.290642823 +0000 UTC m=+202.259411349" lastFinishedPulling="2026-03-20 08:38:57.91875954 +0000 UTC m=+218.887528076" observedRunningTime="2026-03-20 08:38:58.883980887 +0000 UTC m=+219.852749433" watchObservedRunningTime="2026-03-20 08:38:58.885488389 +0000 UTC m=+219.854256915" Mar 20 08:38:58.994146 master-0 kubenswrapper[7476]: I0320 08:38:58.994065 7476 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bt7wn" podStartSLOduration=17.368191561 podStartE2EDuration="33.994048856s" podCreationTimestamp="2026-03-20 08:38:25 +0000 UTC" firstStartedPulling="2026-03-20 08:38:41.300443458 +0000 UTC m=+202.269211984" lastFinishedPulling="2026-03-20 08:38:57.926300713 +0000 UTC m=+218.895069279" observedRunningTime="2026-03-20 08:38:58.991835434 +0000 UTC m=+219.960603960" watchObservedRunningTime="2026-03-20 08:38:58.994048856 +0000 UTC m=+219.962817382" Mar 20 08:38:59.117446 master-0 kubenswrapper[7476]: I0320 08:38:59.117372 7476 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-r425t" 
podStartSLOduration=2.095322835 podStartE2EDuration="18.117346118s" podCreationTimestamp="2026-03-20 08:38:41 +0000 UTC" firstStartedPulling="2026-03-20 08:38:41.839849216 +0000 UTC m=+202.808617742" lastFinishedPulling="2026-03-20 08:38:57.861872459 +0000 UTC m=+218.830641025" observedRunningTime="2026-03-20 08:38:59.113961883 +0000 UTC m=+220.082730409" watchObservedRunningTime="2026-03-20 08:38:59.117346118 +0000 UTC m=+220.086114644" Mar 20 08:38:59.143182 master-0 kubenswrapper[7476]: I0320 08:38:59.143096 7476 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-626qm" podStartSLOduration=3.197524868 podStartE2EDuration="18.143063172s" podCreationTimestamp="2026-03-20 08:38:41 +0000 UTC" firstStartedPulling="2026-03-20 08:38:42.951845025 +0000 UTC m=+203.920613551" lastFinishedPulling="2026-03-20 08:38:57.897383309 +0000 UTC m=+218.866151855" observedRunningTime="2026-03-20 08:38:59.142050293 +0000 UTC m=+220.110818819" watchObservedRunningTime="2026-03-20 08:38:59.143063172 +0000 UTC m=+220.111831688" Mar 20 08:38:59.178636 master-0 kubenswrapper[7476]: I0320 08:38:59.178563 7476 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-insights/insights-operator-68bf6ff9d6-c7zf4" podStartSLOduration=2.757960772 podStartE2EDuration="18.178543361s" podCreationTimestamp="2026-03-20 08:38:41 +0000 UTC" firstStartedPulling="2026-03-20 08:38:42.502125133 +0000 UTC m=+203.470893659" lastFinishedPulling="2026-03-20 08:38:57.922707682 +0000 UTC m=+218.891476248" observedRunningTime="2026-03-20 08:38:59.1660707 +0000 UTC m=+220.134839236" watchObservedRunningTime="2026-03-20 08:38:59.178543361 +0000 UTC m=+220.147311887" Mar 20 08:38:59.189765 master-0 kubenswrapper[7476]: I0320 08:38:59.189695 7476 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-26fw9" 
podStartSLOduration=2.741314902 podStartE2EDuration="18.189677004s" podCreationTimestamp="2026-03-20 08:38:41 +0000 UTC" firstStartedPulling="2026-03-20 08:38:42.476781679 +0000 UTC m=+203.445550205" lastFinishedPulling="2026-03-20 08:38:57.925143741 +0000 UTC m=+218.893912307" observedRunningTime="2026-03-20 08:38:59.188656976 +0000 UTC m=+220.157425502" watchObservedRunningTime="2026-03-20 08:38:59.189677004 +0000 UTC m=+220.158445530" Mar 20 08:38:59.259741 master-0 kubenswrapper[7476]: I0320 08:38:59.259684 7476 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-tkwh6" podStartSLOduration=2.844375565 podStartE2EDuration="18.259667095s" podCreationTimestamp="2026-03-20 08:38:41 +0000 UTC" firstStartedPulling="2026-03-20 08:38:42.507469763 +0000 UTC m=+203.476238289" lastFinishedPulling="2026-03-20 08:38:57.922761273 +0000 UTC m=+218.891529819" observedRunningTime="2026-03-20 08:38:59.226970954 +0000 UTC m=+220.195739480" watchObservedRunningTime="2026-03-20 08:38:59.259667095 +0000 UTC m=+220.228435621" Mar 20 08:38:59.262420 master-0 kubenswrapper[7476]: I0320 08:38:59.262391 7476 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-848gc" podStartSLOduration=2.840431815 podStartE2EDuration="18.262383452s" podCreationTimestamp="2026-03-20 08:38:41 +0000 UTC" firstStartedPulling="2026-03-20 08:38:42.440036655 +0000 UTC m=+203.408805181" lastFinishedPulling="2026-03-20 08:38:57.861988282 +0000 UTC m=+218.830756818" observedRunningTime="2026-03-20 08:38:59.260623722 +0000 UTC m=+220.229392248" watchObservedRunningTime="2026-03-20 08:38:59.262383452 +0000 UTC m=+220.231151988" Mar 20 08:38:59.283829 master-0 kubenswrapper[7476]: I0320 08:38:59.283691 7476 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-lr7tb" 
podStartSLOduration=2.918692798 podStartE2EDuration="18.283676471s" podCreationTimestamp="2026-03-20 08:38:41 +0000 UTC" firstStartedPulling="2026-03-20 08:38:42.674302461 +0000 UTC m=+203.643070987" lastFinishedPulling="2026-03-20 08:38:58.039286124 +0000 UTC m=+219.008054660" observedRunningTime="2026-03-20 08:38:59.282862128 +0000 UTC m=+220.251630654" watchObservedRunningTime="2026-03-20 08:38:59.283676471 +0000 UTC m=+220.252444997" Mar 20 08:38:59.693822 master-0 kubenswrapper[7476]: I0320 08:38:59.691710 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hj5tl" Mar 20 08:38:59.693822 master-0 kubenswrapper[7476]: I0320 08:38:59.691885 7476 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hj5tl" Mar 20 08:38:59.730178 master-0 kubenswrapper[7476]: I0320 08:38:59.729868 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bt7wn" Mar 20 08:38:59.730178 master-0 kubenswrapper[7476]: I0320 08:38:59.729923 7476 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bt7wn" Mar 20 08:38:59.774288 master-0 kubenswrapper[7476]: I0320 08:38:59.773724 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-f6rv2" event={"ID":"704f15dd-f0d9-40a7-8918-15b7568a9df6","Type":"ContainerStarted","Data":"e60b918d890c41129f1a81ee1623e504fb3a7dadd07ab1d606a8a9325c7529b7"} Mar 20 08:38:59.774288 master-0 kubenswrapper[7476]: I0320 08:38:59.773793 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-f6rv2" 
event={"ID":"704f15dd-f0d9-40a7-8918-15b7568a9df6","Type":"ContainerStarted","Data":"b0676a5552f6c04dcbaf77ec58e3d0b933863b0f9b93dbd148b79cb1a766784b"} Mar 20 08:38:59.777249 master-0 kubenswrapper[7476]: I0320 08:38:59.777203 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"8c753d068f364b16e3aeb8396b7d9f33","Type":"ContainerStarted","Data":"6a28afb738c9c2073bc05b9348b9c75a264404f75cc50c365df56617d08f8df7"} Mar 20 08:39:00.743461 master-0 kubenswrapper[7476]: I0320 08:39:00.743355 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-hj5tl" podUID="64d09f81-5fb6-462a-a736-5649779a6b1a" containerName="registry-server" probeResult="failure" output=< Mar 20 08:39:00.743461 master-0 kubenswrapper[7476]: timeout: failed to connect service ":50051" within 1s Mar 20 08:39:00.743461 master-0 kubenswrapper[7476]: > Mar 20 08:39:00.789688 master-0 kubenswrapper[7476]: I0320 08:39:00.789598 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bt7wn" podUID="9635cdae-0983-4c97-b3ed-dc7a785b1bb6" containerName="registry-server" probeResult="failure" output=< Mar 20 08:39:00.789688 master-0 kubenswrapper[7476]: timeout: failed to connect service ":50051" within 1s Mar 20 08:39:00.789688 master-0 kubenswrapper[7476]: > Mar 20 08:39:01.893493 master-0 kubenswrapper[7476]: I0320 08:39:01.893433 7476 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-f6rv2" podStartSLOduration=4.554490563 podStartE2EDuration="20.89341783s" podCreationTimestamp="2026-03-20 08:38:41 +0000 UTC" firstStartedPulling="2026-03-20 08:38:41.58679593 +0000 UTC m=+202.555564456" lastFinishedPulling="2026-03-20 08:38:57.925723177 +0000 UTC m=+218.894491723" observedRunningTime="2026-03-20 08:39:00.595885257 +0000 
UTC m=+221.564653823" watchObservedRunningTime="2026-03-20 08:39:01.89341783 +0000 UTC m=+222.862186356" Mar 20 08:39:01.894864 master-0 kubenswrapper[7476]: I0320 08:39:01.894832 7476 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-f6rv2"] Mar 20 08:39:01.895055 master-0 kubenswrapper[7476]: I0320 08:39:01.895027 7476 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-f6rv2" podUID="704f15dd-f0d9-40a7-8918-15b7568a9df6" containerName="cluster-cloud-controller-manager" containerID="cri-o://1e07a98f16c147dc4322627b1d8ef6ec25b20d01e46c26af995ef141c071b9eb" gracePeriod=30 Mar 20 08:39:01.895155 master-0 kubenswrapper[7476]: I0320 08:39:01.895088 7476 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-f6rv2" podUID="704f15dd-f0d9-40a7-8918-15b7568a9df6" containerName="kube-rbac-proxy" containerID="cri-o://e60b918d890c41129f1a81ee1623e504fb3a7dadd07ab1d606a8a9325c7529b7" gracePeriod=30 Mar 20 08:39:01.895155 master-0 kubenswrapper[7476]: I0320 08:39:01.895140 7476 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-f6rv2" podUID="704f15dd-f0d9-40a7-8918-15b7568a9df6" containerName="config-sync-controllers" containerID="cri-o://b0676a5552f6c04dcbaf77ec58e3d0b933863b0f9b93dbd148b79cb1a766784b" gracePeriod=30 Mar 20 08:39:01.899902 master-0 kubenswrapper[7476]: I0320 08:39:01.899863 7476 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-machine-approver/machine-approver-6cb57bb5db-r425t"] Mar 20 08:39:01.900103 master-0 kubenswrapper[7476]: I0320 08:39:01.900048 7476 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-r425t" podUID="fbde65eb-24f2-47f2-bfcf-bfe3c68450bd" containerName="kube-rbac-proxy" containerID="cri-o://39e258e292b74677b425c4a3bfa8c610e1c417a6b0a5b3430c92eaab75a1d6c1" gracePeriod=30 Mar 20 08:39:01.900316 master-0 kubenswrapper[7476]: I0320 08:39:01.900170 7476 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-r425t" podUID="fbde65eb-24f2-47f2-bfcf-bfe3c68450bd" containerName="machine-approver-controller" containerID="cri-o://07a078092aa2967a4c75af31b0c9f229d7822f256519dfbea6a70ee5b47ac7fa" gracePeriod=30 Mar 20 08:39:01.916626 master-0 kubenswrapper[7476]: I0320 08:39:01.916574 7476 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-lxv4d"] Mar 20 08:39:01.916836 master-0 kubenswrapper[7476]: E0320 08:39:01.916823 7476 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ea52b89-46f9-4685-aecd-162ba92baaf5" containerName="installer" Mar 20 08:39:01.916871 master-0 kubenswrapper[7476]: I0320 08:39:01.916836 7476 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ea52b89-46f9-4685-aecd-162ba92baaf5" containerName="installer" Mar 20 08:39:01.916983 master-0 kubenswrapper[7476]: I0320 08:39:01.916959 7476 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ea52b89-46f9-4685-aecd-162ba92baaf5" containerName="installer" Mar 20 08:39:01.917668 master-0 kubenswrapper[7476]: I0320 08:39:01.917640 7476 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-lxv4d" Mar 20 08:39:01.919659 master-0 kubenswrapper[7476]: I0320 08:39:01.919621 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 20 08:39:01.919716 master-0 kubenswrapper[7476]: I0320 08:39:01.919651 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-dl9qh" Mar 20 08:39:01.951484 master-0 kubenswrapper[7476]: I0320 08:39:01.951442 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/0f725c4a-234c-44e9-95f2-73f31d2b0fd3-rootfs\") pod \"machine-config-daemon-lxv4d\" (UID: \"0f725c4a-234c-44e9-95f2-73f31d2b0fd3\") " pod="openshift-machine-config-operator/machine-config-daemon-lxv4d" Mar 20 08:39:01.951615 master-0 kubenswrapper[7476]: I0320 08:39:01.951513 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0f725c4a-234c-44e9-95f2-73f31d2b0fd3-proxy-tls\") pod \"machine-config-daemon-lxv4d\" (UID: \"0f725c4a-234c-44e9-95f2-73f31d2b0fd3\") " pod="openshift-machine-config-operator/machine-config-daemon-lxv4d" Mar 20 08:39:01.951615 master-0 kubenswrapper[7476]: I0320 08:39:01.951559 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r22fm\" (UniqueName: \"kubernetes.io/projected/0f725c4a-234c-44e9-95f2-73f31d2b0fd3-kube-api-access-r22fm\") pod \"machine-config-daemon-lxv4d\" (UID: \"0f725c4a-234c-44e9-95f2-73f31d2b0fd3\") " pod="openshift-machine-config-operator/machine-config-daemon-lxv4d" Mar 20 08:39:01.951615 master-0 kubenswrapper[7476]: I0320 08:39:01.951592 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0f725c4a-234c-44e9-95f2-73f31d2b0fd3-mcd-auth-proxy-config\") pod \"machine-config-daemon-lxv4d\" (UID: \"0f725c4a-234c-44e9-95f2-73f31d2b0fd3\") " pod="openshift-machine-config-operator/machine-config-daemon-lxv4d" Mar 20 08:39:02.052893 master-0 kubenswrapper[7476]: I0320 08:39:02.052841 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/0f725c4a-234c-44e9-95f2-73f31d2b0fd3-rootfs\") pod \"machine-config-daemon-lxv4d\" (UID: \"0f725c4a-234c-44e9-95f2-73f31d2b0fd3\") " pod="openshift-machine-config-operator/machine-config-daemon-lxv4d" Mar 20 08:39:02.053038 master-0 kubenswrapper[7476]: I0320 08:39:02.052907 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0f725c4a-234c-44e9-95f2-73f31d2b0fd3-proxy-tls\") pod \"machine-config-daemon-lxv4d\" (UID: \"0f725c4a-234c-44e9-95f2-73f31d2b0fd3\") " pod="openshift-machine-config-operator/machine-config-daemon-lxv4d" Mar 20 08:39:02.053038 master-0 kubenswrapper[7476]: I0320 08:39:02.052935 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r22fm\" (UniqueName: \"kubernetes.io/projected/0f725c4a-234c-44e9-95f2-73f31d2b0fd3-kube-api-access-r22fm\") pod \"machine-config-daemon-lxv4d\" (UID: \"0f725c4a-234c-44e9-95f2-73f31d2b0fd3\") " pod="openshift-machine-config-operator/machine-config-daemon-lxv4d" Mar 20 08:39:02.053038 master-0 kubenswrapper[7476]: I0320 08:39:02.052953 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0f725c4a-234c-44e9-95f2-73f31d2b0fd3-mcd-auth-proxy-config\") pod \"machine-config-daemon-lxv4d\" (UID: \"0f725c4a-234c-44e9-95f2-73f31d2b0fd3\") " pod="openshift-machine-config-operator/machine-config-daemon-lxv4d" Mar 20 08:39:02.053398 
master-0 kubenswrapper[7476]: I0320 08:39:02.053330 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/0f725c4a-234c-44e9-95f2-73f31d2b0fd3-rootfs\") pod \"machine-config-daemon-lxv4d\" (UID: \"0f725c4a-234c-44e9-95f2-73f31d2b0fd3\") " pod="openshift-machine-config-operator/machine-config-daemon-lxv4d" Mar 20 08:39:02.054081 master-0 kubenswrapper[7476]: I0320 08:39:02.054044 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0f725c4a-234c-44e9-95f2-73f31d2b0fd3-mcd-auth-proxy-config\") pod \"machine-config-daemon-lxv4d\" (UID: \"0f725c4a-234c-44e9-95f2-73f31d2b0fd3\") " pod="openshift-machine-config-operator/machine-config-daemon-lxv4d" Mar 20 08:39:02.057311 master-0 kubenswrapper[7476]: I0320 08:39:02.057282 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0f725c4a-234c-44e9-95f2-73f31d2b0fd3-proxy-tls\") pod \"machine-config-daemon-lxv4d\" (UID: \"0f725c4a-234c-44e9-95f2-73f31d2b0fd3\") " pod="openshift-machine-config-operator/machine-config-daemon-lxv4d" Mar 20 08:39:02.107693 master-0 kubenswrapper[7476]: I0320 08:39:02.107420 7476 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-f6rv2" Mar 20 08:39:02.111709 master-0 kubenswrapper[7476]: I0320 08:39:02.111458 7476 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-r425t" Mar 20 08:39:02.131115 master-0 kubenswrapper[7476]: I0320 08:39:02.129856 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r22fm\" (UniqueName: \"kubernetes.io/projected/0f725c4a-234c-44e9-95f2-73f31d2b0fd3-kube-api-access-r22fm\") pod \"machine-config-daemon-lxv4d\" (UID: \"0f725c4a-234c-44e9-95f2-73f31d2b0fd3\") " pod="openshift-machine-config-operator/machine-config-daemon-lxv4d" Mar 20 08:39:02.154112 master-0 kubenswrapper[7476]: I0320 08:39:02.154017 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szz6r\" (UniqueName: \"kubernetes.io/projected/fbde65eb-24f2-47f2-bfcf-bfe3c68450bd-kube-api-access-szz6r\") pod \"fbde65eb-24f2-47f2-bfcf-bfe3c68450bd\" (UID: \"fbde65eb-24f2-47f2-bfcf-bfe3c68450bd\") " Mar 20 08:39:02.154112 master-0 kubenswrapper[7476]: I0320 08:39:02.154064 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79xs6\" (UniqueName: \"kubernetes.io/projected/704f15dd-f0d9-40a7-8918-15b7568a9df6-kube-api-access-79xs6\") pod \"704f15dd-f0d9-40a7-8918-15b7568a9df6\" (UID: \"704f15dd-f0d9-40a7-8918-15b7568a9df6\") " Mar 20 08:39:02.154112 master-0 kubenswrapper[7476]: I0320 08:39:02.154084 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/704f15dd-f0d9-40a7-8918-15b7568a9df6-host-etc-kube\") pod \"704f15dd-f0d9-40a7-8918-15b7568a9df6\" (UID: \"704f15dd-f0d9-40a7-8918-15b7568a9df6\") " Mar 20 08:39:02.154112 master-0 kubenswrapper[7476]: I0320 08:39:02.154113 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbde65eb-24f2-47f2-bfcf-bfe3c68450bd-config\") pod \"fbde65eb-24f2-47f2-bfcf-bfe3c68450bd\" (UID: 
\"fbde65eb-24f2-47f2-bfcf-bfe3c68450bd\") " Mar 20 08:39:02.154366 master-0 kubenswrapper[7476]: I0320 08:39:02.154133 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fbde65eb-24f2-47f2-bfcf-bfe3c68450bd-auth-proxy-config\") pod \"fbde65eb-24f2-47f2-bfcf-bfe3c68450bd\" (UID: \"fbde65eb-24f2-47f2-bfcf-bfe3c68450bd\") " Mar 20 08:39:02.154366 master-0 kubenswrapper[7476]: I0320 08:39:02.154157 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fbde65eb-24f2-47f2-bfcf-bfe3c68450bd-machine-approver-tls\") pod \"fbde65eb-24f2-47f2-bfcf-bfe3c68450bd\" (UID: \"fbde65eb-24f2-47f2-bfcf-bfe3c68450bd\") " Mar 20 08:39:02.154366 master-0 kubenswrapper[7476]: I0320 08:39:02.154176 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/704f15dd-f0d9-40a7-8918-15b7568a9df6-images\") pod \"704f15dd-f0d9-40a7-8918-15b7568a9df6\" (UID: \"704f15dd-f0d9-40a7-8918-15b7568a9df6\") " Mar 20 08:39:02.154366 master-0 kubenswrapper[7476]: I0320 08:39:02.154192 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/704f15dd-f0d9-40a7-8918-15b7568a9df6-cloud-controller-manager-operator-tls\") pod \"704f15dd-f0d9-40a7-8918-15b7568a9df6\" (UID: \"704f15dd-f0d9-40a7-8918-15b7568a9df6\") " Mar 20 08:39:02.154366 master-0 kubenswrapper[7476]: I0320 08:39:02.154217 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/704f15dd-f0d9-40a7-8918-15b7568a9df6-auth-proxy-config\") pod \"704f15dd-f0d9-40a7-8918-15b7568a9df6\" (UID: \"704f15dd-f0d9-40a7-8918-15b7568a9df6\") " Mar 20 08:39:02.154977 master-0 kubenswrapper[7476]: I0320 08:39:02.154582 
7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/704f15dd-f0d9-40a7-8918-15b7568a9df6-host-etc-kube" (OuterVolumeSpecName: "host-etc-kube") pod "704f15dd-f0d9-40a7-8918-15b7568a9df6" (UID: "704f15dd-f0d9-40a7-8918-15b7568a9df6"). InnerVolumeSpecName "host-etc-kube". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:39:02.154977 master-0 kubenswrapper[7476]: I0320 08:39:02.154849 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/704f15dd-f0d9-40a7-8918-15b7568a9df6-images" (OuterVolumeSpecName: "images") pod "704f15dd-f0d9-40a7-8918-15b7568a9df6" (UID: "704f15dd-f0d9-40a7-8918-15b7568a9df6"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:39:02.155064 master-0 kubenswrapper[7476]: I0320 08:39:02.155012 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/704f15dd-f0d9-40a7-8918-15b7568a9df6-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "704f15dd-f0d9-40a7-8918-15b7568a9df6" (UID: "704f15dd-f0d9-40a7-8918-15b7568a9df6"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:39:02.155250 master-0 kubenswrapper[7476]: I0320 08:39:02.155190 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fbde65eb-24f2-47f2-bfcf-bfe3c68450bd-config" (OuterVolumeSpecName: "config") pod "fbde65eb-24f2-47f2-bfcf-bfe3c68450bd" (UID: "fbde65eb-24f2-47f2-bfcf-bfe3c68450bd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:39:02.155331 master-0 kubenswrapper[7476]: I0320 08:39:02.155308 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fbde65eb-24f2-47f2-bfcf-bfe3c68450bd-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "fbde65eb-24f2-47f2-bfcf-bfe3c68450bd" (UID: "fbde65eb-24f2-47f2-bfcf-bfe3c68450bd"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:39:02.158259 master-0 kubenswrapper[7476]: I0320 08:39:02.158199 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/704f15dd-f0d9-40a7-8918-15b7568a9df6-cloud-controller-manager-operator-tls" (OuterVolumeSpecName: "cloud-controller-manager-operator-tls") pod "704f15dd-f0d9-40a7-8918-15b7568a9df6" (UID: "704f15dd-f0d9-40a7-8918-15b7568a9df6"). InnerVolumeSpecName "cloud-controller-manager-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:39:02.158259 master-0 kubenswrapper[7476]: I0320 08:39:02.158219 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbde65eb-24f2-47f2-bfcf-bfe3c68450bd-kube-api-access-szz6r" (OuterVolumeSpecName: "kube-api-access-szz6r") pod "fbde65eb-24f2-47f2-bfcf-bfe3c68450bd" (UID: "fbde65eb-24f2-47f2-bfcf-bfe3c68450bd"). InnerVolumeSpecName "kube-api-access-szz6r". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:39:02.158259 master-0 kubenswrapper[7476]: I0320 08:39:02.158227 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbde65eb-24f2-47f2-bfcf-bfe3c68450bd-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "fbde65eb-24f2-47f2-bfcf-bfe3c68450bd" (UID: "fbde65eb-24f2-47f2-bfcf-bfe3c68450bd"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:39:02.158391 master-0 kubenswrapper[7476]: I0320 08:39:02.158285 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/704f15dd-f0d9-40a7-8918-15b7568a9df6-kube-api-access-79xs6" (OuterVolumeSpecName: "kube-api-access-79xs6") pod "704f15dd-f0d9-40a7-8918-15b7568a9df6" (UID: "704f15dd-f0d9-40a7-8918-15b7568a9df6"). InnerVolumeSpecName "kube-api-access-79xs6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:39:02.255676 master-0 kubenswrapper[7476]: I0320 08:39:02.255612 7476 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/704f15dd-f0d9-40a7-8918-15b7568a9df6-auth-proxy-config\") on node \"master-0\" DevicePath \"\"" Mar 20 08:39:02.255676 master-0 kubenswrapper[7476]: I0320 08:39:02.255665 7476 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-szz6r\" (UniqueName: \"kubernetes.io/projected/fbde65eb-24f2-47f2-bfcf-bfe3c68450bd-kube-api-access-szz6r\") on node \"master-0\" DevicePath \"\"" Mar 20 08:39:02.255676 master-0 kubenswrapper[7476]: I0320 08:39:02.255678 7476 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-79xs6\" (UniqueName: \"kubernetes.io/projected/704f15dd-f0d9-40a7-8918-15b7568a9df6-kube-api-access-79xs6\") on node \"master-0\" DevicePath \"\"" Mar 20 08:39:02.255953 master-0 kubenswrapper[7476]: I0320 08:39:02.255691 7476 reconciler_common.go:293] "Volume detached for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/704f15dd-f0d9-40a7-8918-15b7568a9df6-host-etc-kube\") on node \"master-0\" DevicePath \"\"" Mar 20 08:39:02.255953 master-0 kubenswrapper[7476]: I0320 08:39:02.255705 7476 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbde65eb-24f2-47f2-bfcf-bfe3c68450bd-config\") on node \"master-0\" DevicePath \"\"" Mar 20 08:39:02.255953 
master-0 kubenswrapper[7476]: I0320 08:39:02.255718 7476 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fbde65eb-24f2-47f2-bfcf-bfe3c68450bd-auth-proxy-config\") on node \"master-0\" DevicePath \"\"" Mar 20 08:39:02.255953 master-0 kubenswrapper[7476]: I0320 08:39:02.255730 7476 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fbde65eb-24f2-47f2-bfcf-bfe3c68450bd-machine-approver-tls\") on node \"master-0\" DevicePath \"\"" Mar 20 08:39:02.255953 master-0 kubenswrapper[7476]: I0320 08:39:02.255741 7476 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/704f15dd-f0d9-40a7-8918-15b7568a9df6-images\") on node \"master-0\" DevicePath \"\"" Mar 20 08:39:02.255953 master-0 kubenswrapper[7476]: I0320 08:39:02.255753 7476 reconciler_common.go:293] "Volume detached for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/704f15dd-f0d9-40a7-8918-15b7568a9df6-cloud-controller-manager-operator-tls\") on node \"master-0\" DevicePath \"\"" Mar 20 08:39:02.404238 master-0 kubenswrapper[7476]: I0320 08:39:02.404123 7476 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-lxv4d" Mar 20 08:39:02.835901 master-0 kubenswrapper[7476]: I0320 08:39:02.835851 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxv4d" event={"ID":"0f725c4a-234c-44e9-95f2-73f31d2b0fd3","Type":"ContainerStarted","Data":"26055e3a0db35a38b3a239692e9a4981d421d70eb75773637c0ded0f0062866a"} Mar 20 08:39:02.835901 master-0 kubenswrapper[7476]: I0320 08:39:02.835903 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxv4d" event={"ID":"0f725c4a-234c-44e9-95f2-73f31d2b0fd3","Type":"ContainerStarted","Data":"3be597dd6af294be5c2ca7f07c208566e2ea40f0cb53618a5e8df432f0f812a2"} Mar 20 08:39:02.836144 master-0 kubenswrapper[7476]: I0320 08:39:02.835917 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxv4d" event={"ID":"0f725c4a-234c-44e9-95f2-73f31d2b0fd3","Type":"ContainerStarted","Data":"7ed933ad5ab2402e750d28bcdcc40b75fc2d12d35fd030d2dca7b16f6da20585"} Mar 20 08:39:02.840395 master-0 kubenswrapper[7476]: I0320 08:39:02.838651 7476 generic.go:334] "Generic (PLEG): container finished" podID="704f15dd-f0d9-40a7-8918-15b7568a9df6" containerID="e60b918d890c41129f1a81ee1623e504fb3a7dadd07ab1d606a8a9325c7529b7" exitCode=0 Mar 20 08:39:02.840395 master-0 kubenswrapper[7476]: I0320 08:39:02.838683 7476 generic.go:334] "Generic (PLEG): container finished" podID="704f15dd-f0d9-40a7-8918-15b7568a9df6" containerID="b0676a5552f6c04dcbaf77ec58e3d0b933863b0f9b93dbd148b79cb1a766784b" exitCode=0 Mar 20 08:39:02.840395 master-0 kubenswrapper[7476]: I0320 08:39:02.838692 7476 generic.go:334] "Generic (PLEG): container finished" podID="704f15dd-f0d9-40a7-8918-15b7568a9df6" containerID="1e07a98f16c147dc4322627b1d8ef6ec25b20d01e46c26af995ef141c071b9eb" exitCode=0 Mar 20 08:39:02.840395 master-0 kubenswrapper[7476]: 
I0320 08:39:02.838684 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-f6rv2" event={"ID":"704f15dd-f0d9-40a7-8918-15b7568a9df6","Type":"ContainerDied","Data":"e60b918d890c41129f1a81ee1623e504fb3a7dadd07ab1d606a8a9325c7529b7"} Mar 20 08:39:02.840395 master-0 kubenswrapper[7476]: I0320 08:39:02.838722 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-f6rv2" event={"ID":"704f15dd-f0d9-40a7-8918-15b7568a9df6","Type":"ContainerDied","Data":"b0676a5552f6c04dcbaf77ec58e3d0b933863b0f9b93dbd148b79cb1a766784b"} Mar 20 08:39:02.840395 master-0 kubenswrapper[7476]: I0320 08:39:02.838728 7476 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-f6rv2" Mar 20 08:39:02.840395 master-0 kubenswrapper[7476]: I0320 08:39:02.838738 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-f6rv2" event={"ID":"704f15dd-f0d9-40a7-8918-15b7568a9df6","Type":"ContainerDied","Data":"1e07a98f16c147dc4322627b1d8ef6ec25b20d01e46c26af995ef141c071b9eb"} Mar 20 08:39:02.840395 master-0 kubenswrapper[7476]: I0320 08:39:02.838752 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-f6rv2" event={"ID":"704f15dd-f0d9-40a7-8918-15b7568a9df6","Type":"ContainerDied","Data":"b87c0cbc890628e849ef78e044896edb129ed90926645e74ee377de7d85abcd2"} Mar 20 08:39:02.840395 master-0 kubenswrapper[7476]: I0320 08:39:02.838782 7476 scope.go:117] "RemoveContainer" containerID="e60b918d890c41129f1a81ee1623e504fb3a7dadd07ab1d606a8a9325c7529b7" Mar 20 08:39:02.841001 master-0 
kubenswrapper[7476]: I0320 08:39:02.840567 7476 generic.go:334] "Generic (PLEG): container finished" podID="fbde65eb-24f2-47f2-bfcf-bfe3c68450bd" containerID="07a078092aa2967a4c75af31b0c9f229d7822f256519dfbea6a70ee5b47ac7fa" exitCode=0 Mar 20 08:39:02.841001 master-0 kubenswrapper[7476]: I0320 08:39:02.840595 7476 generic.go:334] "Generic (PLEG): container finished" podID="fbde65eb-24f2-47f2-bfcf-bfe3c68450bd" containerID="39e258e292b74677b425c4a3bfa8c610e1c417a6b0a5b3430c92eaab75a1d6c1" exitCode=0 Mar 20 08:39:02.841001 master-0 kubenswrapper[7476]: I0320 08:39:02.840613 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-r425t" event={"ID":"fbde65eb-24f2-47f2-bfcf-bfe3c68450bd","Type":"ContainerDied","Data":"07a078092aa2967a4c75af31b0c9f229d7822f256519dfbea6a70ee5b47ac7fa"} Mar 20 08:39:02.841001 master-0 kubenswrapper[7476]: I0320 08:39:02.840633 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-r425t" event={"ID":"fbde65eb-24f2-47f2-bfcf-bfe3c68450bd","Type":"ContainerDied","Data":"39e258e292b74677b425c4a3bfa8c610e1c417a6b0a5b3430c92eaab75a1d6c1"} Mar 20 08:39:02.841001 master-0 kubenswrapper[7476]: I0320 08:39:02.840648 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-r425t" event={"ID":"fbde65eb-24f2-47f2-bfcf-bfe3c68450bd","Type":"ContainerDied","Data":"4376ae2020b101775c7b2a911d516a84b1826041ed1d56a3aeffca67bb528aec"} Mar 20 08:39:02.841001 master-0 kubenswrapper[7476]: I0320 08:39:02.840697 7476 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-6cb57bb5db-r425t" Mar 20 08:39:02.866125 master-0 kubenswrapper[7476]: I0320 08:39:02.864343 7476 scope.go:117] "RemoveContainer" containerID="b0676a5552f6c04dcbaf77ec58e3d0b933863b0f9b93dbd148b79cb1a766784b" Mar 20 08:39:02.882198 master-0 kubenswrapper[7476]: I0320 08:39:02.882172 7476 scope.go:117] "RemoveContainer" containerID="1e07a98f16c147dc4322627b1d8ef6ec25b20d01e46c26af995ef141c071b9eb" Mar 20 08:39:02.923662 master-0 kubenswrapper[7476]: I0320 08:39:02.922946 7476 scope.go:117] "RemoveContainer" containerID="e60b918d890c41129f1a81ee1623e504fb3a7dadd07ab1d606a8a9325c7529b7" Mar 20 08:39:02.923662 master-0 kubenswrapper[7476]: E0320 08:39:02.923543 7476 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e60b918d890c41129f1a81ee1623e504fb3a7dadd07ab1d606a8a9325c7529b7\": container with ID starting with e60b918d890c41129f1a81ee1623e504fb3a7dadd07ab1d606a8a9325c7529b7 not found: ID does not exist" containerID="e60b918d890c41129f1a81ee1623e504fb3a7dadd07ab1d606a8a9325c7529b7" Mar 20 08:39:02.923662 master-0 kubenswrapper[7476]: I0320 08:39:02.923590 7476 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e60b918d890c41129f1a81ee1623e504fb3a7dadd07ab1d606a8a9325c7529b7"} err="failed to get container status \"e60b918d890c41129f1a81ee1623e504fb3a7dadd07ab1d606a8a9325c7529b7\": rpc error: code = NotFound desc = could not find container \"e60b918d890c41129f1a81ee1623e504fb3a7dadd07ab1d606a8a9325c7529b7\": container with ID starting with e60b918d890c41129f1a81ee1623e504fb3a7dadd07ab1d606a8a9325c7529b7 not found: ID does not exist" Mar 20 08:39:02.923662 master-0 kubenswrapper[7476]: I0320 08:39:02.923624 7476 scope.go:117] "RemoveContainer" containerID="b0676a5552f6c04dcbaf77ec58e3d0b933863b0f9b93dbd148b79cb1a766784b" Mar 20 08:39:02.925287 master-0 kubenswrapper[7476]: 
E0320 08:39:02.924602 7476 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0676a5552f6c04dcbaf77ec58e3d0b933863b0f9b93dbd148b79cb1a766784b\": container with ID starting with b0676a5552f6c04dcbaf77ec58e3d0b933863b0f9b93dbd148b79cb1a766784b not found: ID does not exist" containerID="b0676a5552f6c04dcbaf77ec58e3d0b933863b0f9b93dbd148b79cb1a766784b" Mar 20 08:39:02.925287 master-0 kubenswrapper[7476]: I0320 08:39:02.924634 7476 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0676a5552f6c04dcbaf77ec58e3d0b933863b0f9b93dbd148b79cb1a766784b"} err="failed to get container status \"b0676a5552f6c04dcbaf77ec58e3d0b933863b0f9b93dbd148b79cb1a766784b\": rpc error: code = NotFound desc = could not find container \"b0676a5552f6c04dcbaf77ec58e3d0b933863b0f9b93dbd148b79cb1a766784b\": container with ID starting with b0676a5552f6c04dcbaf77ec58e3d0b933863b0f9b93dbd148b79cb1a766784b not found: ID does not exist" Mar 20 08:39:02.925287 master-0 kubenswrapper[7476]: I0320 08:39:02.924657 7476 scope.go:117] "RemoveContainer" containerID="1e07a98f16c147dc4322627b1d8ef6ec25b20d01e46c26af995ef141c071b9eb" Mar 20 08:39:02.925287 master-0 kubenswrapper[7476]: E0320 08:39:02.925060 7476 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e07a98f16c147dc4322627b1d8ef6ec25b20d01e46c26af995ef141c071b9eb\": container with ID starting with 1e07a98f16c147dc4322627b1d8ef6ec25b20d01e46c26af995ef141c071b9eb not found: ID does not exist" containerID="1e07a98f16c147dc4322627b1d8ef6ec25b20d01e46c26af995ef141c071b9eb" Mar 20 08:39:02.925287 master-0 kubenswrapper[7476]: I0320 08:39:02.925097 7476 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e07a98f16c147dc4322627b1d8ef6ec25b20d01e46c26af995ef141c071b9eb"} err="failed to get container status 
\"1e07a98f16c147dc4322627b1d8ef6ec25b20d01e46c26af995ef141c071b9eb\": rpc error: code = NotFound desc = could not find container \"1e07a98f16c147dc4322627b1d8ef6ec25b20d01e46c26af995ef141c071b9eb\": container with ID starting with 1e07a98f16c147dc4322627b1d8ef6ec25b20d01e46c26af995ef141c071b9eb not found: ID does not exist" Mar 20 08:39:02.925287 master-0 kubenswrapper[7476]: I0320 08:39:02.925116 7476 scope.go:117] "RemoveContainer" containerID="e60b918d890c41129f1a81ee1623e504fb3a7dadd07ab1d606a8a9325c7529b7" Mar 20 08:39:02.926494 master-0 kubenswrapper[7476]: I0320 08:39:02.925684 7476 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e60b918d890c41129f1a81ee1623e504fb3a7dadd07ab1d606a8a9325c7529b7"} err="failed to get container status \"e60b918d890c41129f1a81ee1623e504fb3a7dadd07ab1d606a8a9325c7529b7\": rpc error: code = NotFound desc = could not find container \"e60b918d890c41129f1a81ee1623e504fb3a7dadd07ab1d606a8a9325c7529b7\": container with ID starting with e60b918d890c41129f1a81ee1623e504fb3a7dadd07ab1d606a8a9325c7529b7 not found: ID does not exist" Mar 20 08:39:02.926494 master-0 kubenswrapper[7476]: I0320 08:39:02.925821 7476 scope.go:117] "RemoveContainer" containerID="b0676a5552f6c04dcbaf77ec58e3d0b933863b0f9b93dbd148b79cb1a766784b" Mar 20 08:39:02.926494 master-0 kubenswrapper[7476]: I0320 08:39:02.926218 7476 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0676a5552f6c04dcbaf77ec58e3d0b933863b0f9b93dbd148b79cb1a766784b"} err="failed to get container status \"b0676a5552f6c04dcbaf77ec58e3d0b933863b0f9b93dbd148b79cb1a766784b\": rpc error: code = NotFound desc = could not find container \"b0676a5552f6c04dcbaf77ec58e3d0b933863b0f9b93dbd148b79cb1a766784b\": container with ID starting with b0676a5552f6c04dcbaf77ec58e3d0b933863b0f9b93dbd148b79cb1a766784b not found: ID does not exist" Mar 20 08:39:02.926494 master-0 kubenswrapper[7476]: I0320 08:39:02.926245 7476 
scope.go:117] "RemoveContainer" containerID="1e07a98f16c147dc4322627b1d8ef6ec25b20d01e46c26af995ef141c071b9eb" Mar 20 08:39:02.930305 master-0 kubenswrapper[7476]: I0320 08:39:02.930140 7476 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e07a98f16c147dc4322627b1d8ef6ec25b20d01e46c26af995ef141c071b9eb"} err="failed to get container status \"1e07a98f16c147dc4322627b1d8ef6ec25b20d01e46c26af995ef141c071b9eb\": rpc error: code = NotFound desc = could not find container \"1e07a98f16c147dc4322627b1d8ef6ec25b20d01e46c26af995ef141c071b9eb\": container with ID starting with 1e07a98f16c147dc4322627b1d8ef6ec25b20d01e46c26af995ef141c071b9eb not found: ID does not exist" Mar 20 08:39:02.930305 master-0 kubenswrapper[7476]: I0320 08:39:02.930166 7476 scope.go:117] "RemoveContainer" containerID="e60b918d890c41129f1a81ee1623e504fb3a7dadd07ab1d606a8a9325c7529b7" Mar 20 08:39:02.931929 master-0 kubenswrapper[7476]: I0320 08:39:02.931547 7476 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e60b918d890c41129f1a81ee1623e504fb3a7dadd07ab1d606a8a9325c7529b7"} err="failed to get container status \"e60b918d890c41129f1a81ee1623e504fb3a7dadd07ab1d606a8a9325c7529b7\": rpc error: code = NotFound desc = could not find container \"e60b918d890c41129f1a81ee1623e504fb3a7dadd07ab1d606a8a9325c7529b7\": container with ID starting with e60b918d890c41129f1a81ee1623e504fb3a7dadd07ab1d606a8a9325c7529b7 not found: ID does not exist" Mar 20 08:39:02.931929 master-0 kubenswrapper[7476]: I0320 08:39:02.931597 7476 scope.go:117] "RemoveContainer" containerID="b0676a5552f6c04dcbaf77ec58e3d0b933863b0f9b93dbd148b79cb1a766784b" Mar 20 08:39:02.932073 master-0 kubenswrapper[7476]: I0320 08:39:02.931980 7476 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0676a5552f6c04dcbaf77ec58e3d0b933863b0f9b93dbd148b79cb1a766784b"} err="failed to get container status 
\"b0676a5552f6c04dcbaf77ec58e3d0b933863b0f9b93dbd148b79cb1a766784b\": rpc error: code = NotFound desc = could not find container \"b0676a5552f6c04dcbaf77ec58e3d0b933863b0f9b93dbd148b79cb1a766784b\": container with ID starting with b0676a5552f6c04dcbaf77ec58e3d0b933863b0f9b93dbd148b79cb1a766784b not found: ID does not exist" Mar 20 08:39:02.932073 master-0 kubenswrapper[7476]: I0320 08:39:02.932028 7476 scope.go:117] "RemoveContainer" containerID="1e07a98f16c147dc4322627b1d8ef6ec25b20d01e46c26af995ef141c071b9eb" Mar 20 08:39:02.933990 master-0 kubenswrapper[7476]: I0320 08:39:02.932637 7476 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e07a98f16c147dc4322627b1d8ef6ec25b20d01e46c26af995ef141c071b9eb"} err="failed to get container status \"1e07a98f16c147dc4322627b1d8ef6ec25b20d01e46c26af995ef141c071b9eb\": rpc error: code = NotFound desc = could not find container \"1e07a98f16c147dc4322627b1d8ef6ec25b20d01e46c26af995ef141c071b9eb\": container with ID starting with 1e07a98f16c147dc4322627b1d8ef6ec25b20d01e46c26af995ef141c071b9eb not found: ID does not exist" Mar 20 08:39:02.933990 master-0 kubenswrapper[7476]: I0320 08:39:02.932717 7476 scope.go:117] "RemoveContainer" containerID="07a078092aa2967a4c75af31b0c9f229d7822f256519dfbea6a70ee5b47ac7fa" Mar 20 08:39:02.961575 master-0 kubenswrapper[7476]: I0320 08:39:02.961409 7476 scope.go:117] "RemoveContainer" containerID="39e258e292b74677b425c4a3bfa8c610e1c417a6b0a5b3430c92eaab75a1d6c1" Mar 20 08:39:02.988338 master-0 kubenswrapper[7476]: I0320 08:39:02.988257 7476 scope.go:117] "RemoveContainer" containerID="07a078092aa2967a4c75af31b0c9f229d7822f256519dfbea6a70ee5b47ac7fa" Mar 20 08:39:02.990177 master-0 kubenswrapper[7476]: E0320 08:39:02.990130 7476 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07a078092aa2967a4c75af31b0c9f229d7822f256519dfbea6a70ee5b47ac7fa\": container with ID starting with 
07a078092aa2967a4c75af31b0c9f229d7822f256519dfbea6a70ee5b47ac7fa not found: ID does not exist" containerID="07a078092aa2967a4c75af31b0c9f229d7822f256519dfbea6a70ee5b47ac7fa" Mar 20 08:39:02.990322 master-0 kubenswrapper[7476]: I0320 08:39:02.990187 7476 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07a078092aa2967a4c75af31b0c9f229d7822f256519dfbea6a70ee5b47ac7fa"} err="failed to get container status \"07a078092aa2967a4c75af31b0c9f229d7822f256519dfbea6a70ee5b47ac7fa\": rpc error: code = NotFound desc = could not find container \"07a078092aa2967a4c75af31b0c9f229d7822f256519dfbea6a70ee5b47ac7fa\": container with ID starting with 07a078092aa2967a4c75af31b0c9f229d7822f256519dfbea6a70ee5b47ac7fa not found: ID does not exist" Mar 20 08:39:02.990322 master-0 kubenswrapper[7476]: I0320 08:39:02.990225 7476 scope.go:117] "RemoveContainer" containerID="39e258e292b74677b425c4a3bfa8c610e1c417a6b0a5b3430c92eaab75a1d6c1" Mar 20 08:39:02.990753 master-0 kubenswrapper[7476]: E0320 08:39:02.990701 7476 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39e258e292b74677b425c4a3bfa8c610e1c417a6b0a5b3430c92eaab75a1d6c1\": container with ID starting with 39e258e292b74677b425c4a3bfa8c610e1c417a6b0a5b3430c92eaab75a1d6c1 not found: ID does not exist" containerID="39e258e292b74677b425c4a3bfa8c610e1c417a6b0a5b3430c92eaab75a1d6c1" Mar 20 08:39:02.990842 master-0 kubenswrapper[7476]: I0320 08:39:02.990756 7476 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39e258e292b74677b425c4a3bfa8c610e1c417a6b0a5b3430c92eaab75a1d6c1"} err="failed to get container status \"39e258e292b74677b425c4a3bfa8c610e1c417a6b0a5b3430c92eaab75a1d6c1\": rpc error: code = NotFound desc = could not find container \"39e258e292b74677b425c4a3bfa8c610e1c417a6b0a5b3430c92eaab75a1d6c1\": container with ID starting with 
39e258e292b74677b425c4a3bfa8c610e1c417a6b0a5b3430c92eaab75a1d6c1 not found: ID does not exist" Mar 20 08:39:02.990842 master-0 kubenswrapper[7476]: I0320 08:39:02.990791 7476 scope.go:117] "RemoveContainer" containerID="07a078092aa2967a4c75af31b0c9f229d7822f256519dfbea6a70ee5b47ac7fa" Mar 20 08:39:02.991183 master-0 kubenswrapper[7476]: I0320 08:39:02.991149 7476 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07a078092aa2967a4c75af31b0c9f229d7822f256519dfbea6a70ee5b47ac7fa"} err="failed to get container status \"07a078092aa2967a4c75af31b0c9f229d7822f256519dfbea6a70ee5b47ac7fa\": rpc error: code = NotFound desc = could not find container \"07a078092aa2967a4c75af31b0c9f229d7822f256519dfbea6a70ee5b47ac7fa\": container with ID starting with 07a078092aa2967a4c75af31b0c9f229d7822f256519dfbea6a70ee5b47ac7fa not found: ID does not exist" Mar 20 08:39:02.991183 master-0 kubenswrapper[7476]: I0320 08:39:02.991181 7476 scope.go:117] "RemoveContainer" containerID="39e258e292b74677b425c4a3bfa8c610e1c417a6b0a5b3430c92eaab75a1d6c1" Mar 20 08:39:02.992329 master-0 kubenswrapper[7476]: I0320 08:39:02.992230 7476 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39e258e292b74677b425c4a3bfa8c610e1c417a6b0a5b3430c92eaab75a1d6c1"} err="failed to get container status \"39e258e292b74677b425c4a3bfa8c610e1c417a6b0a5b3430c92eaab75a1d6c1\": rpc error: code = NotFound desc = could not find container \"39e258e292b74677b425c4a3bfa8c610e1c417a6b0a5b3430c92eaab75a1d6c1\": container with ID starting with 39e258e292b74677b425c4a3bfa8c610e1c417a6b0a5b3430c92eaab75a1d6c1 not found: ID does not exist" Mar 20 08:39:02.992738 master-0 kubenswrapper[7476]: I0320 08:39:02.992688 7476 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-lxv4d" podStartSLOduration=1.992679141 podStartE2EDuration="1.992679141s" 
podCreationTimestamp="2026-03-20 08:39:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:39:02.943534697 +0000 UTC m=+223.912303223" watchObservedRunningTime="2026-03-20 08:39:02.992679141 +0000 UTC m=+223.961447667" Mar 20 08:39:02.993452 master-0 kubenswrapper[7476]: I0320 08:39:02.993422 7476 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-machine-approver/machine-approver-6cb57bb5db-r425t"] Mar 20 08:39:03.042098 master-0 kubenswrapper[7476]: I0320 08:39:03.042012 7476 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cluster-machine-approver/machine-approver-6cb57bb5db-r425t"] Mar 20 08:39:03.088542 master-0 kubenswrapper[7476]: I0320 08:39:03.088484 7476 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-f6rv2"] Mar 20 08:39:03.106895 master-0 kubenswrapper[7476]: I0320 08:39:03.106837 7476 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7559f7c68c-f6rv2"] Mar 20 08:39:03.118688 master-0 kubenswrapper[7476]: I0320 08:39:03.118624 7476 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-5c6485487f-897zl"] Mar 20 08:39:03.118950 master-0 kubenswrapper[7476]: E0320 08:39:03.118915 7476 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbde65eb-24f2-47f2-bfcf-bfe3c68450bd" containerName="kube-rbac-proxy" Mar 20 08:39:03.118950 master-0 kubenswrapper[7476]: I0320 08:39:03.118943 7476 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbde65eb-24f2-47f2-bfcf-bfe3c68450bd" containerName="kube-rbac-proxy" Mar 20 08:39:03.119044 master-0 kubenswrapper[7476]: E0320 08:39:03.118959 7476 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="704f15dd-f0d9-40a7-8918-15b7568a9df6" containerName="cluster-cloud-controller-manager" Mar 20 08:39:03.119044 master-0 kubenswrapper[7476]: I0320 08:39:03.118969 7476 state_mem.go:107] "Deleted CPUSet assignment" podUID="704f15dd-f0d9-40a7-8918-15b7568a9df6" containerName="cluster-cloud-controller-manager" Mar 20 08:39:03.119044 master-0 kubenswrapper[7476]: E0320 08:39:03.118986 7476 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbde65eb-24f2-47f2-bfcf-bfe3c68450bd" containerName="machine-approver-controller" Mar 20 08:39:03.119044 master-0 kubenswrapper[7476]: I0320 08:39:03.118997 7476 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbde65eb-24f2-47f2-bfcf-bfe3c68450bd" containerName="machine-approver-controller" Mar 20 08:39:03.119044 master-0 kubenswrapper[7476]: E0320 08:39:03.119007 7476 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="704f15dd-f0d9-40a7-8918-15b7568a9df6" containerName="kube-rbac-proxy" Mar 20 08:39:03.119044 master-0 kubenswrapper[7476]: I0320 08:39:03.119016 7476 state_mem.go:107] "Deleted CPUSet assignment" podUID="704f15dd-f0d9-40a7-8918-15b7568a9df6" containerName="kube-rbac-proxy" Mar 20 08:39:03.119044 master-0 kubenswrapper[7476]: E0320 08:39:03.119026 7476 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="704f15dd-f0d9-40a7-8918-15b7568a9df6" containerName="config-sync-controllers" Mar 20 08:39:03.119044 master-0 kubenswrapper[7476]: I0320 08:39:03.119034 7476 state_mem.go:107] "Deleted CPUSet assignment" podUID="704f15dd-f0d9-40a7-8918-15b7568a9df6" containerName="config-sync-controllers" Mar 20 08:39:03.119344 master-0 kubenswrapper[7476]: I0320 08:39:03.119155 7476 memory_manager.go:354] "RemoveStaleState removing state" podUID="704f15dd-f0d9-40a7-8918-15b7568a9df6" containerName="cluster-cloud-controller-manager" Mar 20 08:39:03.119344 master-0 kubenswrapper[7476]: I0320 08:39:03.119173 7476 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="704f15dd-f0d9-40a7-8918-15b7568a9df6" containerName="config-sync-controllers" Mar 20 08:39:03.119344 master-0 kubenswrapper[7476]: I0320 08:39:03.119186 7476 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbde65eb-24f2-47f2-bfcf-bfe3c68450bd" containerName="kube-rbac-proxy" Mar 20 08:39:03.119344 master-0 kubenswrapper[7476]: I0320 08:39:03.119208 7476 memory_manager.go:354] "RemoveStaleState removing state" podUID="704f15dd-f0d9-40a7-8918-15b7568a9df6" containerName="kube-rbac-proxy" Mar 20 08:39:03.119344 master-0 kubenswrapper[7476]: I0320 08:39:03.119224 7476 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbde65eb-24f2-47f2-bfcf-bfe3c68450bd" containerName="machine-approver-controller" Mar 20 08:39:03.119965 master-0 kubenswrapper[7476]: I0320 08:39:03.119930 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-897zl" Mar 20 08:39:03.122679 master-0 kubenswrapper[7476]: I0320 08:39:03.122069 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 20 08:39:03.122679 master-0 kubenswrapper[7476]: I0320 08:39:03.122309 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 20 08:39:03.122679 master-0 kubenswrapper[7476]: I0320 08:39:03.122446 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 20 08:39:03.122679 master-0 kubenswrapper[7476]: I0320 08:39:03.122485 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 20 08:39:03.123853 master-0 kubenswrapper[7476]: I0320 08:39:03.123815 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 20 
08:39:03.123938 master-0 kubenswrapper[7476]: I0320 08:39:03.123865 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-rqpg6" Mar 20 08:39:03.166027 master-0 kubenswrapper[7476]: I0320 08:39:03.165981 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/14fe2256-ce3d-4c70-9e6b-0e3c7d4d96dc-auth-proxy-config\") pod \"machine-approver-5c6485487f-897zl\" (UID: \"14fe2256-ce3d-4c70-9e6b-0e3c7d4d96dc\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-897zl" Mar 20 08:39:03.166106 master-0 kubenswrapper[7476]: I0320 08:39:03.166045 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlgd7\" (UniqueName: \"kubernetes.io/projected/14fe2256-ce3d-4c70-9e6b-0e3c7d4d96dc-kube-api-access-hlgd7\") pod \"machine-approver-5c6485487f-897zl\" (UID: \"14fe2256-ce3d-4c70-9e6b-0e3c7d4d96dc\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-897zl" Mar 20 08:39:03.166106 master-0 kubenswrapper[7476]: I0320 08:39:03.166072 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14fe2256-ce3d-4c70-9e6b-0e3c7d4d96dc-config\") pod \"machine-approver-5c6485487f-897zl\" (UID: \"14fe2256-ce3d-4c70-9e6b-0e3c7d4d96dc\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-897zl" Mar 20 08:39:03.166106 master-0 kubenswrapper[7476]: I0320 08:39:03.166111 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/14fe2256-ce3d-4c70-9e6b-0e3c7d4d96dc-machine-approver-tls\") pod \"machine-approver-5c6485487f-897zl\" (UID: \"14fe2256-ce3d-4c70-9e6b-0e3c7d4d96dc\") " 
pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-897zl" Mar 20 08:39:03.234780 master-0 kubenswrapper[7476]: I0320 08:39:03.234701 7476 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vk98n"] Mar 20 08:39:03.238913 master-0 kubenswrapper[7476]: I0320 08:39:03.235627 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vk98n" Mar 20 08:39:03.238913 master-0 kubenswrapper[7476]: I0320 08:39:03.237279 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Mar 20 08:39:03.238913 master-0 kubenswrapper[7476]: I0320 08:39:03.237550 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Mar 20 08:39:03.238913 master-0 kubenswrapper[7476]: I0320 08:39:03.237878 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Mar 20 08:39:03.238913 master-0 kubenswrapper[7476]: I0320 08:39:03.238032 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Mar 20 08:39:03.238913 master-0 kubenswrapper[7476]: I0320 08:39:03.238164 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Mar 20 08:39:03.238913 master-0 kubenswrapper[7476]: I0320 08:39:03.238709 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-8kh8p" Mar 20 08:39:03.243063 master-0 kubenswrapper[7476]: I0320 08:39:03.243016 7476 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="704f15dd-f0d9-40a7-8918-15b7568a9df6" path="/var/lib/kubelet/pods/704f15dd-f0d9-40a7-8918-15b7568a9df6/volumes" Mar 20 08:39:03.244051 master-0 kubenswrapper[7476]: I0320 08:39:03.244017 7476 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbde65eb-24f2-47f2-bfcf-bfe3c68450bd" path="/var/lib/kubelet/pods/fbde65eb-24f2-47f2-bfcf-bfe3c68450bd/volumes" Mar 20 08:39:03.267884 master-0 kubenswrapper[7476]: I0320 08:39:03.267317 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/14fe2256-ce3d-4c70-9e6b-0e3c7d4d96dc-machine-approver-tls\") pod \"machine-approver-5c6485487f-897zl\" (UID: \"14fe2256-ce3d-4c70-9e6b-0e3c7d4d96dc\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-897zl" Mar 20 08:39:03.267884 master-0 kubenswrapper[7476]: I0320 08:39:03.267358 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbzl9\" (UniqueName: \"kubernetes.io/projected/6163bd4b-dc83-4e83-8590-5ac4753bda1c-kube-api-access-zbzl9\") pod \"cluster-cloud-controller-manager-operator-7dff898856-vk98n\" (UID: \"6163bd4b-dc83-4e83-8590-5ac4753bda1c\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vk98n" Mar 20 08:39:03.267884 master-0 kubenswrapper[7476]: I0320 08:39:03.267384 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/6163bd4b-dc83-4e83-8590-5ac4753bda1c-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7dff898856-vk98n\" (UID: \"6163bd4b-dc83-4e83-8590-5ac4753bda1c\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vk98n" Mar 20 08:39:03.267884 
master-0 kubenswrapper[7476]: I0320 08:39:03.267404 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6163bd4b-dc83-4e83-8590-5ac4753bda1c-images\") pod \"cluster-cloud-controller-manager-operator-7dff898856-vk98n\" (UID: \"6163bd4b-dc83-4e83-8590-5ac4753bda1c\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vk98n" Mar 20 08:39:03.267884 master-0 kubenswrapper[7476]: I0320 08:39:03.267423 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/14fe2256-ce3d-4c70-9e6b-0e3c7d4d96dc-auth-proxy-config\") pod \"machine-approver-5c6485487f-897zl\" (UID: \"14fe2256-ce3d-4c70-9e6b-0e3c7d4d96dc\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-897zl" Mar 20 08:39:03.267884 master-0 kubenswrapper[7476]: I0320 08:39:03.267448 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/6163bd4b-dc83-4e83-8590-5ac4753bda1c-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7dff898856-vk98n\" (UID: \"6163bd4b-dc83-4e83-8590-5ac4753bda1c\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vk98n" Mar 20 08:39:03.267884 master-0 kubenswrapper[7476]: I0320 08:39:03.267473 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlgd7\" (UniqueName: \"kubernetes.io/projected/14fe2256-ce3d-4c70-9e6b-0e3c7d4d96dc-kube-api-access-hlgd7\") pod \"machine-approver-5c6485487f-897zl\" (UID: \"14fe2256-ce3d-4c70-9e6b-0e3c7d4d96dc\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-897zl" Mar 20 08:39:03.267884 master-0 kubenswrapper[7476]: I0320 08:39:03.267492 7476 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14fe2256-ce3d-4c70-9e6b-0e3c7d4d96dc-config\") pod \"machine-approver-5c6485487f-897zl\" (UID: \"14fe2256-ce3d-4c70-9e6b-0e3c7d4d96dc\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-897zl" Mar 20 08:39:03.267884 master-0 kubenswrapper[7476]: I0320 08:39:03.267524 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6163bd4b-dc83-4e83-8590-5ac4753bda1c-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7dff898856-vk98n\" (UID: \"6163bd4b-dc83-4e83-8590-5ac4753bda1c\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vk98n" Mar 20 08:39:03.268726 master-0 kubenswrapper[7476]: I0320 08:39:03.268711 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/14fe2256-ce3d-4c70-9e6b-0e3c7d4d96dc-auth-proxy-config\") pod \"machine-approver-5c6485487f-897zl\" (UID: \"14fe2256-ce3d-4c70-9e6b-0e3c7d4d96dc\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-897zl" Mar 20 08:39:03.269118 master-0 kubenswrapper[7476]: I0320 08:39:03.269072 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14fe2256-ce3d-4c70-9e6b-0e3c7d4d96dc-config\") pod \"machine-approver-5c6485487f-897zl\" (UID: \"14fe2256-ce3d-4c70-9e6b-0e3c7d4d96dc\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-897zl" Mar 20 08:39:03.272145 master-0 kubenswrapper[7476]: I0320 08:39:03.272130 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/14fe2256-ce3d-4c70-9e6b-0e3c7d4d96dc-machine-approver-tls\") pod \"machine-approver-5c6485487f-897zl\" (UID: 
\"14fe2256-ce3d-4c70-9e6b-0e3c7d4d96dc\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-897zl" Mar 20 08:39:03.304167 master-0 kubenswrapper[7476]: I0320 08:39:03.304105 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlgd7\" (UniqueName: \"kubernetes.io/projected/14fe2256-ce3d-4c70-9e6b-0e3c7d4d96dc-kube-api-access-hlgd7\") pod \"machine-approver-5c6485487f-897zl\" (UID: \"14fe2256-ce3d-4c70-9e6b-0e3c7d4d96dc\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-897zl" Mar 20 08:39:03.369462 master-0 kubenswrapper[7476]: I0320 08:39:03.369385 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6163bd4b-dc83-4e83-8590-5ac4753bda1c-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7dff898856-vk98n\" (UID: \"6163bd4b-dc83-4e83-8590-5ac4753bda1c\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vk98n" Mar 20 08:39:03.369733 master-0 kubenswrapper[7476]: I0320 08:39:03.369484 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbzl9\" (UniqueName: \"kubernetes.io/projected/6163bd4b-dc83-4e83-8590-5ac4753bda1c-kube-api-access-zbzl9\") pod \"cluster-cloud-controller-manager-operator-7dff898856-vk98n\" (UID: \"6163bd4b-dc83-4e83-8590-5ac4753bda1c\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vk98n" Mar 20 08:39:03.369733 master-0 kubenswrapper[7476]: I0320 08:39:03.369541 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/6163bd4b-dc83-4e83-8590-5ac4753bda1c-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7dff898856-vk98n\" (UID: \"6163bd4b-dc83-4e83-8590-5ac4753bda1c\") " 
pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vk98n" Mar 20 08:39:03.369733 master-0 kubenswrapper[7476]: I0320 08:39:03.369578 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6163bd4b-dc83-4e83-8590-5ac4753bda1c-images\") pod \"cluster-cloud-controller-manager-operator-7dff898856-vk98n\" (UID: \"6163bd4b-dc83-4e83-8590-5ac4753bda1c\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vk98n" Mar 20 08:39:03.369733 master-0 kubenswrapper[7476]: I0320 08:39:03.369632 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/6163bd4b-dc83-4e83-8590-5ac4753bda1c-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7dff898856-vk98n\" (UID: \"6163bd4b-dc83-4e83-8590-5ac4753bda1c\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vk98n" Mar 20 08:39:03.369917 master-0 kubenswrapper[7476]: I0320 08:39:03.369760 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/6163bd4b-dc83-4e83-8590-5ac4753bda1c-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7dff898856-vk98n\" (UID: \"6163bd4b-dc83-4e83-8590-5ac4753bda1c\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vk98n" Mar 20 08:39:03.370535 master-0 kubenswrapper[7476]: I0320 08:39:03.370476 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6163bd4b-dc83-4e83-8590-5ac4753bda1c-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7dff898856-vk98n\" (UID: \"6163bd4b-dc83-4e83-8590-5ac4753bda1c\") " 
pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vk98n" Mar 20 08:39:03.371256 master-0 kubenswrapper[7476]: I0320 08:39:03.371217 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6163bd4b-dc83-4e83-8590-5ac4753bda1c-images\") pod \"cluster-cloud-controller-manager-operator-7dff898856-vk98n\" (UID: \"6163bd4b-dc83-4e83-8590-5ac4753bda1c\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vk98n" Mar 20 08:39:03.373779 master-0 kubenswrapper[7476]: I0320 08:39:03.373740 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/6163bd4b-dc83-4e83-8590-5ac4753bda1c-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7dff898856-vk98n\" (UID: \"6163bd4b-dc83-4e83-8590-5ac4753bda1c\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vk98n" Mar 20 08:39:03.388102 master-0 kubenswrapper[7476]: I0320 08:39:03.388062 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbzl9\" (UniqueName: \"kubernetes.io/projected/6163bd4b-dc83-4e83-8590-5ac4753bda1c-kube-api-access-zbzl9\") pod \"cluster-cloud-controller-manager-operator-7dff898856-vk98n\" (UID: \"6163bd4b-dc83-4e83-8590-5ac4753bda1c\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vk98n" Mar 20 08:39:03.439964 master-0 kubenswrapper[7476]: I0320 08:39:03.439915 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-897zl" Mar 20 08:39:03.561464 master-0 kubenswrapper[7476]: I0320 08:39:03.561407 7476 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vk98n" Mar 20 08:39:03.779209 master-0 kubenswrapper[7476]: I0320 08:39:03.778768 7476 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 20 08:39:03.779209 master-0 kubenswrapper[7476]: I0320 08:39:03.778836 7476 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 20 08:39:03.779209 master-0 kubenswrapper[7476]: I0320 08:39:03.778866 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 20 08:39:03.779209 master-0 kubenswrapper[7476]: I0320 08:39:03.778884 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 20 08:39:03.782232 master-0 kubenswrapper[7476]: I0320 08:39:03.781956 7476 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 20 08:39:03.782232 master-0 kubenswrapper[7476]: I0320 08:39:03.782211 7476 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 20 08:39:03.861924 master-0 kubenswrapper[7476]: I0320 08:39:03.861880 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vk98n" event={"ID":"6163bd4b-dc83-4e83-8590-5ac4753bda1c","Type":"ContainerStarted","Data":"6018dc62d387a9b77f99180b9b59d3182e437f628eb7fce91bb3764fe4982ba6"} Mar 20 08:39:03.864437 master-0 kubenswrapper[7476]: I0320 08:39:03.864352 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-897zl" event={"ID":"14fe2256-ce3d-4c70-9e6b-0e3c7d4d96dc","Type":"ContainerStarted","Data":"27b219397d2ded697fb8c63422f6fe333badb02574d5af0d32c7a5d157330ed0"} Mar 20 08:39:03.864505 master-0 kubenswrapper[7476]: I0320 08:39:03.864456 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-897zl" event={"ID":"14fe2256-ce3d-4c70-9e6b-0e3c7d4d96dc","Type":"ContainerStarted","Data":"ccabd735cd283aaf872e4d4c6439fc21d25d047aca8d8580112cec5049c44ca7"} Mar 20 08:39:03.872304 master-0 kubenswrapper[7476]: I0320 08:39:03.872148 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 20 08:39:03.872304 master-0 kubenswrapper[7476]: I0320 08:39:03.872246 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 20 08:39:04.872142 master-0 kubenswrapper[7476]: I0320 08:39:04.872096 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vk98n" event={"ID":"6163bd4b-dc83-4e83-8590-5ac4753bda1c","Type":"ContainerStarted","Data":"52016baf23be09eb560f695ee764aa3c366d61ff1792a482aac5922ed083323d"} Mar 20 08:39:04.872835 master-0 kubenswrapper[7476]: I0320 08:39:04.872811 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vk98n" event={"ID":"6163bd4b-dc83-4e83-8590-5ac4753bda1c","Type":"ContainerStarted","Data":"f5f32b4aa675a44318fb714c2260768744660087826b3e03d8f23272cd36e48d"} Mar 20 08:39:04.872933 master-0 kubenswrapper[7476]: I0320 08:39:04.872916 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vk98n" event={"ID":"6163bd4b-dc83-4e83-8590-5ac4753bda1c","Type":"ContainerStarted","Data":"8318704eaa08899e772deabe42128ea1b882f7234facbd87ca64f6d3f0952a1a"} Mar 20 08:39:04.873844 master-0 kubenswrapper[7476]: I0320 08:39:04.873804 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-897zl" event={"ID":"14fe2256-ce3d-4c70-9e6b-0e3c7d4d96dc","Type":"ContainerStarted","Data":"45ed4d229b5e4ffda3ad9ee3a6c6c79dd79e664c69394337cb1c4fc4b2036f31"} Mar 20 08:39:04.897054 master-0 kubenswrapper[7476]: I0320 08:39:04.896987 7476 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vk98n" podStartSLOduration=1.896971727 podStartE2EDuration="1.896971727s" podCreationTimestamp="2026-03-20 08:39:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:39:04.895356123 +0000 UTC m=+225.864124659" watchObservedRunningTime="2026-03-20 08:39:04.896971727 +0000 UTC m=+225.865740243" Mar 20 08:39:04.920053 master-0 kubenswrapper[7476]: I0320 08:39:04.919976 7476 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-897zl" podStartSLOduration=1.919954285 podStartE2EDuration="1.919954285s" podCreationTimestamp="2026-03-20 08:39:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:39:04.917771444 +0000 UTC m=+225.886539970" watchObservedRunningTime="2026-03-20 08:39:04.919954285 +0000 UTC m=+225.888722821" Mar 20 08:39:09.185761 master-0 kubenswrapper[7476]: I0320 08:39:09.185679 7476 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-machine-config-operator/machine-config-controller-b4f87c5b9-9fl6v"] Mar 20 08:39:09.186836 master-0 kubenswrapper[7476]: I0320 08:39:09.186791 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-9fl6v" Mar 20 08:39:09.193572 master-0 kubenswrapper[7476]: I0320 08:39:09.193487 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-54fh7" Mar 20 08:39:09.193842 master-0 kubenswrapper[7476]: I0320 08:39:09.193751 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 20 08:39:09.248673 master-0 kubenswrapper[7476]: I0320 08:39:09.248578 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f6a6e991-c861-48f5-bfde-78762a037343-proxy-tls\") pod \"machine-config-controller-b4f87c5b9-9fl6v\" (UID: \"f6a6e991-c861-48f5-bfde-78762a037343\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-9fl6v" Mar 20 08:39:09.248942 master-0 kubenswrapper[7476]: I0320 08:39:09.248687 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f6a6e991-c861-48f5-bfde-78762a037343-mcc-auth-proxy-config\") pod \"machine-config-controller-b4f87c5b9-9fl6v\" (UID: \"f6a6e991-c861-48f5-bfde-78762a037343\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-9fl6v" Mar 20 08:39:09.248942 master-0 kubenswrapper[7476]: I0320 08:39:09.248790 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rf9kc\" (UniqueName: \"kubernetes.io/projected/f6a6e991-c861-48f5-bfde-78762a037343-kube-api-access-rf9kc\") pod 
\"machine-config-controller-b4f87c5b9-9fl6v\" (UID: \"f6a6e991-c861-48f5-bfde-78762a037343\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-9fl6v" Mar 20 08:39:09.350036 master-0 kubenswrapper[7476]: I0320 08:39:09.349958 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rf9kc\" (UniqueName: \"kubernetes.io/projected/f6a6e991-c861-48f5-bfde-78762a037343-kube-api-access-rf9kc\") pod \"machine-config-controller-b4f87c5b9-9fl6v\" (UID: \"f6a6e991-c861-48f5-bfde-78762a037343\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-9fl6v" Mar 20 08:39:09.350136 master-0 kubenswrapper[7476]: I0320 08:39:09.350075 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f6a6e991-c861-48f5-bfde-78762a037343-proxy-tls\") pod \"machine-config-controller-b4f87c5b9-9fl6v\" (UID: \"f6a6e991-c861-48f5-bfde-78762a037343\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-9fl6v" Mar 20 08:39:09.350136 master-0 kubenswrapper[7476]: I0320 08:39:09.350128 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f6a6e991-c861-48f5-bfde-78762a037343-mcc-auth-proxy-config\") pod \"machine-config-controller-b4f87c5b9-9fl6v\" (UID: \"f6a6e991-c861-48f5-bfde-78762a037343\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-9fl6v" Mar 20 08:39:09.351877 master-0 kubenswrapper[7476]: I0320 08:39:09.351825 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f6a6e991-c861-48f5-bfde-78762a037343-mcc-auth-proxy-config\") pod \"machine-config-controller-b4f87c5b9-9fl6v\" (UID: \"f6a6e991-c861-48f5-bfde-78762a037343\") " 
pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-9fl6v" Mar 20 08:39:09.355755 master-0 kubenswrapper[7476]: I0320 08:39:09.355698 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f6a6e991-c861-48f5-bfde-78762a037343-proxy-tls\") pod \"machine-config-controller-b4f87c5b9-9fl6v\" (UID: \"f6a6e991-c861-48f5-bfde-78762a037343\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-9fl6v" Mar 20 08:39:09.804292 master-0 kubenswrapper[7476]: I0320 08:39:09.804229 7476 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hj5tl" Mar 20 08:39:09.805060 master-0 kubenswrapper[7476]: I0320 08:39:09.805000 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-b4f87c5b9-9fl6v"] Mar 20 08:39:09.828969 master-0 kubenswrapper[7476]: I0320 08:39:09.828889 7476 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bt7wn" Mar 20 08:39:09.854345 master-0 kubenswrapper[7476]: I0320 08:39:09.854291 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hj5tl" Mar 20 08:39:09.874021 master-0 kubenswrapper[7476]: I0320 08:39:09.873990 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bt7wn" Mar 20 08:39:10.571356 master-0 kubenswrapper[7476]: I0320 08:39:10.571197 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rf9kc\" (UniqueName: \"kubernetes.io/projected/f6a6e991-c861-48f5-bfde-78762a037343-kube-api-access-rf9kc\") pod \"machine-config-controller-b4f87c5b9-9fl6v\" (UID: \"f6a6e991-c861-48f5-bfde-78762a037343\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-9fl6v" Mar 20 
08:39:10.719301 master-0 kubenswrapper[7476]: I0320 08:39:10.717556 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5885bfd7f4-tdpfq_8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072/authentication-operator/0.log" Mar 20 08:39:10.728417 master-0 kubenswrapper[7476]: I0320 08:39:10.727982 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-9fl6v" Mar 20 08:39:11.089079 master-0 kubenswrapper[7476]: I0320 08:39:11.088184 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5885bfd7f4-tdpfq_8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072/authentication-operator/1.log" Mar 20 08:39:11.133943 master-0 kubenswrapper[7476]: I0320 08:39:11.129974 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-5595498c49-hrfrr_6a6a187d-5b25-4d63-939e-c04e07369371/fix-audit-permissions/0.log" Mar 20 08:39:11.154724 master-0 kubenswrapper[7476]: I0320 08:39:11.154651 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-5595498c49-hrfrr_6a6a187d-5b25-4d63-939e-c04e07369371/oauth-apiserver/0.log" Mar 20 08:39:11.179429 master-0 kubenswrapper[7476]: I0320 08:39:11.179149 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-8544cbcf9c-7x9vq_fec3170d-3f3e-42f5-b20a-da53721c0dac/etcd-operator/0.log" Mar 20 08:39:11.184981 master-0 kubenswrapper[7476]: I0320 08:39:11.184947 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-8544cbcf9c-7x9vq_fec3170d-3f3e-42f5-b20a-da53721c0dac/etcd-operator/1.log" Mar 20 08:39:11.201290 master-0 kubenswrapper[7476]: I0320 08:39:11.201158 7476 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/setup/0.log" Mar 20 08:39:11.219290 master-0 kubenswrapper[7476]: I0320 08:39:11.215808 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-ensure-env-vars/0.log" Mar 20 08:39:11.230288 master-0 kubenswrapper[7476]: I0320 08:39:11.227193 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-resources-copy/0.log" Mar 20 08:39:11.254290 master-0 kubenswrapper[7476]: I0320 08:39:11.252146 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcdctl/0.log" Mar 20 08:39:11.272286 master-0 kubenswrapper[7476]: I0320 08:39:11.271797 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd/0.log" Mar 20 08:39:11.310290 master-0 kubenswrapper[7476]: I0320 08:39:11.309859 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-metrics/0.log" Mar 20 08:39:11.335336 master-0 kubenswrapper[7476]: I0320 08:39:11.335282 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-b4f87c5b9-9fl6v"] Mar 20 08:39:11.499818 master-0 kubenswrapper[7476]: I0320 08:39:11.499767 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-readyz/0.log" Mar 20 08:39:11.614515 master-0 kubenswrapper[7476]: I0320 08:39:11.614415 7476 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-source-b4bf74f6-nnjv9"] Mar 20 08:39:11.615203 master-0 kubenswrapper[7476]: I0320 08:39:11.615180 7476 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-kh8bg"] Mar 20 08:39:11.615433 master-0 kubenswrapper[7476]: I0320 08:39:11.615394 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-b4bf74f6-nnjv9" Mar 20 08:39:11.615702 master-0 kubenswrapper[7476]: I0320 08:39:11.615669 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-kh8bg" Mar 20 08:39:11.617251 master-0 kubenswrapper[7476]: I0320 08:39:11.617208 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Mar 20 08:39:11.618776 master-0 kubenswrapper[7476]: I0320 08:39:11.618729 7476 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-7dcf5569b5-kvmtp"] Mar 20 08:39:11.619578 master-0 kubenswrapper[7476]: I0320 08:39:11.619553 7476 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" Mar 20 08:39:11.621753 master-0 kubenswrapper[7476]: I0320 08:39:11.621710 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Mar 20 08:39:11.622407 master-0 kubenswrapper[7476]: I0320 08:39:11.622362 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Mar 20 08:39:11.622470 master-0 kubenswrapper[7476]: I0320 08:39:11.622412 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Mar 20 08:39:11.622609 master-0 kubenswrapper[7476]: I0320 08:39:11.622587 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Mar 20 08:39:11.623923 master-0 kubenswrapper[7476]: I0320 08:39:11.623878 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Mar 20 08:39:11.624162 master-0 kubenswrapper[7476]: I0320 08:39:11.624139 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Mar 20 08:39:11.626473 master-0 kubenswrapper[7476]: I0320 08:39:11.626437 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-kh8bg"] Mar 20 08:39:11.628716 master-0 kubenswrapper[7476]: I0320 08:39:11.628666 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-source-b4bf74f6-nnjv9"] Mar 20 08:39:11.641864 master-0 kubenswrapper[7476]: I0320 08:39:11.641523 7476 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-vzrlt"] Mar 20 08:39:11.642380 master-0 kubenswrapper[7476]: I0320 08:39:11.642341 7476 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-vzrlt" Mar 20 08:39:11.644949 master-0 kubenswrapper[7476]: I0320 08:39:11.644901 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Mar 20 08:39:11.645122 master-0 kubenswrapper[7476]: I0320 08:39:11.645093 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Mar 20 08:39:11.645256 master-0 kubenswrapper[7476]: I0320 08:39:11.645229 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-9mkkw" Mar 20 08:39:11.645550 master-0 kubenswrapper[7476]: I0320 08:39:11.645402 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Mar 20 08:39:11.665026 master-0 kubenswrapper[7476]: I0320 08:39:11.664879 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-vzrlt"] Mar 20 08:39:11.698395 master-0 kubenswrapper[7476]: I0320 08:39:11.698355 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-rev/0.log" Mar 20 08:39:11.739824 master-0 kubenswrapper[7476]: I0320 08:39:11.739753 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/14ef046f-b284-457f-ad7a-b7958cb82dd5-tls-certificates\") pod \"prometheus-operator-admission-webhook-69c6b55594-kh8bg\" (UID: \"14ef046f-b284-457f-ad7a-b7958cb82dd5\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-kh8bg" Mar 20 08:39:11.739824 master-0 kubenswrapper[7476]: I0320 08:39:11.739815 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/e89571b2-098c-495b-9b53-c4ebd95296ab-service-ca-bundle\") pod \"router-default-7dcf5569b5-kvmtp\" (UID: \"e89571b2-098c-495b-9b53-c4ebd95296ab\") " pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" Mar 20 08:39:11.740110 master-0 kubenswrapper[7476]: I0320 08:39:11.739867 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/e89571b2-098c-495b-9b53-c4ebd95296ab-stats-auth\") pod \"router-default-7dcf5569b5-kvmtp\" (UID: \"e89571b2-098c-495b-9b53-c4ebd95296ab\") " pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" Mar 20 08:39:11.740110 master-0 kubenswrapper[7476]: I0320 08:39:11.739902 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/e89571b2-098c-495b-9b53-c4ebd95296ab-default-certificate\") pod \"router-default-7dcf5569b5-kvmtp\" (UID: \"e89571b2-098c-495b-9b53-c4ebd95296ab\") " pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" Mar 20 08:39:11.740110 master-0 kubenswrapper[7476]: I0320 08:39:11.739930 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e89571b2-098c-495b-9b53-c4ebd95296ab-metrics-certs\") pod \"router-default-7dcf5569b5-kvmtp\" (UID: \"e89571b2-098c-495b-9b53-c4ebd95296ab\") " pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" Mar 20 08:39:11.740110 master-0 kubenswrapper[7476]: I0320 08:39:11.739958 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncztx\" (UniqueName: \"kubernetes.io/projected/06bf3fa7-4a9c-4e7f-aa6b-4d4f614ea047-kube-api-access-ncztx\") pod \"network-check-source-b4bf74f6-nnjv9\" (UID: \"06bf3fa7-4a9c-4e7f-aa6b-4d4f614ea047\") " pod="openshift-network-diagnostics/network-check-source-b4bf74f6-nnjv9" Mar 20 
08:39:11.740110 master-0 kubenswrapper[7476]: I0320 08:39:11.739994 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pw6sv\" (UniqueName: \"kubernetes.io/projected/e89571b2-098c-495b-9b53-c4ebd95296ab-kube-api-access-pw6sv\") pod \"router-default-7dcf5569b5-kvmtp\" (UID: \"e89571b2-098c-495b-9b53-c4ebd95296ab\") " pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" Mar 20 08:39:11.841132 master-0 kubenswrapper[7476]: I0320 08:39:11.841074 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e89571b2-098c-495b-9b53-c4ebd95296ab-metrics-certs\") pod \"router-default-7dcf5569b5-kvmtp\" (UID: \"e89571b2-098c-495b-9b53-c4ebd95296ab\") " pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" Mar 20 08:39:11.841385 master-0 kubenswrapper[7476]: I0320 08:39:11.841143 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncztx\" (UniqueName: \"kubernetes.io/projected/06bf3fa7-4a9c-4e7f-aa6b-4d4f614ea047-kube-api-access-ncztx\") pod \"network-check-source-b4bf74f6-nnjv9\" (UID: \"06bf3fa7-4a9c-4e7f-aa6b-4d4f614ea047\") " pod="openshift-network-diagnostics/network-check-source-b4bf74f6-nnjv9" Mar 20 08:39:11.841385 master-0 kubenswrapper[7476]: I0320 08:39:11.841210 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pw6sv\" (UniqueName: \"kubernetes.io/projected/e89571b2-098c-495b-9b53-c4ebd95296ab-kube-api-access-pw6sv\") pod \"router-default-7dcf5569b5-kvmtp\" (UID: \"e89571b2-098c-495b-9b53-c4ebd95296ab\") " pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" Mar 20 08:39:11.841385 master-0 kubenswrapper[7476]: I0320 08:39:11.841242 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmk45\" (UniqueName: 
\"kubernetes.io/projected/4c9b8a0c-1ead-49b5-8b05-68e6f25ddefc-kube-api-access-mmk45\") pod \"ingress-canary-vzrlt\" (UID: \"4c9b8a0c-1ead-49b5-8b05-68e6f25ddefc\") " pod="openshift-ingress-canary/ingress-canary-vzrlt" Mar 20 08:39:11.841385 master-0 kubenswrapper[7476]: I0320 08:39:11.841373 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4c9b8a0c-1ead-49b5-8b05-68e6f25ddefc-cert\") pod \"ingress-canary-vzrlt\" (UID: \"4c9b8a0c-1ead-49b5-8b05-68e6f25ddefc\") " pod="openshift-ingress-canary/ingress-canary-vzrlt" Mar 20 08:39:11.841553 master-0 kubenswrapper[7476]: I0320 08:39:11.841414 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/14ef046f-b284-457f-ad7a-b7958cb82dd5-tls-certificates\") pod \"prometheus-operator-admission-webhook-69c6b55594-kh8bg\" (UID: \"14ef046f-b284-457f-ad7a-b7958cb82dd5\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-kh8bg" Mar 20 08:39:11.841553 master-0 kubenswrapper[7476]: I0320 08:39:11.841469 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e89571b2-098c-495b-9b53-c4ebd95296ab-service-ca-bundle\") pod \"router-default-7dcf5569b5-kvmtp\" (UID: \"e89571b2-098c-495b-9b53-c4ebd95296ab\") " pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" Mar 20 08:39:11.841553 master-0 kubenswrapper[7476]: I0320 08:39:11.841544 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/e89571b2-098c-495b-9b53-c4ebd95296ab-stats-auth\") pod \"router-default-7dcf5569b5-kvmtp\" (UID: \"e89571b2-098c-495b-9b53-c4ebd95296ab\") " pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" Mar 20 08:39:11.841669 master-0 kubenswrapper[7476]: I0320 08:39:11.841583 7476 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/e89571b2-098c-495b-9b53-c4ebd95296ab-default-certificate\") pod \"router-default-7dcf5569b5-kvmtp\" (UID: \"e89571b2-098c-495b-9b53-c4ebd95296ab\") " pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" Mar 20 08:39:11.842636 master-0 kubenswrapper[7476]: I0320 08:39:11.842602 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e89571b2-098c-495b-9b53-c4ebd95296ab-service-ca-bundle\") pod \"router-default-7dcf5569b5-kvmtp\" (UID: \"e89571b2-098c-495b-9b53-c4ebd95296ab\") " pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" Mar 20 08:39:11.844581 master-0 kubenswrapper[7476]: I0320 08:39:11.844531 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e89571b2-098c-495b-9b53-c4ebd95296ab-metrics-certs\") pod \"router-default-7dcf5569b5-kvmtp\" (UID: \"e89571b2-098c-495b-9b53-c4ebd95296ab\") " pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" Mar 20 08:39:11.846380 master-0 kubenswrapper[7476]: I0320 08:39:11.846315 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/e89571b2-098c-495b-9b53-c4ebd95296ab-stats-auth\") pod \"router-default-7dcf5569b5-kvmtp\" (UID: \"e89571b2-098c-495b-9b53-c4ebd95296ab\") " pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" Mar 20 08:39:11.847377 master-0 kubenswrapper[7476]: I0320 08:39:11.847314 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/e89571b2-098c-495b-9b53-c4ebd95296ab-default-certificate\") pod \"router-default-7dcf5569b5-kvmtp\" (UID: \"e89571b2-098c-495b-9b53-c4ebd95296ab\") " pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" Mar 20 08:39:11.855067 master-0 
kubenswrapper[7476]: I0320 08:39:11.855024 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/14ef046f-b284-457f-ad7a-b7958cb82dd5-tls-certificates\") pod \"prometheus-operator-admission-webhook-69c6b55594-kh8bg\" (UID: \"14ef046f-b284-457f-ad7a-b7958cb82dd5\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-kh8bg" Mar 20 08:39:11.864769 master-0 kubenswrapper[7476]: I0320 08:39:11.864699 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncztx\" (UniqueName: \"kubernetes.io/projected/06bf3fa7-4a9c-4e7f-aa6b-4d4f614ea047-kube-api-access-ncztx\") pod \"network-check-source-b4bf74f6-nnjv9\" (UID: \"06bf3fa7-4a9c-4e7f-aa6b-4d4f614ea047\") " pod="openshift-network-diagnostics/network-check-source-b4bf74f6-nnjv9" Mar 20 08:39:11.865118 master-0 kubenswrapper[7476]: I0320 08:39:11.865071 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pw6sv\" (UniqueName: \"kubernetes.io/projected/e89571b2-098c-495b-9b53-c4ebd95296ab-kube-api-access-pw6sv\") pod \"router-default-7dcf5569b5-kvmtp\" (UID: \"e89571b2-098c-495b-9b53-c4ebd95296ab\") " pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" Mar 20 08:39:11.902059 master-0 kubenswrapper[7476]: I0320 08:39:11.901987 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-1-master-0_169353ee-c927-4483-8976-b9ca08b0a6d1/installer/0.log" Mar 20 08:39:11.940487 master-0 kubenswrapper[7476]: I0320 08:39:11.940363 7476 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-b4bf74f6-nnjv9" Mar 20 08:39:11.942609 master-0 kubenswrapper[7476]: I0320 08:39:11.942542 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmk45\" (UniqueName: \"kubernetes.io/projected/4c9b8a0c-1ead-49b5-8b05-68e6f25ddefc-kube-api-access-mmk45\") pod \"ingress-canary-vzrlt\" (UID: \"4c9b8a0c-1ead-49b5-8b05-68e6f25ddefc\") " pod="openshift-ingress-canary/ingress-canary-vzrlt" Mar 20 08:39:11.942702 master-0 kubenswrapper[7476]: I0320 08:39:11.942638 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4c9b8a0c-1ead-49b5-8b05-68e6f25ddefc-cert\") pod \"ingress-canary-vzrlt\" (UID: \"4c9b8a0c-1ead-49b5-8b05-68e6f25ddefc\") " pod="openshift-ingress-canary/ingress-canary-vzrlt" Mar 20 08:39:11.947551 master-0 kubenswrapper[7476]: I0320 08:39:11.947491 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4c9b8a0c-1ead-49b5-8b05-68e6f25ddefc-cert\") pod \"ingress-canary-vzrlt\" (UID: \"4c9b8a0c-1ead-49b5-8b05-68e6f25ddefc\") " pod="openshift-ingress-canary/ingress-canary-vzrlt" Mar 20 08:39:11.951828 master-0 kubenswrapper[7476]: I0320 08:39:11.951696 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-9fl6v" event={"ID":"f6a6e991-c861-48f5-bfde-78762a037343","Type":"ContainerStarted","Data":"b6679dee4c7242c42fbbf5bfbe50ea41b4c18c644485784e958d4094ec76c7b6"} Mar 20 08:39:11.951828 master-0 kubenswrapper[7476]: I0320 08:39:11.951757 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-9fl6v" event={"ID":"f6a6e991-c861-48f5-bfde-78762a037343","Type":"ContainerStarted","Data":"ae2ac69f50f92b147c6e0e54b3efd3a8fd07b958b39ed5539b1432cc17005897"} Mar 20 
08:39:11.951828 master-0 kubenswrapper[7476]: I0320 08:39:11.951798 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-9fl6v" event={"ID":"f6a6e991-c861-48f5-bfde-78762a037343","Type":"ContainerStarted","Data":"54f91a8b386ea81f3c1ff44f7cbcccad1987fab184d5bfad4c46374f7827fa5c"} Mar 20 08:39:11.960549 master-0 kubenswrapper[7476]: I0320 08:39:11.960473 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmk45\" (UniqueName: \"kubernetes.io/projected/4c9b8a0c-1ead-49b5-8b05-68e6f25ddefc-kube-api-access-mmk45\") pod \"ingress-canary-vzrlt\" (UID: \"4c9b8a0c-1ead-49b5-8b05-68e6f25ddefc\") " pod="openshift-ingress-canary/ingress-canary-vzrlt" Mar 20 08:39:11.979328 master-0 kubenswrapper[7476]: I0320 08:39:11.976745 7476 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-9fl6v" podStartSLOduration=4.976715605 podStartE2EDuration="4.976715605s" podCreationTimestamp="2026-03-20 08:39:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:39:11.975348596 +0000 UTC m=+232.944117152" watchObservedRunningTime="2026-03-20 08:39:11.976715605 +0000 UTC m=+232.945484131" Mar 20 08:39:11.981732 master-0 kubenswrapper[7476]: I0320 08:39:11.981675 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-kh8bg" Mar 20 08:39:12.013099 master-0 kubenswrapper[7476]: I0320 08:39:12.013023 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" Mar 20 08:39:12.019363 master-0 kubenswrapper[7476]: I0320 08:39:12.019050 7476 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-vzrlt" Mar 20 08:39:12.070915 master-0 kubenswrapper[7476]: W0320 08:39:12.070862 7476 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode89571b2_098c_495b_9b53_c4ebd95296ab.slice/crio-4e3989004e344d411038c9d1f6a6052a86aa8920b399e1afd650c22f18779f11 WatchSource:0}: Error finding container 4e3989004e344d411038c9d1f6a6052a86aa8920b399e1afd650c22f18779f11: Status 404 returned error can't find the container with id 4e3989004e344d411038c9d1f6a6052a86aa8920b399e1afd650c22f18779f11 Mar 20 08:39:12.101165 master-0 kubenswrapper[7476]: I0320 08:39:12.101121 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-8b68b9d9b-xwkzx_2faf85a2-29bb-4275-a12b-0ef1663a4f0d/kube-apiserver-operator/0.log" Mar 20 08:39:12.200627 master-0 kubenswrapper[7476]: I0320 08:39:12.187335 7476 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 20 08:39:12.315558 master-0 kubenswrapper[7476]: I0320 08:39:12.313905 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-8b68b9d9b-xwkzx_2faf85a2-29bb-4275-a12b-0ef1663a4f0d/kube-apiserver-operator/1.log" Mar 20 08:39:12.429773 master-0 kubenswrapper[7476]: I0320 08:39:12.429653 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-source-b4bf74f6-nnjv9"] Mar 20 08:39:12.437002 master-0 kubenswrapper[7476]: W0320 08:39:12.436949 7476 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod06bf3fa7_4a9c_4e7f_aa6b_4d4f614ea047.slice/crio-14986cdcb6c65fcca4be3c338e4a013796b08052ed9fdf5beaaa06246a8fc6be WatchSource:0}: Error finding container 
14986cdcb6c65fcca4be3c338e4a013796b08052ed9fdf5beaaa06246a8fc6be: Status 404 returned error can't find the container with id 14986cdcb6c65fcca4be3c338e4a013796b08052ed9fdf5beaaa06246a8fc6be Mar 20 08:39:12.510298 master-0 kubenswrapper[7476]: I0320 08:39:12.508838 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_bootstrap-kube-apiserver-master-0_49fac1b46a11e49501805e891baae4a9/setup/0.log" Mar 20 08:39:12.525627 master-0 kubenswrapper[7476]: I0320 08:39:12.525562 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-kh8bg"] Mar 20 08:39:12.588691 master-0 kubenswrapper[7476]: I0320 08:39:12.588638 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-vzrlt"] Mar 20 08:39:12.703776 master-0 kubenswrapper[7476]: I0320 08:39:12.703677 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_bootstrap-kube-apiserver-master-0_49fac1b46a11e49501805e891baae4a9/kube-apiserver/0.log" Mar 20 08:39:12.896417 master-0 kubenswrapper[7476]: I0320 08:39:12.896361 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_bootstrap-kube-apiserver-master-0_49fac1b46a11e49501805e891baae4a9/kube-apiserver-insecure-readyz/0.log" Mar 20 08:39:12.960339 master-0 kubenswrapper[7476]: I0320 08:39:12.959303 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-kh8bg" event={"ID":"14ef046f-b284-457f-ad7a-b7958cb82dd5","Type":"ContainerStarted","Data":"4a62432d7ca6978a89473ee0ca3560d8d6e151e4b44cc680fcbcde36344cda3f"} Mar 20 08:39:12.964558 master-0 kubenswrapper[7476]: I0320 08:39:12.960617 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" 
event={"ID":"e89571b2-098c-495b-9b53-c4ebd95296ab","Type":"ContainerStarted","Data":"4e3989004e344d411038c9d1f6a6052a86aa8920b399e1afd650c22f18779f11"} Mar 20 08:39:12.964558 master-0 kubenswrapper[7476]: I0320 08:39:12.962836 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-vzrlt" event={"ID":"4c9b8a0c-1ead-49b5-8b05-68e6f25ddefc","Type":"ContainerStarted","Data":"42b6664c06d1ffb8c94d13f40ec54767633930df25274e60e5a519f6d8259436"} Mar 20 08:39:12.964558 master-0 kubenswrapper[7476]: I0320 08:39:12.962857 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-vzrlt" event={"ID":"4c9b8a0c-1ead-49b5-8b05-68e6f25ddefc","Type":"ContainerStarted","Data":"b7d9c365d304102d31836e754ae3ccd0da492c6691ee23225b141aea9b82a5d5"} Mar 20 08:39:12.967179 master-0 kubenswrapper[7476]: I0320 08:39:12.965830 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-b4bf74f6-nnjv9" event={"ID":"06bf3fa7-4a9c-4e7f-aa6b-4d4f614ea047","Type":"ContainerStarted","Data":"6d7e46e102c5d86e3216541277d0f646eb01b68f76beed85bd56c65d91b3c2bc"} Mar 20 08:39:12.967179 master-0 kubenswrapper[7476]: I0320 08:39:12.965857 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-b4bf74f6-nnjv9" event={"ID":"06bf3fa7-4a9c-4e7f-aa6b-4d4f614ea047","Type":"ContainerStarted","Data":"14986cdcb6c65fcca4be3c338e4a013796b08052ed9fdf5beaaa06246a8fc6be"} Mar 20 08:39:13.025734 master-0 kubenswrapper[7476]: I0320 08:39:13.025629 7476 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-vzrlt" podStartSLOduration=2.025600556 podStartE2EDuration="2.025600556s" podCreationTimestamp="2026-03-20 08:39:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 
08:39:12.987815212 +0000 UTC m=+233.956583738" watchObservedRunningTime="2026-03-20 08:39:13.025600556 +0000 UTC m=+233.994369082" Mar 20 08:39:13.027108 master-0 kubenswrapper[7476]: I0320 08:39:13.027058 7476 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-diagnostics/network-check-source-b4bf74f6-nnjv9" podStartSLOduration=287.027048357 podStartE2EDuration="4m47.027048357s" podCreationTimestamp="2026-03-20 08:34:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:39:13.021994625 +0000 UTC m=+233.990763151" watchObservedRunningTime="2026-03-20 08:39:13.027048357 +0000 UTC m=+233.995816883" Mar 20 08:39:13.100521 master-0 kubenswrapper[7476]: I0320 08:39:13.100462 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_cce21ae1-63de-49be-a027-084a101e650b/installer/0.log" Mar 20 08:39:13.299520 master-0 kubenswrapper[7476]: I0320 08:39:13.299424 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-2-master-0_3ea52b89-46f9-4685-aecd-162ba92baaf5/installer/0.log" Mar 20 08:39:13.504632 master-0 kubenswrapper[7476]: I0320 08:39:13.504582 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_8c753d068f364b16e3aeb8396b7d9f33/kube-controller-manager/0.log" Mar 20 08:39:13.698325 master-0 kubenswrapper[7476]: I0320 08:39:13.698242 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_8c753d068f364b16e3aeb8396b7d9f33/cluster-policy-controller/0.log" Mar 20 08:39:13.899366 master-0 kubenswrapper[7476]: I0320 08:39:13.899234 7476 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_8c753d068f364b16e3aeb8396b7d9f33/kube-controller-manager-cert-syncer/0.log" Mar 20 08:39:14.098504 master-0 kubenswrapper[7476]: I0320 08:39:14.098423 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_8c753d068f364b16e3aeb8396b7d9f33/kube-controller-manager-recovery-controller/0.log" Mar 20 08:39:14.305984 master-0 kubenswrapper[7476]: I0320 08:39:14.305937 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-ff989d6cc-wbfrm_71ca96e8-5108-455c-bb3c-17977d38e912/kube-controller-manager-operator/0.log" Mar 20 08:39:14.499421 master-0 kubenswrapper[7476]: I0320 08:39:14.499258 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-ff989d6cc-wbfrm_71ca96e8-5108-455c-bb3c-17977d38e912/kube-controller-manager-operator/1.log" Mar 20 08:39:14.593067 master-0 kubenswrapper[7476]: I0320 08:39:14.592934 7476 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-6bd59"] Mar 20 08:39:14.593699 master-0 kubenswrapper[7476]: I0320 08:39:14.593682 7476 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-6bd59" Mar 20 08:39:14.595539 master-0 kubenswrapper[7476]: I0320 08:39:14.595503 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Mar 20 08:39:14.595804 master-0 kubenswrapper[7476]: I0320 08:39:14.595787 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Mar 20 08:39:14.595974 master-0 kubenswrapper[7476]: I0320 08:39:14.595944 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-bcc6b" Mar 20 08:39:14.703424 master-0 kubenswrapper[7476]: I0320 08:39:14.703290 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-scheduler-master-0_c83737980b9ee109184b1d78e942cf36/kube-scheduler/0.log" Mar 20 08:39:14.788024 master-0 kubenswrapper[7476]: I0320 08:39:14.787898 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/a79bf8fb-19fb-4881-b9e3-b5b21fde0e1d-certs\") pod \"machine-config-server-6bd59\" (UID: \"a79bf8fb-19fb-4881-b9e3-b5b21fde0e1d\") " pod="openshift-machine-config-operator/machine-config-server-6bd59" Mar 20 08:39:14.788024 master-0 kubenswrapper[7476]: I0320 08:39:14.787989 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/a79bf8fb-19fb-4881-b9e3-b5b21fde0e1d-node-bootstrap-token\") pod \"machine-config-server-6bd59\" (UID: \"a79bf8fb-19fb-4881-b9e3-b5b21fde0e1d\") " pod="openshift-machine-config-operator/machine-config-server-6bd59" Mar 20 08:39:14.788217 master-0 kubenswrapper[7476]: I0320 08:39:14.788044 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-btwhr\" (UniqueName: \"kubernetes.io/projected/a79bf8fb-19fb-4881-b9e3-b5b21fde0e1d-kube-api-access-btwhr\") pod \"machine-config-server-6bd59\" (UID: \"a79bf8fb-19fb-4881-b9e3-b5b21fde0e1d\") " pod="openshift-machine-config-operator/machine-config-server-6bd59" Mar 20 08:39:14.889931 master-0 kubenswrapper[7476]: I0320 08:39:14.889863 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btwhr\" (UniqueName: \"kubernetes.io/projected/a79bf8fb-19fb-4881-b9e3-b5b21fde0e1d-kube-api-access-btwhr\") pod \"machine-config-server-6bd59\" (UID: \"a79bf8fb-19fb-4881-b9e3-b5b21fde0e1d\") " pod="openshift-machine-config-operator/machine-config-server-6bd59" Mar 20 08:39:14.890131 master-0 kubenswrapper[7476]: I0320 08:39:14.889954 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/a79bf8fb-19fb-4881-b9e3-b5b21fde0e1d-certs\") pod \"machine-config-server-6bd59\" (UID: \"a79bf8fb-19fb-4881-b9e3-b5b21fde0e1d\") " pod="openshift-machine-config-operator/machine-config-server-6bd59" Mar 20 08:39:14.890131 master-0 kubenswrapper[7476]: I0320 08:39:14.890002 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/a79bf8fb-19fb-4881-b9e3-b5b21fde0e1d-node-bootstrap-token\") pod \"machine-config-server-6bd59\" (UID: \"a79bf8fb-19fb-4881-b9e3-b5b21fde0e1d\") " pod="openshift-machine-config-operator/machine-config-server-6bd59" Mar 20 08:39:14.894625 master-0 kubenswrapper[7476]: I0320 08:39:14.894572 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/a79bf8fb-19fb-4881-b9e3-b5b21fde0e1d-certs\") pod \"machine-config-server-6bd59\" (UID: \"a79bf8fb-19fb-4881-b9e3-b5b21fde0e1d\") " pod="openshift-machine-config-operator/machine-config-server-6bd59" Mar 20 08:39:14.896126 master-0 
kubenswrapper[7476]: I0320 08:39:14.896086 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/a79bf8fb-19fb-4881-b9e3-b5b21fde0e1d-node-bootstrap-token\") pod \"machine-config-server-6bd59\" (UID: \"a79bf8fb-19fb-4881-b9e3-b5b21fde0e1d\") " pod="openshift-machine-config-operator/machine-config-server-6bd59" Mar 20 08:39:14.905799 master-0 kubenswrapper[7476]: I0320 08:39:14.905750 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-scheduler-master-0_c83737980b9ee109184b1d78e942cf36/kube-scheduler/1.log" Mar 20 08:39:14.914605 master-0 kubenswrapper[7476]: I0320 08:39:14.914563 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btwhr\" (UniqueName: \"kubernetes.io/projected/a79bf8fb-19fb-4881-b9e3-b5b21fde0e1d-kube-api-access-btwhr\") pod \"machine-config-server-6bd59\" (UID: \"a79bf8fb-19fb-4881-b9e3-b5b21fde0e1d\") " pod="openshift-machine-config-operator/machine-config-server-6bd59" Mar 20 08:39:14.946823 master-0 kubenswrapper[7476]: I0320 08:39:14.946739 7476 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-6bd59" Mar 20 08:39:14.976669 master-0 kubenswrapper[7476]: W0320 08:39:14.976586 7476 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda79bf8fb_19fb_4881_b9e3_b5b21fde0e1d.slice/crio-1c0c62dc18b9dfbe34d230533e11381c4068e1290418832f6c146c6c5c6872ee WatchSource:0}: Error finding container 1c0c62dc18b9dfbe34d230533e11381c4068e1290418832f6c146c6c5c6872ee: Status 404 returned error can't find the container with id 1c0c62dc18b9dfbe34d230533e11381c4068e1290418832f6c146c6c5c6872ee Mar 20 08:39:14.988627 master-0 kubenswrapper[7476]: I0320 08:39:14.986893 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" event={"ID":"e89571b2-098c-495b-9b53-c4ebd95296ab","Type":"ContainerStarted","Data":"9f3b47575a455c1af61754677babc355c6032015c14d444d604fb6bbfbe54a24"} Mar 20 08:39:15.002171 master-0 kubenswrapper[7476]: I0320 08:39:15.002120 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-kh8bg" event={"ID":"14ef046f-b284-457f-ad7a-b7958cb82dd5","Type":"ContainerStarted","Data":"3db3dae8349b6f2fc1d58cbe0c7f2270fa08bb8391e64b4cb41d884ee532ec9d"} Mar 20 08:39:15.002687 master-0 kubenswrapper[7476]: I0320 08:39:15.002636 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-kh8bg" Mar 20 08:39:15.008152 master-0 kubenswrapper[7476]: I0320 08:39:15.008108 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-kh8bg" Mar 20 08:39:15.014341 master-0 kubenswrapper[7476]: I0320 08:39:15.014134 7476 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" Mar 20 08:39:15.017076 master-0 kubenswrapper[7476]: I0320 08:39:15.017025 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:39:15.017076 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:39:15.017076 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:39:15.017076 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:39:15.017324 master-0 kubenswrapper[7476]: I0320 08:39:15.017112 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:39:15.052368 master-0 kubenswrapper[7476]: I0320 08:39:15.052181 7476 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-kh8bg" podStartSLOduration=184.069579393 podStartE2EDuration="3m6.052155685s" podCreationTimestamp="2026-03-20 08:36:09 +0000 UTC" firstStartedPulling="2026-03-20 08:39:12.532165883 +0000 UTC m=+233.500934409" lastFinishedPulling="2026-03-20 08:39:14.514742165 +0000 UTC m=+235.483510701" observedRunningTime="2026-03-20 08:39:15.05194738 +0000 UTC m=+236.020715946" watchObservedRunningTime="2026-03-20 08:39:15.052155685 +0000 UTC m=+236.020924231" Mar 20 08:39:15.054459 master-0 kubenswrapper[7476]: I0320 08:39:15.054400 7476 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podStartSLOduration=200.614555103 podStartE2EDuration="3m23.054391169s" podCreationTimestamp="2026-03-20 08:35:52 +0000 UTC" firstStartedPulling="2026-03-20 08:39:12.07495057 
+0000 UTC m=+233.043719116" lastFinishedPulling="2026-03-20 08:39:14.514786656 +0000 UTC m=+235.483555182" observedRunningTime="2026-03-20 08:39:15.025945008 +0000 UTC m=+235.994713594" watchObservedRunningTime="2026-03-20 08:39:15.054391169 +0000 UTC m=+236.023159705" Mar 20 08:39:15.103981 master-0 kubenswrapper[7476]: I0320 08:39:15.103939 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_84b1b51a-cbfa-42de-9fb8-315e9cb76b58/installer/0.log" Mar 20 08:39:15.298239 master-0 kubenswrapper[7476]: I0320 08:39:15.298165 7476 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-6c8df6d4b-2lwqr"] Mar 20 08:39:15.299351 master-0 kubenswrapper[7476]: I0320 08:39:15.299319 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-6c8df6d4b-2lwqr" Mar 20 08:39:15.301903 master-0 kubenswrapper[7476]: I0320 08:39:15.301848 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Mar 20 08:39:15.304123 master-0 kubenswrapper[7476]: I0320 08:39:15.304036 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Mar 20 08:39:15.304206 master-0 kubenswrapper[7476]: I0320 08:39:15.304140 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Mar 20 08:39:15.304256 master-0 kubenswrapper[7476]: I0320 08:39:15.304216 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-dddff6458-vmwqt_65157a9b-3df7-4cc1-a85a-a5dfa59921ad/kube-scheduler-operator-container/0.log" Mar 20 08:39:15.304970 master-0 kubenswrapper[7476]: I0320 08:39:15.304939 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/0ad95adc-2e0f-4e95-94e7-66e6d240a930-prometheus-operator-tls\") pod \"prometheus-operator-6c8df6d4b-2lwqr\" (UID: \"0ad95adc-2e0f-4e95-94e7-66e6d240a930\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-2lwqr" Mar 20 08:39:15.305047 master-0 kubenswrapper[7476]: I0320 08:39:15.304998 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lnpz\" (UniqueName: \"kubernetes.io/projected/0ad95adc-2e0f-4e95-94e7-66e6d240a930-kube-api-access-5lnpz\") pod \"prometheus-operator-6c8df6d4b-2lwqr\" (UID: \"0ad95adc-2e0f-4e95-94e7-66e6d240a930\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-2lwqr" Mar 20 08:39:15.305047 master-0 kubenswrapper[7476]: I0320 08:39:15.305035 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/0ad95adc-2e0f-4e95-94e7-66e6d240a930-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-6c8df6d4b-2lwqr\" (UID: \"0ad95adc-2e0f-4e95-94e7-66e6d240a930\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-2lwqr" Mar 20 08:39:15.305129 master-0 kubenswrapper[7476]: I0320 08:39:15.305068 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/0ad95adc-2e0f-4e95-94e7-66e6d240a930-metrics-client-ca\") pod \"prometheus-operator-6c8df6d4b-2lwqr\" (UID: \"0ad95adc-2e0f-4e95-94e7-66e6d240a930\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-2lwqr" Mar 20 08:39:15.305965 master-0 kubenswrapper[7476]: I0320 08:39:15.305924 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-vkksc" Mar 20 08:39:15.322514 master-0 kubenswrapper[7476]: I0320 08:39:15.322467 7476 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-6c8df6d4b-2lwqr"] Mar 20 08:39:15.406040 master-0 kubenswrapper[7476]: I0320 08:39:15.405982 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/0ad95adc-2e0f-4e95-94e7-66e6d240a930-metrics-client-ca\") pod \"prometheus-operator-6c8df6d4b-2lwqr\" (UID: \"0ad95adc-2e0f-4e95-94e7-66e6d240a930\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-2lwqr" Mar 20 08:39:15.406476 master-0 kubenswrapper[7476]: I0320 08:39:15.406453 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/0ad95adc-2e0f-4e95-94e7-66e6d240a930-prometheus-operator-tls\") pod \"prometheus-operator-6c8df6d4b-2lwqr\" (UID: \"0ad95adc-2e0f-4e95-94e7-66e6d240a930\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-2lwqr" Mar 20 08:39:15.406619 master-0 kubenswrapper[7476]: I0320 08:39:15.406601 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lnpz\" (UniqueName: \"kubernetes.io/projected/0ad95adc-2e0f-4e95-94e7-66e6d240a930-kube-api-access-5lnpz\") pod \"prometheus-operator-6c8df6d4b-2lwqr\" (UID: \"0ad95adc-2e0f-4e95-94e7-66e6d240a930\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-2lwqr" Mar 20 08:39:15.406744 master-0 kubenswrapper[7476]: I0320 08:39:15.406724 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/0ad95adc-2e0f-4e95-94e7-66e6d240a930-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-6c8df6d4b-2lwqr\" (UID: \"0ad95adc-2e0f-4e95-94e7-66e6d240a930\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-2lwqr" Mar 20 08:39:15.407727 master-0 kubenswrapper[7476]: I0320 08:39:15.407674 7476 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/0ad95adc-2e0f-4e95-94e7-66e6d240a930-metrics-client-ca\") pod \"prometheus-operator-6c8df6d4b-2lwqr\" (UID: \"0ad95adc-2e0f-4e95-94e7-66e6d240a930\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-2lwqr" Mar 20 08:39:15.409484 master-0 kubenswrapper[7476]: I0320 08:39:15.409449 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/0ad95adc-2e0f-4e95-94e7-66e6d240a930-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-6c8df6d4b-2lwqr\" (UID: \"0ad95adc-2e0f-4e95-94e7-66e6d240a930\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-2lwqr" Mar 20 08:39:15.410010 master-0 kubenswrapper[7476]: I0320 08:39:15.409975 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/0ad95adc-2e0f-4e95-94e7-66e6d240a930-prometheus-operator-tls\") pod \"prometheus-operator-6c8df6d4b-2lwqr\" (UID: \"0ad95adc-2e0f-4e95-94e7-66e6d240a930\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-2lwqr" Mar 20 08:39:15.422780 master-0 kubenswrapper[7476]: I0320 08:39:15.422735 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lnpz\" (UniqueName: \"kubernetes.io/projected/0ad95adc-2e0f-4e95-94e7-66e6d240a930-kube-api-access-5lnpz\") pod \"prometheus-operator-6c8df6d4b-2lwqr\" (UID: \"0ad95adc-2e0f-4e95-94e7-66e6d240a930\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-2lwqr" Mar 20 08:39:15.497847 master-0 kubenswrapper[7476]: I0320 08:39:15.497787 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-dddff6458-vmwqt_65157a9b-3df7-4cc1-a85a-a5dfa59921ad/kube-scheduler-operator-container/1.log" Mar 20 08:39:15.624340 master-0 
kubenswrapper[7476]: I0320 08:39:15.624256 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-6c8df6d4b-2lwqr" Mar 20 08:39:15.706612 master-0 kubenswrapper[7476]: I0320 08:39:15.705011 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-d65958b8-ntdqc_2ef691ec-d1f0-4c97-97e4-4aa7a6c0a86c/openshift-apiserver-operator/0.log" Mar 20 08:39:15.899501 master-0 kubenswrapper[7476]: I0320 08:39:15.899341 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-d65958b8-ntdqc_2ef691ec-d1f0-4c97-97e4-4aa7a6c0a86c/openshift-apiserver-operator/1.log" Mar 20 08:39:16.013332 master-0 kubenswrapper[7476]: I0320 08:39:16.013243 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-6bd59" event={"ID":"a79bf8fb-19fb-4881-b9e3-b5b21fde0e1d","Type":"ContainerStarted","Data":"0cbc6ea3aa68035035f3da1cfce1750cdbc80b56e682b2ecd9f2dcdc8b0d9d3c"} Mar 20 08:39:16.013332 master-0 kubenswrapper[7476]: I0320 08:39:16.013334 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-6bd59" event={"ID":"a79bf8fb-19fb-4881-b9e3-b5b21fde0e1d","Type":"ContainerStarted","Data":"1c0c62dc18b9dfbe34d230533e11381c4068e1290418832f6c146c6c5c6872ee"} Mar 20 08:39:16.016583 master-0 kubenswrapper[7476]: I0320 08:39:16.016546 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:39:16.016583 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:39:16.016583 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:39:16.016583 master-0 kubenswrapper[7476]: 
healthz check failed Mar 20 08:39:16.016873 master-0 kubenswrapper[7476]: I0320 08:39:16.016596 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:39:16.038391 master-0 kubenswrapper[7476]: I0320 08:39:16.037411 7476 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-6bd59" podStartSLOduration=2.037391666 podStartE2EDuration="2.037391666s" podCreationTimestamp="2026-03-20 08:39:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:39:16.036149941 +0000 UTC m=+237.004918457" watchObservedRunningTime="2026-03-20 08:39:16.037391666 +0000 UTC m=+237.006160192" Mar 20 08:39:16.050429 master-0 kubenswrapper[7476]: W0320 08:39:16.050339 7476 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ad95adc_2e0f_4e95_94e7_66e6d240a930.slice/crio-0b7224d61042a39a60c82074ae340c4880414bef01c57e7834a8075a7d391421 WatchSource:0}: Error finding container 0b7224d61042a39a60c82074ae340c4880414bef01c57e7834a8075a7d391421: Status 404 returned error can't find the container with id 0b7224d61042a39a60c82074ae340c4880414bef01c57e7834a8075a7d391421 Mar 20 08:39:16.051393 master-0 kubenswrapper[7476]: I0320 08:39:16.051329 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-6c8df6d4b-2lwqr"] Mar 20 08:39:16.096783 master-0 kubenswrapper[7476]: I0320 08:39:16.096695 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-64b65cddf5-gx7h7_ca56e37d-80ea-432b-a6d9-f4e904a40e10/fix-audit-permissions/0.log" Mar 20 08:39:16.301467 master-0 kubenswrapper[7476]: 
I0320 08:39:16.301332 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-64b65cddf5-gx7h7_ca56e37d-80ea-432b-a6d9-f4e904a40e10/openshift-apiserver/0.log" Mar 20 08:39:16.499659 master-0 kubenswrapper[7476]: I0320 08:39:16.499592 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-64b65cddf5-gx7h7_ca56e37d-80ea-432b-a6d9-f4e904a40e10/openshift-apiserver-check-endpoints/0.log" Mar 20 08:39:16.708295 master-0 kubenswrapper[7476]: I0320 08:39:16.705646 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-8544cbcf9c-7x9vq_fec3170d-3f3e-42f5-b20a-da53721c0dac/etcd-operator/0.log" Mar 20 08:39:16.896975 master-0 kubenswrapper[7476]: I0320 08:39:16.896928 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-8544cbcf9c-7x9vq_fec3170d-3f3e-42f5-b20a-da53721c0dac/etcd-operator/1.log" Mar 20 08:39:17.018787 master-0 kubenswrapper[7476]: I0320 08:39:17.018558 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:39:17.018787 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:39:17.018787 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:39:17.018787 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:39:17.018787 master-0 kubenswrapper[7476]: I0320 08:39:17.018609 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:39:17.022105 master-0 kubenswrapper[7476]: I0320 08:39:17.021368 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/prometheus-operator-6c8df6d4b-2lwqr" event={"ID":"0ad95adc-2e0f-4e95-94e7-66e6d240a930","Type":"ContainerStarted","Data":"0b7224d61042a39a60c82074ae340c4880414bef01c57e7834a8075a7d391421"} Mar 20 08:39:17.098111 master-0 kubenswrapper[7476]: I0320 08:39:17.098062 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8c94f4649-w8c24_61ab4d32-c732-4be5-aa85-a2e1dd21cb60/openshift-controller-manager-operator/1.log" Mar 20 08:39:17.298121 master-0 kubenswrapper[7476]: I0320 08:39:17.297985 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8c94f4649-w8c24_61ab4d32-c732-4be5-aa85-a2e1dd21cb60/openshift-controller-manager-operator/2.log" Mar 20 08:39:17.502209 master-0 kubenswrapper[7476]: I0320 08:39:17.502154 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-65b46449cf-9fccc_c200f016-3922-4e90-9061-92fd8c3fad2b/controller-manager/0.log" Mar 20 08:39:17.704625 master-0 kubenswrapper[7476]: I0320 08:39:17.704541 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-65b46449cf-9fccc_c200f016-3922-4e90-9061-92fd8c3fad2b/controller-manager/1.log" Mar 20 08:39:17.905780 master-0 kubenswrapper[7476]: I0320 08:39:17.905703 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-8488874649-cdk48_f67db558-998e-48e3-9b55-b96029ec000c/route-controller-manager/0.log" Mar 20 08:39:18.017116 master-0 kubenswrapper[7476]: I0320 08:39:18.017058 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 
08:39:18.017116 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:39:18.017116 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:39:18.017116 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:39:18.017422 master-0 kubenswrapper[7476]: I0320 08:39:18.017123 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:39:18.033768 master-0 kubenswrapper[7476]: I0320 08:39:18.033694 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-6c8df6d4b-2lwqr" event={"ID":"0ad95adc-2e0f-4e95-94e7-66e6d240a930","Type":"ContainerStarted","Data":"56997bf494f7ffbedb66bcbf6610659e36f8f3fa9ec2d8530300e2d0acb9f78b"} Mar 20 08:39:18.098681 master-0 kubenswrapper[7476]: I0320 08:39:18.098620 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_catalog-operator-68f85b4d6c-hdw98_9ce482dc-d0ac-40bc-9058-a1cfdc81575e/catalog-operator/0.log" Mar 20 08:39:18.302572 master-0 kubenswrapper[7476]: I0320 08:39:18.302338 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_olm-operator-5c9796789-t926t_7ab32efc-7cc5-4e36-9c1c-05efb19914e2/olm-operator/0.log" Mar 20 08:39:18.526998 master-0 kubenswrapper[7476]: I0320 08:39:18.526848 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-7b95f86987-cgc9q_0e79950f-50a5-46ec-b836-7a35dcce2851/kube-rbac-proxy/0.log" Mar 20 08:39:18.701152 master-0 kubenswrapper[7476]: I0320 08:39:18.701098 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-7b95f86987-cgc9q_0e79950f-50a5-46ec-b836-7a35dcce2851/package-server-manager/0.log" Mar 
20 08:39:18.904752 master-0 kubenswrapper[7476]: I0320 08:39:18.904676 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_packageserver-6f5545c99f-6sl9d_0cb6d987-4b59-4fd9-889a-3250c12a726c/packageserver/0.log" Mar 20 08:39:19.016345 master-0 kubenswrapper[7476]: I0320 08:39:19.016219 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:39:19.016345 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:39:19.016345 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:39:19.016345 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:39:19.016345 master-0 kubenswrapper[7476]: I0320 08:39:19.016307 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:39:19.041707 master-0 kubenswrapper[7476]: I0320 08:39:19.041667 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-6c8df6d4b-2lwqr" event={"ID":"0ad95adc-2e0f-4e95-94e7-66e6d240a930","Type":"ContainerStarted","Data":"bf10888ccde1979b427a9f3adbf9a108bfcc6b88d387b1a05a20f1ae280a50fd"} Mar 20 08:39:19.065548 master-0 kubenswrapper[7476]: I0320 08:39:19.065438 7476 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-6c8df6d4b-2lwqr" podStartSLOduration=2.3699786449999998 podStartE2EDuration="4.065415092s" podCreationTimestamp="2026-03-20 08:39:15 +0000 UTC" firstStartedPulling="2026-03-20 08:39:16.052384738 +0000 UTC m=+237.021153264" lastFinishedPulling="2026-03-20 08:39:17.747821185 +0000 UTC m=+238.716589711" 
observedRunningTime="2026-03-20 08:39:19.057860939 +0000 UTC m=+240.026629505" watchObservedRunningTime="2026-03-20 08:39:19.065415092 +0000 UTC m=+240.034183658" Mar 20 08:39:20.019219 master-0 kubenswrapper[7476]: I0320 08:39:20.019093 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:39:20.019219 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:39:20.019219 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:39:20.019219 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:39:20.019938 master-0 kubenswrapper[7476]: I0320 08:39:20.019229 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:39:20.669209 master-0 kubenswrapper[7476]: I0320 08:39:20.669143 7476 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-5dc6c74576-qclrg"] Mar 20 08:39:20.670865 master-0 kubenswrapper[7476]: I0320 08:39:20.670828 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-qclrg" Mar 20 08:39:20.671653 master-0 kubenswrapper[7476]: I0320 08:39:20.671608 7476 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-7bbc969446-28l2x"] Mar 20 08:39:20.672662 master-0 kubenswrapper[7476]: I0320 08:39:20.672632 7476 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7bbc969446-28l2x" Mar 20 08:39:20.673042 master-0 kubenswrapper[7476]: I0320 08:39:20.673017 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-6825q" Mar 20 08:39:20.676550 master-0 kubenswrapper[7476]: I0320 08:39:20.676521 7476 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-rzg98"] Mar 20 08:39:20.677659 master-0 kubenswrapper[7476]: I0320 08:39:20.677645 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-rzg98" Mar 20 08:39:20.679156 master-0 kubenswrapper[7476]: I0320 08:39:20.679121 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Mar 20 08:39:20.679796 master-0 kubenswrapper[7476]: I0320 08:39:20.679648 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Mar 20 08:39:20.679796 master-0 kubenswrapper[7476]: I0320 08:39:20.679649 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-vlmsv" Mar 20 08:39:20.679957 master-0 kubenswrapper[7476]: I0320 08:39:20.679946 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Mar 20 08:39:20.683047 master-0 kubenswrapper[7476]: I0320 08:39:20.683027 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Mar 20 08:39:20.684425 master-0 kubenswrapper[7476]: I0320 08:39:20.684409 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Mar 20 08:39:20.686650 master-0 kubenswrapper[7476]: I0320 08:39:20.686621 7476 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Mar 20 08:39:20.686819 master-0 kubenswrapper[7476]: I0320 08:39:20.686803 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-dbtrl" Mar 20 08:39:20.687364 master-0 kubenswrapper[7476]: I0320 08:39:20.687347 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Mar 20 08:39:20.706952 master-0 kubenswrapper[7476]: I0320 08:39:20.706894 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-5dc6c74576-qclrg"] Mar 20 08:39:20.717044 master-0 kubenswrapper[7476]: I0320 08:39:20.716995 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-7bbc969446-28l2x"] Mar 20 08:39:20.805147 master-0 kubenswrapper[7476]: I0320 08:39:20.805090 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/123f1ecb-cc03-462b-b76f-7251bf69d3d6-node-exporter-textfile\") pod \"node-exporter-rzg98\" (UID: \"123f1ecb-cc03-462b-b76f-7251bf69d3d6\") " pod="openshift-monitoring/node-exporter-rzg98" Mar 20 08:39:20.805414 master-0 kubenswrapper[7476]: I0320 08:39:20.805168 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/123f1ecb-cc03-462b-b76f-7251bf69d3d6-metrics-client-ca\") pod \"node-exporter-rzg98\" (UID: \"123f1ecb-cc03-462b-b76f-7251bf69d3d6\") " pod="openshift-monitoring/node-exporter-rzg98" Mar 20 08:39:20.805414 master-0 kubenswrapper[7476]: I0320 08:39:20.805192 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: 
\"kubernetes.io/secret/44bc88d8-9e01-4521-a704-85d9ca095baa-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7bbc969446-28l2x\" (UID: \"44bc88d8-9e01-4521-a704-85d9ca095baa\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-28l2x" Mar 20 08:39:20.805414 master-0 kubenswrapper[7476]: I0320 08:39:20.805214 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/44bc88d8-9e01-4521-a704-85d9ca095baa-metrics-client-ca\") pod \"kube-state-metrics-7bbc969446-28l2x\" (UID: \"44bc88d8-9e01-4521-a704-85d9ca095baa\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-28l2x" Mar 20 08:39:20.805414 master-0 kubenswrapper[7476]: I0320 08:39:20.805231 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/44bc88d8-9e01-4521-a704-85d9ca095baa-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7bbc969446-28l2x\" (UID: \"44bc88d8-9e01-4521-a704-85d9ca095baa\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-28l2x" Mar 20 08:39:20.805414 master-0 kubenswrapper[7476]: I0320 08:39:20.805251 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/d5b579a3-74b5-4cd2-ae99-5c1d1480c2f0-openshift-state-metrics-tls\") pod \"openshift-state-metrics-5dc6c74576-qclrg\" (UID: \"d5b579a3-74b5-4cd2-ae99-5c1d1480c2f0\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-qclrg" Mar 20 08:39:20.805414 master-0 kubenswrapper[7476]: I0320 08:39:20.805282 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/123f1ecb-cc03-462b-b76f-7251bf69d3d6-node-exporter-tls\") 
pod \"node-exporter-rzg98\" (UID: \"123f1ecb-cc03-462b-b76f-7251bf69d3d6\") " pod="openshift-monitoring/node-exporter-rzg98" Mar 20 08:39:20.805414 master-0 kubenswrapper[7476]: I0320 08:39:20.805306 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/123f1ecb-cc03-462b-b76f-7251bf69d3d6-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-rzg98\" (UID: \"123f1ecb-cc03-462b-b76f-7251bf69d3d6\") " pod="openshift-monitoring/node-exporter-rzg98" Mar 20 08:39:20.805414 master-0 kubenswrapper[7476]: I0320 08:39:20.805330 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/44bc88d8-9e01-4521-a704-85d9ca095baa-kube-state-metrics-tls\") pod \"kube-state-metrics-7bbc969446-28l2x\" (UID: \"44bc88d8-9e01-4521-a704-85d9ca095baa\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-28l2x" Mar 20 08:39:20.805414 master-0 kubenswrapper[7476]: I0320 08:39:20.805358 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/44bc88d8-9e01-4521-a704-85d9ca095baa-volume-directive-shadow\") pod \"kube-state-metrics-7bbc969446-28l2x\" (UID: \"44bc88d8-9e01-4521-a704-85d9ca095baa\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-28l2x" Mar 20 08:39:20.805414 master-0 kubenswrapper[7476]: I0320 08:39:20.805378 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdqzn\" (UniqueName: \"kubernetes.io/projected/44bc88d8-9e01-4521-a704-85d9ca095baa-kube-api-access-hdqzn\") pod \"kube-state-metrics-7bbc969446-28l2x\" (UID: \"44bc88d8-9e01-4521-a704-85d9ca095baa\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-28l2x" Mar 20 08:39:20.805414 master-0 
kubenswrapper[7476]: I0320 08:39:20.805395 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/123f1ecb-cc03-462b-b76f-7251bf69d3d6-root\") pod \"node-exporter-rzg98\" (UID: \"123f1ecb-cc03-462b-b76f-7251bf69d3d6\") " pod="openshift-monitoring/node-exporter-rzg98"
Mar 20 08:39:20.805414 master-0 kubenswrapper[7476]: I0320 08:39:20.805408 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/123f1ecb-cc03-462b-b76f-7251bf69d3d6-sys\") pod \"node-exporter-rzg98\" (UID: \"123f1ecb-cc03-462b-b76f-7251bf69d3d6\") " pod="openshift-monitoring/node-exporter-rzg98"
Mar 20 08:39:20.805897 master-0 kubenswrapper[7476]: I0320 08:39:20.805429 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d5b579a3-74b5-4cd2-ae99-5c1d1480c2f0-metrics-client-ca\") pod \"openshift-state-metrics-5dc6c74576-qclrg\" (UID: \"d5b579a3-74b5-4cd2-ae99-5c1d1480c2f0\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-qclrg"
Mar 20 08:39:20.805897 master-0 kubenswrapper[7476]: I0320 08:39:20.805451 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfgfz\" (UniqueName: \"kubernetes.io/projected/d5b579a3-74b5-4cd2-ae99-5c1d1480c2f0-kube-api-access-tfgfz\") pod \"openshift-state-metrics-5dc6c74576-qclrg\" (UID: \"d5b579a3-74b5-4cd2-ae99-5c1d1480c2f0\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-qclrg"
Mar 20 08:39:20.805897 master-0 kubenswrapper[7476]: I0320 08:39:20.805468 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/d5b579a3-74b5-4cd2-ae99-5c1d1480c2f0-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-5dc6c74576-qclrg\" (UID: \"d5b579a3-74b5-4cd2-ae99-5c1d1480c2f0\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-qclrg"
Mar 20 08:39:20.805897 master-0 kubenswrapper[7476]: I0320 08:39:20.805609 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtt44\" (UniqueName: \"kubernetes.io/projected/123f1ecb-cc03-462b-b76f-7251bf69d3d6-kube-api-access-dtt44\") pod \"node-exporter-rzg98\" (UID: \"123f1ecb-cc03-462b-b76f-7251bf69d3d6\") " pod="openshift-monitoring/node-exporter-rzg98"
Mar 20 08:39:20.805897 master-0 kubenswrapper[7476]: I0320 08:39:20.805695 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/123f1ecb-cc03-462b-b76f-7251bf69d3d6-node-exporter-wtmp\") pod \"node-exporter-rzg98\" (UID: \"123f1ecb-cc03-462b-b76f-7251bf69d3d6\") " pod="openshift-monitoring/node-exporter-rzg98"
Mar 20 08:39:20.907318 master-0 kubenswrapper[7476]: I0320 08:39:20.907249 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/44bc88d8-9e01-4521-a704-85d9ca095baa-volume-directive-shadow\") pod \"kube-state-metrics-7bbc969446-28l2x\" (UID: \"44bc88d8-9e01-4521-a704-85d9ca095baa\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-28l2x"
Mar 20 08:39:20.907318 master-0 kubenswrapper[7476]: I0320 08:39:20.907317 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdqzn\" (UniqueName: \"kubernetes.io/projected/44bc88d8-9e01-4521-a704-85d9ca095baa-kube-api-access-hdqzn\") pod \"kube-state-metrics-7bbc969446-28l2x\" (UID: \"44bc88d8-9e01-4521-a704-85d9ca095baa\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-28l2x"
Mar 20 08:39:20.907531 master-0 kubenswrapper[7476]: I0320 08:39:20.907336 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/123f1ecb-cc03-462b-b76f-7251bf69d3d6-root\") pod \"node-exporter-rzg98\" (UID: \"123f1ecb-cc03-462b-b76f-7251bf69d3d6\") " pod="openshift-monitoring/node-exporter-rzg98"
Mar 20 08:39:20.907531 master-0 kubenswrapper[7476]: I0320 08:39:20.907353 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/123f1ecb-cc03-462b-b76f-7251bf69d3d6-sys\") pod \"node-exporter-rzg98\" (UID: \"123f1ecb-cc03-462b-b76f-7251bf69d3d6\") " pod="openshift-monitoring/node-exporter-rzg98"
Mar 20 08:39:20.907531 master-0 kubenswrapper[7476]: I0320 08:39:20.907498 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d5b579a3-74b5-4cd2-ae99-5c1d1480c2f0-metrics-client-ca\") pod \"openshift-state-metrics-5dc6c74576-qclrg\" (UID: \"d5b579a3-74b5-4cd2-ae99-5c1d1480c2f0\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-qclrg"
Mar 20 08:39:20.907611 master-0 kubenswrapper[7476]: I0320 08:39:20.907561 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/123f1ecb-cc03-462b-b76f-7251bf69d3d6-sys\") pod \"node-exporter-rzg98\" (UID: \"123f1ecb-cc03-462b-b76f-7251bf69d3d6\") " pod="openshift-monitoring/node-exporter-rzg98"
Mar 20 08:39:20.907802 master-0 kubenswrapper[7476]: I0320 08:39:20.907766 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/123f1ecb-cc03-462b-b76f-7251bf69d3d6-root\") pod \"node-exporter-rzg98\" (UID: \"123f1ecb-cc03-462b-b76f-7251bf69d3d6\") " pod="openshift-monitoring/node-exporter-rzg98"
Mar 20 08:39:20.907919 master-0 kubenswrapper[7476]: I0320 08:39:20.907863 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfgfz\" (UniqueName: \"kubernetes.io/projected/d5b579a3-74b5-4cd2-ae99-5c1d1480c2f0-kube-api-access-tfgfz\") pod \"openshift-state-metrics-5dc6c74576-qclrg\" (UID: \"d5b579a3-74b5-4cd2-ae99-5c1d1480c2f0\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-qclrg"
Mar 20 08:39:20.907919 master-0 kubenswrapper[7476]: I0320 08:39:20.907887 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/44bc88d8-9e01-4521-a704-85d9ca095baa-volume-directive-shadow\") pod \"kube-state-metrics-7bbc969446-28l2x\" (UID: \"44bc88d8-9e01-4521-a704-85d9ca095baa\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-28l2x"
Mar 20 08:39:20.908178 master-0 kubenswrapper[7476]: I0320 08:39:20.908144 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/d5b579a3-74b5-4cd2-ae99-5c1d1480c2f0-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-5dc6c74576-qclrg\" (UID: \"d5b579a3-74b5-4cd2-ae99-5c1d1480c2f0\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-qclrg"
Mar 20 08:39:20.908256 master-0 kubenswrapper[7476]: I0320 08:39:20.908219 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtt44\" (UniqueName: \"kubernetes.io/projected/123f1ecb-cc03-462b-b76f-7251bf69d3d6-kube-api-access-dtt44\") pod \"node-exporter-rzg98\" (UID: \"123f1ecb-cc03-462b-b76f-7251bf69d3d6\") " pod="openshift-monitoring/node-exporter-rzg98"
Mar 20 08:39:20.908492 master-0 kubenswrapper[7476]: I0320 08:39:20.908466 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/123f1ecb-cc03-462b-b76f-7251bf69d3d6-node-exporter-wtmp\") pod \"node-exporter-rzg98\" (UID: \"123f1ecb-cc03-462b-b76f-7251bf69d3d6\") " pod="openshift-monitoring/node-exporter-rzg98"
Mar 20 08:39:20.908536 master-0 kubenswrapper[7476]: I0320 08:39:20.908487 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d5b579a3-74b5-4cd2-ae99-5c1d1480c2f0-metrics-client-ca\") pod \"openshift-state-metrics-5dc6c74576-qclrg\" (UID: \"d5b579a3-74b5-4cd2-ae99-5c1d1480c2f0\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-qclrg"
Mar 20 08:39:20.908536 master-0 kubenswrapper[7476]: I0320 08:39:20.908512 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/123f1ecb-cc03-462b-b76f-7251bf69d3d6-node-exporter-textfile\") pod \"node-exporter-rzg98\" (UID: \"123f1ecb-cc03-462b-b76f-7251bf69d3d6\") " pod="openshift-monitoring/node-exporter-rzg98"
Mar 20 08:39:20.908536 master-0 kubenswrapper[7476]: I0320 08:39:20.908487 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/123f1ecb-cc03-462b-b76f-7251bf69d3d6-node-exporter-wtmp\") pod \"node-exporter-rzg98\" (UID: \"123f1ecb-cc03-462b-b76f-7251bf69d3d6\") " pod="openshift-monitoring/node-exporter-rzg98"
Mar 20 08:39:20.908658 master-0 kubenswrapper[7476]: I0320 08:39:20.908552 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/123f1ecb-cc03-462b-b76f-7251bf69d3d6-metrics-client-ca\") pod \"node-exporter-rzg98\" (UID: \"123f1ecb-cc03-462b-b76f-7251bf69d3d6\") " pod="openshift-monitoring/node-exporter-rzg98"
Mar 20 08:39:20.908658 master-0 kubenswrapper[7476]: I0320 08:39:20.908582 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/44bc88d8-9e01-4521-a704-85d9ca095baa-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7bbc969446-28l2x\" (UID: \"44bc88d8-9e01-4521-a704-85d9ca095baa\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-28l2x"
Mar 20 08:39:20.908710 master-0 kubenswrapper[7476]: I0320 08:39:20.908656 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/44bc88d8-9e01-4521-a704-85d9ca095baa-metrics-client-ca\") pod \"kube-state-metrics-7bbc969446-28l2x\" (UID: \"44bc88d8-9e01-4521-a704-85d9ca095baa\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-28l2x"
Mar 20 08:39:20.908710 master-0 kubenswrapper[7476]: I0320 08:39:20.908689 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/44bc88d8-9e01-4521-a704-85d9ca095baa-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7bbc969446-28l2x\" (UID: \"44bc88d8-9e01-4521-a704-85d9ca095baa\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-28l2x"
Mar 20 08:39:20.908767 master-0 kubenswrapper[7476]: I0320 08:39:20.908718 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/d5b579a3-74b5-4cd2-ae99-5c1d1480c2f0-openshift-state-metrics-tls\") pod \"openshift-state-metrics-5dc6c74576-qclrg\" (UID: \"d5b579a3-74b5-4cd2-ae99-5c1d1480c2f0\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-qclrg"
Mar 20 08:39:20.908767 master-0 kubenswrapper[7476]: I0320 08:39:20.908747 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/123f1ecb-cc03-462b-b76f-7251bf69d3d6-node-exporter-tls\") pod \"node-exporter-rzg98\" (UID: \"123f1ecb-cc03-462b-b76f-7251bf69d3d6\") " pod="openshift-monitoring/node-exporter-rzg98"
Mar 20 08:39:20.908821 master-0 kubenswrapper[7476]: I0320 08:39:20.908779 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/123f1ecb-cc03-462b-b76f-7251bf69d3d6-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-rzg98\" (UID: \"123f1ecb-cc03-462b-b76f-7251bf69d3d6\") " pod="openshift-monitoring/node-exporter-rzg98"
Mar 20 08:39:20.908821 master-0 kubenswrapper[7476]: I0320 08:39:20.908806 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/44bc88d8-9e01-4521-a704-85d9ca095baa-kube-state-metrics-tls\") pod \"kube-state-metrics-7bbc969446-28l2x\" (UID: \"44bc88d8-9e01-4521-a704-85d9ca095baa\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-28l2x"
Mar 20 08:39:20.908876 master-0 kubenswrapper[7476]: I0320 08:39:20.908865 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/123f1ecb-cc03-462b-b76f-7251bf69d3d6-node-exporter-textfile\") pod \"node-exporter-rzg98\" (UID: \"123f1ecb-cc03-462b-b76f-7251bf69d3d6\") " pod="openshift-monitoring/node-exporter-rzg98"
Mar 20 08:39:20.909642 master-0 kubenswrapper[7476]: E0320 08:39:20.908919 7476 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: secret "kube-state-metrics-tls" not found
Mar 20 08:39:20.909642 master-0 kubenswrapper[7476]: E0320 08:39:20.908981 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44bc88d8-9e01-4521-a704-85d9ca095baa-kube-state-metrics-tls podName:44bc88d8-9e01-4521-a704-85d9ca095baa nodeName:}" failed. No retries permitted until 2026-03-20 08:39:21.408965179 +0000 UTC m=+242.377733705 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/44bc88d8-9e01-4521-a704-85d9ca095baa-kube-state-metrics-tls") pod "kube-state-metrics-7bbc969446-28l2x" (UID: "44bc88d8-9e01-4521-a704-85d9ca095baa") : secret "kube-state-metrics-tls" not found
Mar 20 08:39:20.909642 master-0 kubenswrapper[7476]: E0320 08:39:20.909196 7476 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-tls: secret "node-exporter-tls" not found
Mar 20 08:39:20.909642 master-0 kubenswrapper[7476]: I0320 08:39:20.909211 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/123f1ecb-cc03-462b-b76f-7251bf69d3d6-metrics-client-ca\") pod \"node-exporter-rzg98\" (UID: \"123f1ecb-cc03-462b-b76f-7251bf69d3d6\") " pod="openshift-monitoring/node-exporter-rzg98"
Mar 20 08:39:20.909642 master-0 kubenswrapper[7476]: E0320 08:39:20.909240 7476 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/123f1ecb-cc03-462b-b76f-7251bf69d3d6-node-exporter-tls podName:123f1ecb-cc03-462b-b76f-7251bf69d3d6 nodeName:}" failed. No retries permitted until 2026-03-20 08:39:21.409228446 +0000 UTC m=+242.377997072 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-exporter-tls" (UniqueName: "kubernetes.io/secret/123f1ecb-cc03-462b-b76f-7251bf69d3d6-node-exporter-tls") pod "node-exporter-rzg98" (UID: "123f1ecb-cc03-462b-b76f-7251bf69d3d6") : secret "node-exporter-tls" not found
Mar 20 08:39:20.909642 master-0 kubenswrapper[7476]: I0320 08:39:20.909332 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/44bc88d8-9e01-4521-a704-85d9ca095baa-metrics-client-ca\") pod \"kube-state-metrics-7bbc969446-28l2x\" (UID: \"44bc88d8-9e01-4521-a704-85d9ca095baa\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-28l2x"
Mar 20 08:39:20.909875 master-0 kubenswrapper[7476]: I0320 08:39:20.909816 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/44bc88d8-9e01-4521-a704-85d9ca095baa-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7bbc969446-28l2x\" (UID: \"44bc88d8-9e01-4521-a704-85d9ca095baa\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-28l2x"
Mar 20 08:39:20.912514 master-0 kubenswrapper[7476]: I0320 08:39:20.912363 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/d5b579a3-74b5-4cd2-ae99-5c1d1480c2f0-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-5dc6c74576-qclrg\" (UID: \"d5b579a3-74b5-4cd2-ae99-5c1d1480c2f0\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-qclrg"
Mar 20 08:39:20.912514 master-0 kubenswrapper[7476]: I0320 08:39:20.912479 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/d5b579a3-74b5-4cd2-ae99-5c1d1480c2f0-openshift-state-metrics-tls\") pod \"openshift-state-metrics-5dc6c74576-qclrg\" (UID: \"d5b579a3-74b5-4cd2-ae99-5c1d1480c2f0\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-qclrg"
Mar 20 08:39:20.913481 master-0 kubenswrapper[7476]: I0320 08:39:20.913451 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/123f1ecb-cc03-462b-b76f-7251bf69d3d6-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-rzg98\" (UID: \"123f1ecb-cc03-462b-b76f-7251bf69d3d6\") " pod="openshift-monitoring/node-exporter-rzg98"
Mar 20 08:39:20.917599 master-0 kubenswrapper[7476]: I0320 08:39:20.916849 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/44bc88d8-9e01-4521-a704-85d9ca095baa-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7bbc969446-28l2x\" (UID: \"44bc88d8-9e01-4521-a704-85d9ca095baa\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-28l2x"
Mar 20 08:39:20.926786 master-0 kubenswrapper[7476]: I0320 08:39:20.926682 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdqzn\" (UniqueName: \"kubernetes.io/projected/44bc88d8-9e01-4521-a704-85d9ca095baa-kube-api-access-hdqzn\") pod \"kube-state-metrics-7bbc969446-28l2x\" (UID: \"44bc88d8-9e01-4521-a704-85d9ca095baa\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-28l2x"
Mar 20 08:39:20.928964 master-0 kubenswrapper[7476]: I0320 08:39:20.928920 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtt44\" (UniqueName: \"kubernetes.io/projected/123f1ecb-cc03-462b-b76f-7251bf69d3d6-kube-api-access-dtt44\") pod \"node-exporter-rzg98\" (UID: \"123f1ecb-cc03-462b-b76f-7251bf69d3d6\") " pod="openshift-monitoring/node-exporter-rzg98"
Mar 20 08:39:20.936365 master-0 kubenswrapper[7476]: I0320 08:39:20.936327 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfgfz\" (UniqueName: \"kubernetes.io/projected/d5b579a3-74b5-4cd2-ae99-5c1d1480c2f0-kube-api-access-tfgfz\") pod \"openshift-state-metrics-5dc6c74576-qclrg\" (UID: \"d5b579a3-74b5-4cd2-ae99-5c1d1480c2f0\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-qclrg"
Mar 20 08:39:20.991356 master-0 kubenswrapper[7476]: I0320 08:39:20.988152 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-qclrg"
Mar 20 08:39:21.019316 master-0 kubenswrapper[7476]: I0320 08:39:21.016509 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:39:21.019316 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:39:21.019316 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:39:21.019316 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:39:21.019316 master-0 kubenswrapper[7476]: I0320 08:39:21.016556 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:39:21.395529 master-0 kubenswrapper[7476]: I0320 08:39:21.395164 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-5dc6c74576-qclrg"]
Mar 20 08:39:21.414975 master-0 kubenswrapper[7476]: I0320 08:39:21.414918 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/123f1ecb-cc03-462b-b76f-7251bf69d3d6-node-exporter-tls\") pod \"node-exporter-rzg98\" (UID: \"123f1ecb-cc03-462b-b76f-7251bf69d3d6\") " pod="openshift-monitoring/node-exporter-rzg98"
Mar 20 08:39:21.414975 master-0 kubenswrapper[7476]: I0320 08:39:21.414976 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/44bc88d8-9e01-4521-a704-85d9ca095baa-kube-state-metrics-tls\") pod \"kube-state-metrics-7bbc969446-28l2x\" (UID: \"44bc88d8-9e01-4521-a704-85d9ca095baa\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-28l2x"
Mar 20 08:39:21.417867 master-0 kubenswrapper[7476]: I0320 08:39:21.417837 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/44bc88d8-9e01-4521-a704-85d9ca095baa-kube-state-metrics-tls\") pod \"kube-state-metrics-7bbc969446-28l2x\" (UID: \"44bc88d8-9e01-4521-a704-85d9ca095baa\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-28l2x"
Mar 20 08:39:21.419456 master-0 kubenswrapper[7476]: I0320 08:39:21.419416 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/123f1ecb-cc03-462b-b76f-7251bf69d3d6-node-exporter-tls\") pod \"node-exporter-rzg98\" (UID: \"123f1ecb-cc03-462b-b76f-7251bf69d3d6\") " pod="openshift-monitoring/node-exporter-rzg98"
Mar 20 08:39:21.638661 master-0 kubenswrapper[7476]: I0320 08:39:21.638594 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-7bbc969446-28l2x"
Mar 20 08:39:21.651987 master-0 kubenswrapper[7476]: I0320 08:39:21.651884 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-rzg98"
Mar 20 08:39:21.676802 master-0 kubenswrapper[7476]: W0320 08:39:21.676737 7476 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod123f1ecb_cc03_462b_b76f_7251bf69d3d6.slice/crio-cc5631c5f457937102021d26dc57a94d8eb433d4f0008126fd2dc1af0f5f1218 WatchSource:0}: Error finding container cc5631c5f457937102021d26dc57a94d8eb433d4f0008126fd2dc1af0f5f1218: Status 404 returned error can't find the container with id cc5631c5f457937102021d26dc57a94d8eb433d4f0008126fd2dc1af0f5f1218
Mar 20 08:39:22.014986 master-0 kubenswrapper[7476]: I0320 08:39:22.013995 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp"
Mar 20 08:39:22.016169 master-0 kubenswrapper[7476]: I0320 08:39:22.016128 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:39:22.016169 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:39:22.016169 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:39:22.016169 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:39:22.016456 master-0 kubenswrapper[7476]: I0320 08:39:22.016210 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:39:22.059192 master-0 kubenswrapper[7476]: I0320 08:39:22.059089 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-7bbc969446-28l2x"]
Mar 20 08:39:22.065371 master-0 kubenswrapper[7476]: W0320 08:39:22.065301 7476 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod44bc88d8_9e01_4521_a704_85d9ca095baa.slice/crio-68a469f8af4eca3cd7046b1dcc688320cbfedeec29ee252f144fe6c1f8fce66a WatchSource:0}: Error finding container 68a469f8af4eca3cd7046b1dcc688320cbfedeec29ee252f144fe6c1f8fce66a: Status 404 returned error can't find the container with id 68a469f8af4eca3cd7046b1dcc688320cbfedeec29ee252f144fe6c1f8fce66a
Mar 20 08:39:22.065489 master-0 kubenswrapper[7476]: I0320 08:39:22.065385 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-rzg98" event={"ID":"123f1ecb-cc03-462b-b76f-7251bf69d3d6","Type":"ContainerStarted","Data":"cc5631c5f457937102021d26dc57a94d8eb433d4f0008126fd2dc1af0f5f1218"}
Mar 20 08:39:22.068867 master-0 kubenswrapper[7476]: I0320 08:39:22.068654 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-qclrg" event={"ID":"d5b579a3-74b5-4cd2-ae99-5c1d1480c2f0","Type":"ContainerStarted","Data":"fef2a33ba0f77ba9f48caf8a72fb3567bbb02f7cff7f70d80acb4acac86e7062"}
Mar 20 08:39:22.068867 master-0 kubenswrapper[7476]: I0320 08:39:22.068710 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-qclrg" event={"ID":"d5b579a3-74b5-4cd2-ae99-5c1d1480c2f0","Type":"ContainerStarted","Data":"75ee30752038facd89d76f05a1b5b8d9abb32492d825eaef487a4eb2de3b955c"}
Mar 20 08:39:22.068867 master-0 kubenswrapper[7476]: I0320 08:39:22.068724 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-qclrg" event={"ID":"d5b579a3-74b5-4cd2-ae99-5c1d1480c2f0","Type":"ContainerStarted","Data":"64ca7ad287a18077a9681b1e546ec20fe155067ef4ae153360b9f6ad5ecbcb02"}
Mar 20 08:39:23.017057 master-0 kubenswrapper[7476]: I0320 08:39:23.016990 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:39:23.017057 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:39:23.017057 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:39:23.017057 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:39:23.019185 master-0 kubenswrapper[7476]: I0320 08:39:23.017089 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:39:23.080182 master-0 kubenswrapper[7476]: I0320 08:39:23.080088 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7bbc969446-28l2x" event={"ID":"44bc88d8-9e01-4521-a704-85d9ca095baa","Type":"ContainerStarted","Data":"68a469f8af4eca3cd7046b1dcc688320cbfedeec29ee252f144fe6c1f8fce66a"}
Mar 20 08:39:24.017086 master-0 kubenswrapper[7476]: I0320 08:39:24.017016 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:39:24.017086 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:39:24.017086 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:39:24.017086 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:39:24.017556 master-0 kubenswrapper[7476]: I0320 08:39:24.017126 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:39:24.090400 master-0 kubenswrapper[7476]: I0320 08:39:24.090357 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-qclrg" event={"ID":"d5b579a3-74b5-4cd2-ae99-5c1d1480c2f0","Type":"ContainerStarted","Data":"8d518eacb100580b01b9095670b6acba2810ec52aaec3061b31829e5e84f61cd"}
Mar 20 08:39:24.092733 master-0 kubenswrapper[7476]: I0320 08:39:24.092684 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7bbc969446-28l2x" event={"ID":"44bc88d8-9e01-4521-a704-85d9ca095baa","Type":"ContainerStarted","Data":"5b029982bb8223e2d49a4aec4d3e62ad49f8bc617c5cc9b42609f637cba43a3d"}
Mar 20 08:39:24.092817 master-0 kubenswrapper[7476]: I0320 08:39:24.092745 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7bbc969446-28l2x" event={"ID":"44bc88d8-9e01-4521-a704-85d9ca095baa","Type":"ContainerStarted","Data":"9adde0990da1601e7c45d9ff5871aad1c483c142165792a1910a5516c06340cd"}
Mar 20 08:39:24.094151 master-0 kubenswrapper[7476]: I0320 08:39:24.094092 7476 generic.go:334] "Generic (PLEG): container finished" podID="123f1ecb-cc03-462b-b76f-7251bf69d3d6" containerID="96058a0b48f5954e1e280e02b2139f100552b410ebee73d3b0fd6e4aa44bd764" exitCode=0
Mar 20 08:39:24.094151 master-0 kubenswrapper[7476]: I0320 08:39:24.094128 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-rzg98" event={"ID":"123f1ecb-cc03-462b-b76f-7251bf69d3d6","Type":"ContainerDied","Data":"96058a0b48f5954e1e280e02b2139f100552b410ebee73d3b0fd6e4aa44bd764"}
Mar 20 08:39:24.120473 master-0 kubenswrapper[7476]: I0320 08:39:24.120281 7476 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-qclrg" podStartSLOduration=2.203574719 podStartE2EDuration="4.120243234s" podCreationTimestamp="2026-03-20 08:39:20 +0000 UTC" firstStartedPulling="2026-03-20 08:39:21.622918991 +0000 UTC m=+242.591687527" lastFinishedPulling="2026-03-20 08:39:23.539587496 +0000 UTC m=+244.508356042" observedRunningTime="2026-03-20 08:39:24.118143906 +0000 UTC m=+245.086912452" watchObservedRunningTime="2026-03-20 08:39:24.120243234 +0000 UTC m=+245.089011770"
Mar 20 08:39:25.017153 master-0 kubenswrapper[7476]: I0320 08:39:25.017089 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:39:25.017153 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:39:25.017153 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:39:25.017153 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:39:25.017856 master-0 kubenswrapper[7476]: I0320 08:39:25.017175 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:39:25.106892 master-0 kubenswrapper[7476]: I0320 08:39:25.106784 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7bbc969446-28l2x" event={"ID":"44bc88d8-9e01-4521-a704-85d9ca095baa","Type":"ContainerStarted","Data":"b54793d2ae72eaa686a000eb046e04a8533997e94a640f1f1144e3a41428dff5"}
Mar 20 08:39:25.109822 master-0 kubenswrapper[7476]: I0320 08:39:25.109787 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-rzg98" event={"ID":"123f1ecb-cc03-462b-b76f-7251bf69d3d6","Type":"ContainerStarted","Data":"041d536d7a31ab427a13e6da1dbb01874ab9eb6236af8cb0e9a5a4754e2a0ca5"}
Mar 20 08:39:25.109998 master-0 kubenswrapper[7476]: I0320 08:39:25.109828 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-rzg98" event={"ID":"123f1ecb-cc03-462b-b76f-7251bf69d3d6","Type":"ContainerStarted","Data":"62e3d29fd4bdc39b630fb30c9d703d2124f8b51733ff477225f86e593bad914c"}
Mar 20 08:39:25.148218 master-0 kubenswrapper[7476]: I0320 08:39:25.148079 7476 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-7bbc969446-28l2x" podStartSLOduration=3.611978724 podStartE2EDuration="5.148042583s" podCreationTimestamp="2026-03-20 08:39:20 +0000 UTC" firstStartedPulling="2026-03-20 08:39:22.068537547 +0000 UTC m=+243.037306083" lastFinishedPulling="2026-03-20 08:39:23.604601416 +0000 UTC m=+244.573369942" observedRunningTime="2026-03-20 08:39:25.140197272 +0000 UTC m=+246.108965838" watchObservedRunningTime="2026-03-20 08:39:25.148042583 +0000 UTC m=+246.116811149"
Mar 20 08:39:25.184744 master-0 kubenswrapper[7476]: I0320 08:39:25.184663 7476 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-rzg98" podStartSLOduration=3.329937264 podStartE2EDuration="5.184643924s" podCreationTimestamp="2026-03-20 08:39:20 +0000 UTC" firstStartedPulling="2026-03-20 08:39:21.685647237 +0000 UTC m=+242.654415763" lastFinishedPulling="2026-03-20 08:39:23.540353897 +0000 UTC m=+244.509122423" observedRunningTime="2026-03-20 08:39:25.182599477 +0000 UTC m=+246.151368003" watchObservedRunningTime="2026-03-20 08:39:25.184643924 +0000 UTC m=+246.153412450"
Mar 20 08:39:26.020225 master-0 kubenswrapper[7476]: I0320 08:39:26.020143 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:39:26.020225 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:39:26.020225 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:39:26.020225 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:39:26.021186 master-0 kubenswrapper[7476]: I0320 08:39:26.020257 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:39:26.139393 master-0 kubenswrapper[7476]: I0320 08:39:26.139293 7476 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-55d84d7794-56n4c"]
Mar 20 08:39:26.141963 master-0 kubenswrapper[7476]: I0320 08:39:26.140499 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-55d84d7794-56n4c"
Mar 20 08:39:26.145423 master-0 kubenswrapper[7476]: I0320 08:39:26.143853 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles"
Mar 20 08:39:26.145423 master-0 kubenswrapper[7476]: I0320 08:39:26.144073 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs"
Mar 20 08:39:26.145423 master-0 kubenswrapper[7476]: I0320 08:39:26.144468 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls"
Mar 20 08:39:26.145423 master-0 kubenswrapper[7476]: I0320 08:39:26.145368 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-kwmwv"
Mar 20 08:39:26.145968 master-0 kubenswrapper[7476]: I0320 08:39:26.145914 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle"
Mar 20 08:39:26.146783 master-0 kubenswrapper[7476]: I0320 08:39:26.146736 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-7i2lh8fo12r60"
Mar 20 08:39:26.147523 master-0 kubenswrapper[7476]: I0320 08:39:26.147474 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-55d84d7794-56n4c"]
Mar 20 08:39:26.291404 master-0 kubenswrapper[7476]: I0320 08:39:26.291277 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/04466971-127b-403e-af45-dad97b6e0c87-metrics-server-audit-profiles\") pod \"metrics-server-55d84d7794-56n4c\" (UID: \"04466971-127b-403e-af45-dad97b6e0c87\") " pod="openshift-monitoring/metrics-server-55d84d7794-56n4c"
Mar 20 08:39:26.291404 master-0 kubenswrapper[7476]: I0320 08:39:26.291358 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/04466971-127b-403e-af45-dad97b6e0c87-audit-log\") pod \"metrics-server-55d84d7794-56n4c\" (UID: \"04466971-127b-403e-af45-dad97b6e0c87\") " pod="openshift-monitoring/metrics-server-55d84d7794-56n4c"
Mar 20 08:39:26.291646 master-0 kubenswrapper[7476]: I0320 08:39:26.291434 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04466971-127b-403e-af45-dad97b6e0c87-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-55d84d7794-56n4c\" (UID: \"04466971-127b-403e-af45-dad97b6e0c87\") " pod="openshift-monitoring/metrics-server-55d84d7794-56n4c"
Mar 20 08:39:26.291646 master-0 kubenswrapper[7476]: I0320 08:39:26.291474 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/04466971-127b-403e-af45-dad97b6e0c87-secret-metrics-server-tls\") pod \"metrics-server-55d84d7794-56n4c\" (UID: \"04466971-127b-403e-af45-dad97b6e0c87\") " pod="openshift-monitoring/metrics-server-55d84d7794-56n4c"
Mar 20 08:39:26.291646 master-0 kubenswrapper[7476]: I0320 08:39:26.291510 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04466971-127b-403e-af45-dad97b6e0c87-client-ca-bundle\") pod \"metrics-server-55d84d7794-56n4c\" (UID: \"04466971-127b-403e-af45-dad97b6e0c87\") " pod="openshift-monitoring/metrics-server-55d84d7794-56n4c"
Mar 20 08:39:26.298410 master-0 kubenswrapper[7476]: I0320 08:39:26.291563 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkh2f\" (UniqueName: \"kubernetes.io/projected/04466971-127b-403e-af45-dad97b6e0c87-kube-api-access-wkh2f\") pod \"metrics-server-55d84d7794-56n4c\" (UID: \"04466971-127b-403e-af45-dad97b6e0c87\") " pod="openshift-monitoring/metrics-server-55d84d7794-56n4c"
Mar 20 08:39:26.298410 master-0 kubenswrapper[7476]: I0320 08:39:26.297866 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/04466971-127b-403e-af45-dad97b6e0c87-secret-metrics-client-certs\") pod \"metrics-server-55d84d7794-56n4c\" (UID: \"04466971-127b-403e-af45-dad97b6e0c87\") " pod="openshift-monitoring/metrics-server-55d84d7794-56n4c"
Mar 20 08:39:26.399554 master-0 kubenswrapper[7476]: I0320 08:39:26.399466 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04466971-127b-403e-af45-dad97b6e0c87-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-55d84d7794-56n4c\" (UID: \"04466971-127b-403e-af45-dad97b6e0c87\") " pod="openshift-monitoring/metrics-server-55d84d7794-56n4c"
Mar 20 08:39:26.399554 master-0 kubenswrapper[7476]: I0320 08:39:26.399565 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\"
(UniqueName: \"kubernetes.io/secret/04466971-127b-403e-af45-dad97b6e0c87-secret-metrics-server-tls\") pod \"metrics-server-55d84d7794-56n4c\" (UID: \"04466971-127b-403e-af45-dad97b6e0c87\") " pod="openshift-monitoring/metrics-server-55d84d7794-56n4c" Mar 20 08:39:26.399894 master-0 kubenswrapper[7476]: I0320 08:39:26.399780 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04466971-127b-403e-af45-dad97b6e0c87-client-ca-bundle\") pod \"metrics-server-55d84d7794-56n4c\" (UID: \"04466971-127b-403e-af45-dad97b6e0c87\") " pod="openshift-monitoring/metrics-server-55d84d7794-56n4c" Mar 20 08:39:26.400658 master-0 kubenswrapper[7476]: I0320 08:39:26.400581 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkh2f\" (UniqueName: \"kubernetes.io/projected/04466971-127b-403e-af45-dad97b6e0c87-kube-api-access-wkh2f\") pod \"metrics-server-55d84d7794-56n4c\" (UID: \"04466971-127b-403e-af45-dad97b6e0c87\") " pod="openshift-monitoring/metrics-server-55d84d7794-56n4c" Mar 20 08:39:26.400767 master-0 kubenswrapper[7476]: I0320 08:39:26.400694 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04466971-127b-403e-af45-dad97b6e0c87-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-55d84d7794-56n4c\" (UID: \"04466971-127b-403e-af45-dad97b6e0c87\") " pod="openshift-monitoring/metrics-server-55d84d7794-56n4c" Mar 20 08:39:26.400928 master-0 kubenswrapper[7476]: I0320 08:39:26.400875 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/04466971-127b-403e-af45-dad97b6e0c87-secret-metrics-client-certs\") pod \"metrics-server-55d84d7794-56n4c\" (UID: \"04466971-127b-403e-af45-dad97b6e0c87\") " pod="openshift-monitoring/metrics-server-55d84d7794-56n4c" Mar 
20 08:39:26.402626 master-0 kubenswrapper[7476]: I0320 08:39:26.401090 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/04466971-127b-403e-af45-dad97b6e0c87-metrics-server-audit-profiles\") pod \"metrics-server-55d84d7794-56n4c\" (UID: \"04466971-127b-403e-af45-dad97b6e0c87\") " pod="openshift-monitoring/metrics-server-55d84d7794-56n4c" Mar 20 08:39:26.402626 master-0 kubenswrapper[7476]: I0320 08:39:26.401172 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/04466971-127b-403e-af45-dad97b6e0c87-audit-log\") pod \"metrics-server-55d84d7794-56n4c\" (UID: \"04466971-127b-403e-af45-dad97b6e0c87\") " pod="openshift-monitoring/metrics-server-55d84d7794-56n4c" Mar 20 08:39:26.402626 master-0 kubenswrapper[7476]: I0320 08:39:26.402486 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/04466971-127b-403e-af45-dad97b6e0c87-audit-log\") pod \"metrics-server-55d84d7794-56n4c\" (UID: \"04466971-127b-403e-af45-dad97b6e0c87\") " pod="openshift-monitoring/metrics-server-55d84d7794-56n4c" Mar 20 08:39:26.403077 master-0 kubenswrapper[7476]: I0320 08:39:26.402724 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/04466971-127b-403e-af45-dad97b6e0c87-metrics-server-audit-profiles\") pod \"metrics-server-55d84d7794-56n4c\" (UID: \"04466971-127b-403e-af45-dad97b6e0c87\") " pod="openshift-monitoring/metrics-server-55d84d7794-56n4c" Mar 20 08:39:26.404199 master-0 kubenswrapper[7476]: I0320 08:39:26.404015 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/04466971-127b-403e-af45-dad97b6e0c87-secret-metrics-server-tls\") pod 
\"metrics-server-55d84d7794-56n4c\" (UID: \"04466971-127b-403e-af45-dad97b6e0c87\") " pod="openshift-monitoring/metrics-server-55d84d7794-56n4c" Mar 20 08:39:26.405843 master-0 kubenswrapper[7476]: I0320 08:39:26.405798 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/04466971-127b-403e-af45-dad97b6e0c87-secret-metrics-client-certs\") pod \"metrics-server-55d84d7794-56n4c\" (UID: \"04466971-127b-403e-af45-dad97b6e0c87\") " pod="openshift-monitoring/metrics-server-55d84d7794-56n4c" Mar 20 08:39:26.406634 master-0 kubenswrapper[7476]: I0320 08:39:26.406574 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04466971-127b-403e-af45-dad97b6e0c87-client-ca-bundle\") pod \"metrics-server-55d84d7794-56n4c\" (UID: \"04466971-127b-403e-af45-dad97b6e0c87\") " pod="openshift-monitoring/metrics-server-55d84d7794-56n4c" Mar 20 08:39:26.421553 master-0 kubenswrapper[7476]: I0320 08:39:26.421489 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkh2f\" (UniqueName: \"kubernetes.io/projected/04466971-127b-403e-af45-dad97b6e0c87-kube-api-access-wkh2f\") pod \"metrics-server-55d84d7794-56n4c\" (UID: \"04466971-127b-403e-af45-dad97b6e0c87\") " pod="openshift-monitoring/metrics-server-55d84d7794-56n4c" Mar 20 08:39:26.472293 master-0 kubenswrapper[7476]: I0320 08:39:26.472150 7476 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-55d84d7794-56n4c" Mar 20 08:39:26.949703 master-0 kubenswrapper[7476]: I0320 08:39:26.945641 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-55d84d7794-56n4c"] Mar 20 08:39:26.953661 master-0 kubenswrapper[7476]: W0320 08:39:26.953603 7476 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod04466971_127b_403e_af45_dad97b6e0c87.slice/crio-46c99f0233d1af208b38b52f2ff5b680b12b4851bb3db1577a37ab4de1879e97 WatchSource:0}: Error finding container 46c99f0233d1af208b38b52f2ff5b680b12b4851bb3db1577a37ab4de1879e97: Status 404 returned error can't find the container with id 46c99f0233d1af208b38b52f2ff5b680b12b4851bb3db1577a37ab4de1879e97 Mar 20 08:39:27.016541 master-0 kubenswrapper[7476]: I0320 08:39:27.016468 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:39:27.016541 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:39:27.016541 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:39:27.016541 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:39:27.017106 master-0 kubenswrapper[7476]: I0320 08:39:27.016570 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:39:27.127236 master-0 kubenswrapper[7476]: I0320 08:39:27.127169 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-55d84d7794-56n4c" 
event={"ID":"04466971-127b-403e-af45-dad97b6e0c87","Type":"ContainerStarted","Data":"46c99f0233d1af208b38b52f2ff5b680b12b4851bb3db1577a37ab4de1879e97"} Mar 20 08:39:28.016307 master-0 kubenswrapper[7476]: I0320 08:39:28.016161 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:39:28.016307 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:39:28.016307 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:39:28.016307 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:39:28.016799 master-0 kubenswrapper[7476]: I0320 08:39:28.016367 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:39:29.018462 master-0 kubenswrapper[7476]: I0320 08:39:29.018395 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:39:29.018462 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:39:29.018462 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:39:29.018462 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:39:29.019628 master-0 kubenswrapper[7476]: I0320 08:39:29.018480 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:39:30.016972 master-0 kubenswrapper[7476]: I0320 
08:39:30.016899 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:39:30.016972 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:39:30.016972 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:39:30.016972 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:39:30.017236 master-0 kubenswrapper[7476]: I0320 08:39:30.016986 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:39:30.150508 master-0 kubenswrapper[7476]: I0320 08:39:30.150393 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-55d84d7794-56n4c" event={"ID":"04466971-127b-403e-af45-dad97b6e0c87","Type":"ContainerStarted","Data":"b842607819c12e2c961d4115971433a287618e424b9b5e836fdeed85d90e9244"} Mar 20 08:39:30.183212 master-0 kubenswrapper[7476]: I0320 08:39:30.183099 7476 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-55d84d7794-56n4c" podStartSLOduration=2.109878296 podStartE2EDuration="4.183079349s" podCreationTimestamp="2026-03-20 08:39:26 +0000 UTC" firstStartedPulling="2026-03-20 08:39:26.95786481 +0000 UTC m=+247.926633346" lastFinishedPulling="2026-03-20 08:39:29.031065843 +0000 UTC m=+249.999834399" observedRunningTime="2026-03-20 08:39:30.180562838 +0000 UTC m=+251.149331444" watchObservedRunningTime="2026-03-20 08:39:30.183079349 +0000 UTC m=+251.151847885" Mar 20 08:39:31.019088 master-0 kubenswrapper[7476]: I0320 08:39:31.018968 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:39:31.019088 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:39:31.019088 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:39:31.019088 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:39:31.019631 master-0 kubenswrapper[7476]: I0320 08:39:31.019098 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:39:32.016403 master-0 kubenswrapper[7476]: I0320 08:39:32.016336 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:39:32.016403 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:39:32.016403 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:39:32.016403 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:39:32.017572 master-0 kubenswrapper[7476]: I0320 08:39:32.016417 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:39:33.017393 master-0 kubenswrapper[7476]: I0320 08:39:33.017324 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:39:33.017393 master-0 kubenswrapper[7476]: 
[-]has-synced failed: reason withheld Mar 20 08:39:33.017393 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:39:33.017393 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:39:33.018618 master-0 kubenswrapper[7476]: I0320 08:39:33.017404 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:39:34.016354 master-0 kubenswrapper[7476]: I0320 08:39:34.016247 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:39:34.016354 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:39:34.016354 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:39:34.016354 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:39:34.016676 master-0 kubenswrapper[7476]: I0320 08:39:34.016373 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:39:35.017319 master-0 kubenswrapper[7476]: I0320 08:39:35.017227 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:39:35.017319 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:39:35.017319 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:39:35.017319 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:39:35.017830 master-0 
kubenswrapper[7476]: I0320 08:39:35.017316 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:39:36.017375 master-0 kubenswrapper[7476]: I0320 08:39:36.017296 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:39:36.017375 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:39:36.017375 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:39:36.017375 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:39:36.017375 master-0 kubenswrapper[7476]: I0320 08:39:36.017371 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:39:37.017531 master-0 kubenswrapper[7476]: I0320 08:39:37.017460 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:39:37.017531 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:39:37.017531 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:39:37.017531 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:39:37.018343 master-0 kubenswrapper[7476]: I0320 08:39:37.017560 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:39:38.018647 master-0 kubenswrapper[7476]: I0320 08:39:38.018080 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:39:38.018647 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:39:38.018647 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:39:38.018647 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:39:38.018647 master-0 kubenswrapper[7476]: I0320 08:39:38.018153 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:39:39.017514 master-0 kubenswrapper[7476]: I0320 08:39:39.017411 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:39:39.017514 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:39:39.017514 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:39:39.017514 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:39:39.018354 master-0 kubenswrapper[7476]: I0320 08:39:39.017512 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:39:40.017168 master-0 kubenswrapper[7476]: I0320 08:39:40.017107 7476 patch_prober.go:28] interesting 
pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:39:40.017168 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:39:40.017168 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:39:40.017168 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:39:40.017168 master-0 kubenswrapper[7476]: I0320 08:39:40.017168 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:39:41.017988 master-0 kubenswrapper[7476]: I0320 08:39:41.017857 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:39:41.017988 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:39:41.017988 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:39:41.017988 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:39:41.019141 master-0 kubenswrapper[7476]: I0320 08:39:41.017987 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:39:42.016785 master-0 kubenswrapper[7476]: I0320 08:39:42.016697 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 
08:39:42.016785 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:39:42.016785 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:39:42.016785 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:39:42.017097 master-0 kubenswrapper[7476]: I0320 08:39:42.016801 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:39:43.017985 master-0 kubenswrapper[7476]: I0320 08:39:43.017874 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:39:43.017985 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:39:43.017985 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:39:43.017985 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:39:43.017985 master-0 kubenswrapper[7476]: I0320 08:39:43.017950 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:39:44.018171 master-0 kubenswrapper[7476]: I0320 08:39:44.018083 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:39:44.018171 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:39:44.018171 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:39:44.018171 master-0 kubenswrapper[7476]: healthz 
check failed Mar 20 08:39:44.019442 master-0 kubenswrapper[7476]: I0320 08:39:44.018196 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:39:45.018027 master-0 kubenswrapper[7476]: I0320 08:39:45.017947 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:39:45.018027 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:39:45.018027 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:39:45.018027 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:39:45.018608 master-0 kubenswrapper[7476]: I0320 08:39:45.018062 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:39:46.017470 master-0 kubenswrapper[7476]: I0320 08:39:46.017369 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:39:46.017470 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:39:46.017470 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:39:46.017470 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:39:46.018003 master-0 kubenswrapper[7476]: I0320 08:39:46.017474 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" 
podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:39:46.473036 master-0 kubenswrapper[7476]: I0320 08:39:46.472860 7476 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-55d84d7794-56n4c" Mar 20 08:39:46.473036 master-0 kubenswrapper[7476]: I0320 08:39:46.472985 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-55d84d7794-56n4c" Mar 20 08:39:47.019512 master-0 kubenswrapper[7476]: I0320 08:39:47.018595 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:39:47.019512 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:39:47.019512 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:39:47.019512 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:39:47.019512 master-0 kubenswrapper[7476]: I0320 08:39:47.018714 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:39:48.016935 master-0 kubenswrapper[7476]: I0320 08:39:48.016806 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:39:48.016935 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:39:48.016935 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:39:48.016935 master-0 kubenswrapper[7476]: healthz check 
failed Mar 20 08:39:48.017634 master-0 kubenswrapper[7476]: I0320 08:39:48.016957 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:39:49.017448 master-0 kubenswrapper[7476]: I0320 08:39:49.017363 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:39:49.017448 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:39:49.017448 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:39:49.017448 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:39:49.018525 master-0 kubenswrapper[7476]: I0320 08:39:49.017479 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:39:50.017189 master-0 kubenswrapper[7476]: I0320 08:39:50.017130 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:39:50.017189 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:39:50.017189 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:39:50.017189 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:39:50.018055 master-0 kubenswrapper[7476]: I0320 08:39:50.017218 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" 
podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:39:51.017185 master-0 kubenswrapper[7476]: I0320 08:39:51.017107 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:39:51.017185 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:39:51.017185 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:39:51.017185 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:39:51.017582 master-0 kubenswrapper[7476]: I0320 08:39:51.017200 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:39:52.018317 master-0 kubenswrapper[7476]: I0320 08:39:52.018218 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:39:52.018317 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:39:52.018317 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:39:52.018317 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:39:52.018317 master-0 kubenswrapper[7476]: I0320 08:39:52.018313 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:39:53.016690 master-0 kubenswrapper[7476]: I0320 08:39:53.016618 7476 
patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:39:53.016690 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:39:53.016690 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:39:53.016690 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:39:53.017217 master-0 kubenswrapper[7476]: I0320 08:39:53.016726 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:39:54.017187 master-0 kubenswrapper[7476]: I0320 08:39:54.017106 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:39:54.017187 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:39:54.017187 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:39:54.017187 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:39:54.017853 master-0 kubenswrapper[7476]: I0320 08:39:54.017235 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:39:55.017833 master-0 kubenswrapper[7476]: I0320 08:39:55.017682 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 20 08:39:55.017833 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:39:55.017833 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:39:55.017833 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:39:55.018910 master-0 kubenswrapper[7476]: I0320 08:39:55.017863 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:39:56.017600 master-0 kubenswrapper[7476]: I0320 08:39:56.017466 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:39:56.017600 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:39:56.017600 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:39:56.017600 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:39:56.018538 master-0 kubenswrapper[7476]: I0320 08:39:56.017644 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:39:57.017095 master-0 kubenswrapper[7476]: I0320 08:39:57.017018 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:39:57.017095 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:39:57.017095 master-0 kubenswrapper[7476]: [+]process-running ok 
Mar 20 08:39:57.017095 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:39:57.017440 master-0 kubenswrapper[7476]: I0320 08:39:57.017110 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:39:58.018084 master-0 kubenswrapper[7476]: I0320 08:39:58.018019 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:39:58.018084 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:39:58.018084 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:39:58.018084 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:39:58.018819 master-0 kubenswrapper[7476]: I0320 08:39:58.018096 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:39:59.016749 master-0 kubenswrapper[7476]: I0320 08:39:59.016653 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:39:59.016749 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:39:59.016749 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:39:59.016749 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:39:59.017142 master-0 kubenswrapper[7476]: I0320 08:39:59.016756 7476 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:40:00.017144 master-0 kubenswrapper[7476]: I0320 08:40:00.017083 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:40:00.017144 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:40:00.017144 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:40:00.017144 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:40:00.017780 master-0 kubenswrapper[7476]: I0320 08:40:00.017169 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:40:01.016836 master-0 kubenswrapper[7476]: I0320 08:40:01.016740 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:40:01.016836 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:40:01.016836 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:40:01.016836 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:40:01.017733 master-0 kubenswrapper[7476]: I0320 08:40:01.016835 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:40:02.016141 
master-0 kubenswrapper[7476]: I0320 08:40:02.016079 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:40:02.016141 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:40:02.016141 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:40:02.016141 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:40:02.016638 master-0 kubenswrapper[7476]: I0320 08:40:02.016141 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:40:03.017071 master-0 kubenswrapper[7476]: I0320 08:40:03.017000 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:40:03.017071 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:40:03.017071 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:40:03.017071 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:40:03.018129 master-0 kubenswrapper[7476]: I0320 08:40:03.017083 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:40:04.017647 master-0 kubenswrapper[7476]: I0320 08:40:04.017569 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe 
failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:40:04.017647 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:40:04.017647 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:40:04.017647 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:40:04.018829 master-0 kubenswrapper[7476]: I0320 08:40:04.018498 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:40:05.017374 master-0 kubenswrapper[7476]: I0320 08:40:05.017306 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:40:05.017374 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:40:05.017374 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:40:05.017374 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:40:05.017374 master-0 kubenswrapper[7476]: I0320 08:40:05.017363 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:40:06.017781 master-0 kubenswrapper[7476]: I0320 08:40:06.017657 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:40:06.017781 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:40:06.017781 master-0 
kubenswrapper[7476]: [+]process-running ok Mar 20 08:40:06.017781 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:40:06.018645 master-0 kubenswrapper[7476]: I0320 08:40:06.017797 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:40:06.480838 master-0 kubenswrapper[7476]: I0320 08:40:06.480737 7476 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-55d84d7794-56n4c" Mar 20 08:40:06.489169 master-0 kubenswrapper[7476]: I0320 08:40:06.489081 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-55d84d7794-56n4c" Mar 20 08:40:07.020167 master-0 kubenswrapper[7476]: I0320 08:40:07.019645 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:40:07.020167 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:40:07.020167 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:40:07.020167 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:40:07.020167 master-0 kubenswrapper[7476]: I0320 08:40:07.019724 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:40:08.017318 master-0 kubenswrapper[7476]: I0320 08:40:08.017228 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed 
with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:40:08.017318 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:40:08.017318 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:40:08.017318 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:40:08.018065 master-0 kubenswrapper[7476]: I0320 08:40:08.017338 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:40:09.017718 master-0 kubenswrapper[7476]: I0320 08:40:09.017600 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:40:09.017718 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:40:09.017718 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:40:09.017718 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:40:09.017718 master-0 kubenswrapper[7476]: I0320 08:40:09.017704 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:40:10.017662 master-0 kubenswrapper[7476]: I0320 08:40:10.017574 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:40:10.017662 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:40:10.017662 master-0 kubenswrapper[7476]: 
[+]process-running ok Mar 20 08:40:10.017662 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:40:10.018346 master-0 kubenswrapper[7476]: I0320 08:40:10.017678 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:40:11.016452 master-0 kubenswrapper[7476]: I0320 08:40:11.016366 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:40:11.016452 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:40:11.016452 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:40:11.016452 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:40:11.016899 master-0 kubenswrapper[7476]: I0320 08:40:11.016482 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:40:12.016793 master-0 kubenswrapper[7476]: I0320 08:40:12.016701 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:40:12.016793 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:40:12.016793 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:40:12.016793 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:40:12.017722 master-0 kubenswrapper[7476]: I0320 08:40:12.016790 7476 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:40:13.016512 master-0 kubenswrapper[7476]: I0320 08:40:13.016427 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:40:13.016512 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:40:13.016512 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:40:13.016512 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:40:13.017509 master-0 kubenswrapper[7476]: I0320 08:40:13.016529 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:40:14.051188 master-0 kubenswrapper[7476]: I0320 08:40:14.051027 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:40:14.051188 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:40:14.051188 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:40:14.051188 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:40:14.051188 master-0 kubenswrapper[7476]: I0320 08:40:14.051108 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 
20 08:40:15.018116 master-0 kubenswrapper[7476]: I0320 08:40:15.017904 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:40:15.018116 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:40:15.018116 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:40:15.018116 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:40:15.018116 master-0 kubenswrapper[7476]: I0320 08:40:15.017995 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:40:16.017330 master-0 kubenswrapper[7476]: I0320 08:40:16.017239 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:40:16.017330 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:40:16.017330 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:40:16.017330 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:40:16.018333 master-0 kubenswrapper[7476]: I0320 08:40:16.017338 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:40:17.017347 master-0 kubenswrapper[7476]: I0320 08:40:17.017210 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:40:17.017347 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:40:17.017347 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:40:17.017347 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:40:17.017347 master-0 kubenswrapper[7476]: I0320 08:40:17.017314 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:40:18.016243 master-0 kubenswrapper[7476]: I0320 08:40:18.016168 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:40:18.016243 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:40:18.016243 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:40:18.016243 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:40:18.016707 master-0 kubenswrapper[7476]: I0320 08:40:18.016255 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:40:19.016416 master-0 kubenswrapper[7476]: I0320 08:40:19.016370 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:40:19.016416 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:40:19.016416 
master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:40:19.016416 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:40:19.017050 master-0 kubenswrapper[7476]: I0320 08:40:19.017023 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:40:19.386928 master-0 kubenswrapper[7476]: I0320 08:40:19.386867 7476 scope.go:117] "RemoveContainer" containerID="cfd277b4fa13917f4d0cc04f7d6bdc6ea5d4df628ab0e4b86103cf26da62a23f" Mar 20 08:40:19.411609 master-0 kubenswrapper[7476]: I0320 08:40:19.411557 7476 scope.go:117] "RemoveContainer" containerID="c7aa165c0986788c15e1247a68719a95f704ec935f16e843c43124bc75fd9639" Mar 20 08:40:20.017147 master-0 kubenswrapper[7476]: I0320 08:40:20.017045 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:40:20.017147 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:40:20.017147 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:40:20.017147 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:40:20.017147 master-0 kubenswrapper[7476]: I0320 08:40:20.017137 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:40:21.017332 master-0 kubenswrapper[7476]: I0320 08:40:21.017205 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 20 08:40:21.017332 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:40:21.017332 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:40:21.017332 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:40:21.018451 master-0 kubenswrapper[7476]: I0320 08:40:21.017339 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:40:22.017482 master-0 kubenswrapper[7476]: I0320 08:40:22.017377 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:40:22.017482 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:40:22.017482 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:40:22.017482 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:40:22.018572 master-0 kubenswrapper[7476]: I0320 08:40:22.017511 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:40:23.018579 master-0 kubenswrapper[7476]: I0320 08:40:23.018452 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:40:23.018579 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:40:23.018579 master-0 kubenswrapper[7476]: [+]process-running ok 
Mar 20 08:40:23.018579 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:40:23.018579 master-0 kubenswrapper[7476]: I0320 08:40:23.018523 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:40:23.575376 master-0 kubenswrapper[7476]: I0320 08:40:23.575339 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-dknxr_22f85e98-eb36-46b2-ab5d-7c21e060cba5/ingress-operator/1.log" Mar 20 08:40:23.577402 master-0 kubenswrapper[7476]: I0320 08:40:23.577382 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-dknxr_22f85e98-eb36-46b2-ab5d-7c21e060cba5/ingress-operator/0.log" Mar 20 08:40:23.577573 master-0 kubenswrapper[7476]: I0320 08:40:23.577544 7476 generic.go:334] "Generic (PLEG): container finished" podID="22f85e98-eb36-46b2-ab5d-7c21e060cba5" containerID="aa474287c12f1d850a021861c8d0d4c93567ca052c1e7dd4ff6b75e56d25a823" exitCode=1 Mar 20 08:40:23.577770 master-0 kubenswrapper[7476]: I0320 08:40:23.577664 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-dknxr" event={"ID":"22f85e98-eb36-46b2-ab5d-7c21e060cba5","Type":"ContainerDied","Data":"aa474287c12f1d850a021861c8d0d4c93567ca052c1e7dd4ff6b75e56d25a823"} Mar 20 08:40:23.577971 master-0 kubenswrapper[7476]: I0320 08:40:23.577925 7476 scope.go:117] "RemoveContainer" containerID="e11ba7fccf3e3a03d9b7498dc0eb1bc10a9a5dcbb92c598146672eeafb4b1b79" Mar 20 08:40:23.578494 master-0 kubenswrapper[7476]: I0320 08:40:23.578475 7476 scope.go:117] "RemoveContainer" containerID="aa474287c12f1d850a021861c8d0d4c93567ca052c1e7dd4ff6b75e56d25a823" Mar 20 08:40:23.579038 master-0 kubenswrapper[7476]: E0320 08:40:23.578991 7476 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-dknxr_openshift-ingress-operator(22f85e98-eb36-46b2-ab5d-7c21e060cba5)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-dknxr" podUID="22f85e98-eb36-46b2-ab5d-7c21e060cba5"
Mar 20 08:40:24.018241 master-0 kubenswrapper[7476]: I0320 08:40:24.018157 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:40:24.018241 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:40:24.018241 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:40:24.018241 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:40:24.018241 master-0 kubenswrapper[7476]: I0320 08:40:24.018229 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:40:24.589388 master-0 kubenswrapper[7476]: I0320 08:40:24.589334 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-dknxr_22f85e98-eb36-46b2-ab5d-7c21e060cba5/ingress-operator/1.log"
Mar 20 08:40:25.017057 master-0 kubenswrapper[7476]: I0320 08:40:25.016990 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:40:25.017057 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:40:25.017057 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:40:25.017057 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:40:25.017341 master-0 kubenswrapper[7476]: I0320 08:40:25.017094 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:40:26.019725 master-0 kubenswrapper[7476]: I0320 08:40:26.019667 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:40:26.019725 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:40:26.019725 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:40:26.019725 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:40:26.020812 master-0 kubenswrapper[7476]: I0320 08:40:26.020407 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:40:27.017381 master-0 kubenswrapper[7476]: I0320 08:40:27.017314 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:40:27.017381 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:40:27.017381 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:40:27.017381 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:40:27.018140 master-0 kubenswrapper[7476]: I0320 08:40:27.018079 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:40:28.016775 master-0 kubenswrapper[7476]: I0320 08:40:28.016690 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:40:28.016775 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:40:28.016775 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:40:28.016775 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:40:28.017589 master-0 kubenswrapper[7476]: I0320 08:40:28.016785 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:40:29.020055 master-0 kubenswrapper[7476]: I0320 08:40:29.020002 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:40:29.020055 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:40:29.020055 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:40:29.020055 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:40:29.020621 master-0 kubenswrapper[7476]: I0320 08:40:29.020065 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:40:30.016833 master-0 kubenswrapper[7476]: I0320 08:40:30.016765 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:40:30.016833 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:40:30.016833 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:40:30.016833 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:40:30.017461 master-0 kubenswrapper[7476]: I0320 08:40:30.016845 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:40:31.020239 master-0 kubenswrapper[7476]: I0320 08:40:31.020055 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:40:31.020239 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:40:31.020239 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:40:31.020239 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:40:31.021455 master-0 kubenswrapper[7476]: I0320 08:40:31.020450 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:40:32.019039 master-0 kubenswrapper[7476]: I0320 08:40:32.018946 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:40:32.019039 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:40:32.019039 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:40:32.019039 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:40:32.019583 master-0 kubenswrapper[7476]: I0320 08:40:32.019073 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:40:33.017461 master-0 kubenswrapper[7476]: I0320 08:40:33.017363 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:40:33.017461 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:40:33.017461 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:40:33.017461 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:40:33.017461 master-0 kubenswrapper[7476]: I0320 08:40:33.017458 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:40:34.017605 master-0 kubenswrapper[7476]: I0320 08:40:34.017530 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:40:34.017605 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:40:34.017605 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:40:34.017605 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:40:34.018694 master-0 kubenswrapper[7476]: I0320 08:40:34.017655 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:40:35.019822 master-0 kubenswrapper[7476]: I0320 08:40:35.019742 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:40:35.019822 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:40:35.019822 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:40:35.019822 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:40:35.020902 master-0 kubenswrapper[7476]: I0320 08:40:35.019836 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:40:35.237362 master-0 kubenswrapper[7476]: I0320 08:40:35.237253 7476 scope.go:117] "RemoveContainer" containerID="aa474287c12f1d850a021861c8d0d4c93567ca052c1e7dd4ff6b75e56d25a823"
Mar 20 08:40:35.685341 master-0 kubenswrapper[7476]: I0320 08:40:35.684721 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-dknxr_22f85e98-eb36-46b2-ab5d-7c21e060cba5/ingress-operator/1.log"
Mar 20 08:40:35.685748 master-0 kubenswrapper[7476]: I0320 08:40:35.685425 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-dknxr" event={"ID":"22f85e98-eb36-46b2-ab5d-7c21e060cba5","Type":"ContainerStarted","Data":"7ebddc1f5af1710df13bf7d77f32dd790f89b2180d3fbd95cea82683956f88f2"}
Mar 20 08:40:36.017576 master-0 kubenswrapper[7476]: I0320 08:40:36.017337 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:40:36.017576 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:40:36.017576 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:40:36.017576 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:40:36.017576 master-0 kubenswrapper[7476]: I0320 08:40:36.017464 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:40:37.019950 master-0 kubenswrapper[7476]: I0320 08:40:37.019841 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:40:37.019950 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:40:37.019950 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:40:37.019950 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:40:37.021190 master-0 kubenswrapper[7476]: I0320 08:40:37.019955 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:40:38.017614 master-0 kubenswrapper[7476]: I0320 08:40:38.017524 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:40:38.017614 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:40:38.017614 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:40:38.017614 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:40:38.017614 master-0 kubenswrapper[7476]: I0320 08:40:38.017597 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:40:39.017394 master-0 kubenswrapper[7476]: I0320 08:40:39.017333 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:40:39.017394 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:40:39.017394 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:40:39.017394 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:40:39.018026 master-0 kubenswrapper[7476]: I0320 08:40:39.017444 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:40:40.017432 master-0 kubenswrapper[7476]: I0320 08:40:40.017307 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:40:40.017432 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:40:40.017432 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:40:40.017432 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:40:40.018786 master-0 kubenswrapper[7476]: I0320 08:40:40.017456 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:40:41.028586 master-0 kubenswrapper[7476]: I0320 08:40:41.028442 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:40:41.028586 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:40:41.028586 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:40:41.028586 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:40:41.029884 master-0 kubenswrapper[7476]: I0320 08:40:41.028616 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:40:42.017285 master-0 kubenswrapper[7476]: I0320 08:40:42.017195 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:40:42.017285 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:40:42.017285 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:40:42.017285 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:40:42.017851 master-0 kubenswrapper[7476]: I0320 08:40:42.017810 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:40:43.017305 master-0 kubenswrapper[7476]: I0320 08:40:43.017155 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:40:43.017305 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:40:43.017305 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:40:43.017305 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:40:43.017305 master-0 kubenswrapper[7476]: I0320 08:40:43.017251 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:40:44.061477 master-0 kubenswrapper[7476]: I0320 08:40:44.060940 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:40:44.061477 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:40:44.061477 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:40:44.061477 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:40:44.061477 master-0 kubenswrapper[7476]: I0320 08:40:44.061001 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:40:45.019433 master-0 kubenswrapper[7476]: I0320 08:40:45.019321 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:40:45.019433 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:40:45.019433 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:40:45.019433 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:40:45.019433 master-0 kubenswrapper[7476]: I0320 08:40:45.019447 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:40:46.018145 master-0 kubenswrapper[7476]: I0320 08:40:46.018021 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:40:46.018145 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:40:46.018145 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:40:46.018145 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:40:46.018145 master-0 kubenswrapper[7476]: I0320 08:40:46.018137 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:40:47.017538 master-0 kubenswrapper[7476]: I0320 08:40:47.017465 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:40:47.017538 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:40:47.017538 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:40:47.017538 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:40:47.017888 master-0 kubenswrapper[7476]: I0320 08:40:47.017540 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:40:48.017788 master-0 kubenswrapper[7476]: I0320 08:40:48.017728 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:40:48.017788 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:40:48.017788 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:40:48.017788 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:40:48.018544 master-0 kubenswrapper[7476]: I0320 08:40:48.017798 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:40:49.017757 master-0 kubenswrapper[7476]: I0320 08:40:49.017676 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:40:49.017757 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:40:49.017757 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:40:49.017757 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:40:49.018822 master-0 kubenswrapper[7476]: I0320 08:40:49.017833 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:40:50.017037 master-0 kubenswrapper[7476]: I0320 08:40:50.016944 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:40:50.017037 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:40:50.017037 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:40:50.017037 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:40:50.017538 master-0 kubenswrapper[7476]: I0320 08:40:50.017059 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:40:51.017724 master-0 kubenswrapper[7476]: I0320 08:40:51.017653 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:40:51.017724 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:40:51.017724 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:40:51.017724 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:40:51.019237 master-0 kubenswrapper[7476]: I0320 08:40:51.017743 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:40:52.017823 master-0 kubenswrapper[7476]: I0320 08:40:52.017722 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:40:52.017823 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:40:52.017823 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:40:52.017823 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:40:52.019090 master-0 kubenswrapper[7476]: I0320 08:40:52.017829 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:40:53.019327 master-0 kubenswrapper[7476]: I0320 08:40:53.019225 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:40:53.019327 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:40:53.019327 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:40:53.019327 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:40:53.019327 master-0 kubenswrapper[7476]: I0320 08:40:53.019304 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:40:54.017581 master-0 kubenswrapper[7476]: I0320 08:40:54.017418 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:40:54.017581 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:40:54.017581 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:40:54.017581 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:40:54.017581 master-0 kubenswrapper[7476]: I0320 08:40:54.017568 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:40:55.016996 master-0 kubenswrapper[7476]: I0320 08:40:55.016880 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:40:55.016996 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:40:55.016996 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:40:55.016996 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:40:55.016996 master-0 kubenswrapper[7476]: I0320 08:40:55.016981 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:40:56.016473 master-0 kubenswrapper[7476]: I0320 08:40:56.016405 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:40:56.016473 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:40:56.016473 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:40:56.016473 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:40:56.016473 master-0 kubenswrapper[7476]: I0320 08:40:56.016467 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:40:57.017207 master-0 kubenswrapper[7476]: I0320 08:40:57.017137 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:40:57.017207 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:40:57.017207 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:40:57.017207 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:40:57.018075 master-0 kubenswrapper[7476]: I0320 08:40:57.017207 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:40:58.019576 master-0 kubenswrapper[7476]: I0320 08:40:58.019504 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:40:58.019576 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:40:58.019576 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:40:58.019576 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:40:58.020865 master-0 kubenswrapper[7476]: I0320 08:40:58.020806 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:40:59.017404 master-0 kubenswrapper[7476]: I0320 08:40:59.017327 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:40:59.017404 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:40:59.017404 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:40:59.017404 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:40:59.017873 master-0 kubenswrapper[7476]: I0320 08:40:59.017425 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:41:00.016829 master-0 kubenswrapper[7476]: I0320 08:41:00.016733 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:41:00.016829 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:41:00.016829 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:41:00.016829 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:41:00.017342 master-0 kubenswrapper[7476]: I0320 08:41:00.016862 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:41:01.018643 master-0 kubenswrapper[7476]: I0320 08:41:01.018556 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:41:01.018643 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:41:01.018643 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:41:01.018643 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:41:01.020648 master-0 kubenswrapper[7476]: I0320 08:41:01.018651 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:41:02.016365 master-0 kubenswrapper[7476]: I0320 08:41:02.016301 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:41:02.016365 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:41:02.016365 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:41:02.016365 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:41:02.016365 master-0 kubenswrapper[7476]: I0320 08:41:02.016355 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:41:03.018509 master-0 kubenswrapper[7476]: I0320 08:41:03.018428 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:41:03.018509 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:41:03.018509 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:41:03.018509 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:41:03.018509 master-0 kubenswrapper[7476]: I0320 08:41:03.018503 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:41:04.020897 master-0 kubenswrapper[7476]: I0320 08:41:04.020792 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:41:04.020897 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:41:04.020897 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:41:04.020897 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:41:04.020897 master-0 kubenswrapper[7476]: I0320 08:41:04.020885 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:41:05.017873 master-0 kubenswrapper[7476]: I0320 08:41:05.017814 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:41:05.017873 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:41:05.017873 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:41:05.017873 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:41:05.018452 master-0 kubenswrapper[7476]: I0320 08:41:05.018405 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:41:06.017771 master-0 kubenswrapper[7476]: I0320 08:41:06.017678 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:41:06.017771 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:41:06.017771 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:41:06.017771 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:41:06.019091 master-0 kubenswrapper[7476]: I0320 08:41:06.017781 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:41:07.018033 master-0 kubenswrapper[7476]: I0320 08:41:07.017895 7476 
patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:41:07.018033 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:41:07.018033 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:41:07.018033 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:41:07.018033 master-0 kubenswrapper[7476]: I0320 08:41:07.018010 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:41:08.019613 master-0 kubenswrapper[7476]: I0320 08:41:08.019512 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:41:08.019613 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:41:08.019613 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:41:08.019613 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:41:08.019613 master-0 kubenswrapper[7476]: I0320 08:41:08.019599 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:41:09.016558 master-0 kubenswrapper[7476]: I0320 08:41:09.016471 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 20 08:41:09.016558 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:41:09.016558 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:41:09.016558 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:41:09.016558 master-0 kubenswrapper[7476]: I0320 08:41:09.016550 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:41:10.017454 master-0 kubenswrapper[7476]: I0320 08:41:10.017375 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:41:10.017454 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:41:10.017454 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:41:10.017454 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:41:10.018477 master-0 kubenswrapper[7476]: I0320 08:41:10.017473 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:41:11.018288 master-0 kubenswrapper[7476]: I0320 08:41:11.017540 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:41:11.018288 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:41:11.018288 master-0 kubenswrapper[7476]: [+]process-running ok 
Mar 20 08:41:11.018288 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:41:11.018288 master-0 kubenswrapper[7476]: I0320 08:41:11.017682 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:41:12.017519 master-0 kubenswrapper[7476]: I0320 08:41:12.017445 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:41:12.017519 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:41:12.017519 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:41:12.017519 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:41:12.017868 master-0 kubenswrapper[7476]: I0320 08:41:12.017549 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:41:13.020144 master-0 kubenswrapper[7476]: I0320 08:41:13.020036 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:41:13.020144 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:41:13.020144 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:41:13.020144 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:41:13.021236 master-0 kubenswrapper[7476]: I0320 08:41:13.020158 7476 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:41:14.016658 master-0 kubenswrapper[7476]: I0320 08:41:14.016581 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:41:14.016658 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:41:14.016658 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:41:14.016658 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:41:14.016939 master-0 kubenswrapper[7476]: I0320 08:41:14.016695 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:41:14.016939 master-0 kubenswrapper[7476]: I0320 08:41:14.016767 7476 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" Mar 20 08:41:14.018061 master-0 kubenswrapper[7476]: I0320 08:41:14.018013 7476 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"9f3b47575a455c1af61754677babc355c6032015c14d444d604fb6bbfbe54a24"} pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" containerMessage="Container router failed startup probe, will be restarted" Mar 20 08:41:14.018129 master-0 kubenswrapper[7476]: I0320 08:41:14.018082 7476 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" 
containerID="cri-o://9f3b47575a455c1af61754677babc355c6032015c14d444d604fb6bbfbe54a24" gracePeriod=3600 Mar 20 08:42:00.877067 master-0 kubenswrapper[7476]: I0320 08:42:00.876999 7476 generic.go:334] "Generic (PLEG): container finished" podID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerID="9f3b47575a455c1af61754677babc355c6032015c14d444d604fb6bbfbe54a24" exitCode=0 Mar 20 08:42:00.877067 master-0 kubenswrapper[7476]: I0320 08:42:00.877042 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" event={"ID":"e89571b2-098c-495b-9b53-c4ebd95296ab","Type":"ContainerDied","Data":"9f3b47575a455c1af61754677babc355c6032015c14d444d604fb6bbfbe54a24"} Mar 20 08:42:00.877669 master-0 kubenswrapper[7476]: I0320 08:42:00.877101 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" event={"ID":"e89571b2-098c-495b-9b53-c4ebd95296ab","Type":"ContainerStarted","Data":"f6fca13f29777c3e581624d7e050cfe207017354f2d3e38a35c450e9f709ea25"} Mar 20 08:42:01.015091 master-0 kubenswrapper[7476]: I0320 08:42:01.014938 7476 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" Mar 20 08:42:01.019485 master-0 kubenswrapper[7476]: I0320 08:42:01.019363 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:42:01.019485 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:42:01.019485 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:42:01.019485 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:42:01.020051 master-0 kubenswrapper[7476]: I0320 08:42:01.019496 7476 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:42:02.014399 master-0 kubenswrapper[7476]: I0320 08:42:02.014333 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" Mar 20 08:42:02.017496 master-0 kubenswrapper[7476]: I0320 08:42:02.017432 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:42:02.017496 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:42:02.017496 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:42:02.017496 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:42:02.017804 master-0 kubenswrapper[7476]: I0320 08:42:02.017531 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:42:03.017614 master-0 kubenswrapper[7476]: I0320 08:42:03.017473 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:42:03.017614 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:42:03.017614 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:42:03.017614 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:42:03.018785 master-0 kubenswrapper[7476]: I0320 08:42:03.017652 7476 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:42:04.016730 master-0 kubenswrapper[7476]: I0320 08:42:04.016643 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:42:04.016730 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:42:04.016730 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:42:04.016730 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:42:04.016730 master-0 kubenswrapper[7476]: I0320 08:42:04.016723 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:42:05.016994 master-0 kubenswrapper[7476]: I0320 08:42:05.016907 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:42:05.016994 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:42:05.016994 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:42:05.016994 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:42:05.018172 master-0 kubenswrapper[7476]: I0320 08:42:05.016994 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:42:06.016952 
master-0 kubenswrapper[7476]: I0320 08:42:06.016876 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:42:06.016952 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:42:06.016952 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:42:06.016952 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:42:06.018172 master-0 kubenswrapper[7476]: I0320 08:42:06.018122 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:42:07.019767 master-0 kubenswrapper[7476]: I0320 08:42:07.019210 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:42:07.019767 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:42:07.019767 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:42:07.019767 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:42:07.019767 master-0 kubenswrapper[7476]: I0320 08:42:07.019369 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:42:08.020165 master-0 kubenswrapper[7476]: I0320 08:42:08.020049 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe 
failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:42:08.020165 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:42:08.020165 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:42:08.020165 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:42:08.021922 master-0 kubenswrapper[7476]: I0320 08:42:08.021869 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:42:09.017896 master-0 kubenswrapper[7476]: I0320 08:42:09.017773 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:42:09.017896 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:42:09.017896 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:42:09.017896 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:42:09.017896 master-0 kubenswrapper[7476]: I0320 08:42:09.017875 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:42:10.018025 master-0 kubenswrapper[7476]: I0320 08:42:10.017904 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:42:10.018025 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:42:10.018025 master-0 
kubenswrapper[7476]: [+]process-running ok Mar 20 08:42:10.018025 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:42:10.019190 master-0 kubenswrapper[7476]: I0320 08:42:10.018014 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:42:11.018601 master-0 kubenswrapper[7476]: I0320 08:42:11.018517 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:42:11.018601 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:42:11.018601 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:42:11.018601 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:42:11.018601 master-0 kubenswrapper[7476]: I0320 08:42:11.018580 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:42:12.017346 master-0 kubenswrapper[7476]: I0320 08:42:12.017206 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:42:12.017346 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:42:12.017346 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:42:12.017346 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:42:12.017827 master-0 kubenswrapper[7476]: I0320 08:42:12.017396 7476 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:42:13.018777 master-0 kubenswrapper[7476]: I0320 08:42:13.018635 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:42:13.018777 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:42:13.018777 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:42:13.018777 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:42:13.018777 master-0 kubenswrapper[7476]: I0320 08:42:13.018761 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:42:14.020642 master-0 kubenswrapper[7476]: I0320 08:42:14.019998 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:42:14.020642 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:42:14.020642 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:42:14.020642 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:42:14.020642 master-0 kubenswrapper[7476]: I0320 08:42:14.020107 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 
500" Mar 20 08:42:15.016650 master-0 kubenswrapper[7476]: I0320 08:42:15.016468 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:42:15.016650 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:42:15.016650 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:42:15.016650 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:42:15.016650 master-0 kubenswrapper[7476]: I0320 08:42:15.016582 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:42:16.016891 master-0 kubenswrapper[7476]: I0320 08:42:16.016731 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:42:16.016891 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:42:16.016891 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:42:16.016891 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:42:16.016891 master-0 kubenswrapper[7476]: I0320 08:42:16.016874 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:42:17.017144 master-0 kubenswrapper[7476]: I0320 08:42:17.017046 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:42:17.017144 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:42:17.017144 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:42:17.017144 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:42:17.017952 master-0 kubenswrapper[7476]: I0320 08:42:17.017144 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:42:18.017761 master-0 kubenswrapper[7476]: I0320 08:42:18.017654 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:42:18.017761 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:42:18.017761 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:42:18.017761 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:42:18.017761 master-0 kubenswrapper[7476]: I0320 08:42:18.017758 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:42:19.019193 master-0 kubenswrapper[7476]: I0320 08:42:19.019089 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:42:19.019193 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 
08:42:19.019193 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:42:19.019193 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:42:19.019193 master-0 kubenswrapper[7476]: I0320 08:42:19.019206 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:42:19.472584 master-0 kubenswrapper[7476]: I0320 08:42:19.472514 7476 scope.go:117] "RemoveContainer" containerID="b9956da416bbab1bdee494776bdb27eb3ac95a887e77cad24c8e769254d76bb0" Mar 20 08:42:20.017346 master-0 kubenswrapper[7476]: I0320 08:42:20.017260 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:42:20.017346 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:42:20.017346 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:42:20.017346 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:42:20.017346 master-0 kubenswrapper[7476]: I0320 08:42:20.017349 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:42:21.018626 master-0 kubenswrapper[7476]: I0320 08:42:21.018522 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:42:21.018626 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:42:21.018626 master-0 
kubenswrapper[7476]: [+]process-running ok Mar 20 08:42:21.018626 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:42:21.019709 master-0 kubenswrapper[7476]: I0320 08:42:21.018643 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:42:22.018045 master-0 kubenswrapper[7476]: I0320 08:42:22.017931 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:42:22.018045 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:42:22.018045 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:42:22.018045 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:42:22.018045 master-0 kubenswrapper[7476]: I0320 08:42:22.018017 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:42:23.017499 master-0 kubenswrapper[7476]: I0320 08:42:23.017397 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:42:23.017499 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:42:23.017499 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:42:23.017499 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:42:23.018885 master-0 kubenswrapper[7476]: I0320 08:42:23.017522 7476 prober.go:107] "Probe 
failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:42:24.017589 master-0 kubenswrapper[7476]: I0320 08:42:24.017449 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:42:24.017589 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:42:24.017589 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:42:24.017589 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:42:24.018693 master-0 kubenswrapper[7476]: I0320 08:42:24.017660 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:42:25.017575 master-0 kubenswrapper[7476]: I0320 08:42:25.017454 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:42:25.017575 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:42:25.017575 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:42:25.017575 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:42:25.018732 master-0 kubenswrapper[7476]: I0320 08:42:25.017577 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 
500" Mar 20 08:42:26.019310 master-0 kubenswrapper[7476]: I0320 08:42:26.019195 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:42:26.019310 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:42:26.019310 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:42:26.019310 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:42:26.020653 master-0 kubenswrapper[7476]: I0320 08:42:26.019330 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:42:27.017298 master-0 kubenswrapper[7476]: I0320 08:42:27.017170 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:42:27.017298 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:42:27.017298 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:42:27.017298 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:42:27.018527 master-0 kubenswrapper[7476]: I0320 08:42:27.017346 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:42:28.017956 master-0 kubenswrapper[7476]: I0320 08:42:28.017848 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:42:28.017956 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:42:28.017956 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:42:28.017956 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:42:28.017956 master-0 kubenswrapper[7476]: I0320 08:42:28.017939 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:42:29.017172 master-0 kubenswrapper[7476]: I0320 08:42:29.017080 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:42:29.017172 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:42:29.017172 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:42:29.017172 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:42:29.017774 master-0 kubenswrapper[7476]: I0320 08:42:29.017172 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:42:30.019031 master-0 kubenswrapper[7476]: I0320 08:42:30.018954 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:42:30.019031 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 
08:42:30.019031 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:42:30.019031 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:42:30.019654 master-0 kubenswrapper[7476]: I0320 08:42:30.019053 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:42:31.017427 master-0 kubenswrapper[7476]: I0320 08:42:31.017307 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:42:31.017427 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:42:31.017427 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:42:31.017427 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:42:31.018205 master-0 kubenswrapper[7476]: I0320 08:42:31.017459 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:42:32.017521 master-0 kubenswrapper[7476]: I0320 08:42:32.017437 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:42:32.017521 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:42:32.017521 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:42:32.017521 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:42:32.018514 master-0 kubenswrapper[7476]: I0320 08:42:32.017531 
7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:42:33.017244 master-0 kubenswrapper[7476]: I0320 08:42:33.017168 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:42:33.017244 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:42:33.017244 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:42:33.017244 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:42:33.018206 master-0 kubenswrapper[7476]: I0320 08:42:33.017323 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:42:34.018936 master-0 kubenswrapper[7476]: I0320 08:42:34.018563 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:42:34.018936 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:42:34.018936 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:42:34.018936 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:42:34.018936 master-0 kubenswrapper[7476]: I0320 08:42:34.018667 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP 
probe failed with statuscode: 500" Mar 20 08:42:35.018457 master-0 kubenswrapper[7476]: I0320 08:42:35.018322 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:42:35.018457 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:42:35.018457 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:42:35.018457 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:42:35.018457 master-0 kubenswrapper[7476]: I0320 08:42:35.018414 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:42:36.017498 master-0 kubenswrapper[7476]: I0320 08:42:36.017414 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:42:36.017498 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:42:36.017498 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:42:36.017498 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:42:36.017827 master-0 kubenswrapper[7476]: I0320 08:42:36.017524 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:42:37.017397 master-0 kubenswrapper[7476]: I0320 08:42:37.017250 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:42:37.017397 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:42:37.017397 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:42:37.017397 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:42:37.018575 master-0 kubenswrapper[7476]: I0320 08:42:37.017398 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:42:37.186465 master-0 kubenswrapper[7476]: I0320 08:42:37.186416 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-dknxr_22f85e98-eb36-46b2-ab5d-7c21e060cba5/ingress-operator/2.log" Mar 20 08:42:37.188478 master-0 kubenswrapper[7476]: I0320 08:42:37.188449 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-dknxr_22f85e98-eb36-46b2-ab5d-7c21e060cba5/ingress-operator/1.log" Mar 20 08:42:37.189175 master-0 kubenswrapper[7476]: I0320 08:42:37.189126 7476 generic.go:334] "Generic (PLEG): container finished" podID="22f85e98-eb36-46b2-ab5d-7c21e060cba5" containerID="7ebddc1f5af1710df13bf7d77f32dd790f89b2180d3fbd95cea82683956f88f2" exitCode=1 Mar 20 08:42:37.189445 master-0 kubenswrapper[7476]: I0320 08:42:37.189220 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-dknxr" event={"ID":"22f85e98-eb36-46b2-ab5d-7c21e060cba5","Type":"ContainerDied","Data":"7ebddc1f5af1710df13bf7d77f32dd790f89b2180d3fbd95cea82683956f88f2"} Mar 20 08:42:37.189644 master-0 kubenswrapper[7476]: I0320 08:42:37.189620 7476 scope.go:117] "RemoveContainer" 
containerID="aa474287c12f1d850a021861c8d0d4c93567ca052c1e7dd4ff6b75e56d25a823" Mar 20 08:42:37.190621 master-0 kubenswrapper[7476]: I0320 08:42:37.190544 7476 scope.go:117] "RemoveContainer" containerID="7ebddc1f5af1710df13bf7d77f32dd790f89b2180d3fbd95cea82683956f88f2" Mar 20 08:42:37.191118 master-0 kubenswrapper[7476]: E0320 08:42:37.191037 7476 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-dknxr_openshift-ingress-operator(22f85e98-eb36-46b2-ab5d-7c21e060cba5)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-dknxr" podUID="22f85e98-eb36-46b2-ab5d-7c21e060cba5" Mar 20 08:42:38.017801 master-0 kubenswrapper[7476]: I0320 08:42:38.017720 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:42:38.017801 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:42:38.017801 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:42:38.017801 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:42:38.018825 master-0 kubenswrapper[7476]: I0320 08:42:38.017817 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:42:38.200399 master-0 kubenswrapper[7476]: I0320 08:42:38.200320 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-dknxr_22f85e98-eb36-46b2-ab5d-7c21e060cba5/ingress-operator/2.log" Mar 20 08:42:39.017807 master-0 kubenswrapper[7476]: I0320 08:42:39.017722 7476 
patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:42:39.017807 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:42:39.017807 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:42:39.017807 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:42:39.019036 master-0 kubenswrapper[7476]: I0320 08:42:39.017821 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:42:40.016940 master-0 kubenswrapper[7476]: I0320 08:42:40.016871 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:42:40.016940 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:42:40.016940 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:42:40.016940 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:42:40.017623 master-0 kubenswrapper[7476]: I0320 08:42:40.017575 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:42:41.017873 master-0 kubenswrapper[7476]: I0320 08:42:41.017784 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 20 08:42:41.017873 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:42:41.017873 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:42:41.017873 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:42:41.019228 master-0 kubenswrapper[7476]: I0320 08:42:41.017910 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:42:42.017345 master-0 kubenswrapper[7476]: I0320 08:42:42.017209 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:42:42.017345 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:42:42.017345 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:42:42.017345 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:42:42.017828 master-0 kubenswrapper[7476]: I0320 08:42:42.017352 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:42:43.017074 master-0 kubenswrapper[7476]: I0320 08:42:43.016997 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:42:43.017074 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:42:43.017074 master-0 kubenswrapper[7476]: [+]process-running ok 
Mar 20 08:42:43.017074 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:42:43.018059 master-0 kubenswrapper[7476]: I0320 08:42:43.017090 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:42:44.019603 master-0 kubenswrapper[7476]: I0320 08:42:44.019507 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:42:44.019603 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:42:44.019603 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:42:44.019603 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:42:44.034457 master-0 kubenswrapper[7476]: I0320 08:42:44.019615 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:42:45.017205 master-0 kubenswrapper[7476]: I0320 08:42:45.017124 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:42:45.017205 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:42:45.017205 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:42:45.017205 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:42:45.017811 master-0 kubenswrapper[7476]: I0320 08:42:45.017210 7476 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:42:46.017838 master-0 kubenswrapper[7476]: I0320 08:42:46.017738 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:42:46.017838 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:42:46.017838 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:42:46.017838 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:42:46.021688 master-0 kubenswrapper[7476]: I0320 08:42:46.017837 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:42:47.017636 master-0 kubenswrapper[7476]: I0320 08:42:47.017536 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:42:47.017636 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:42:47.017636 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:42:47.017636 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:42:47.018496 master-0 kubenswrapper[7476]: I0320 08:42:47.017656 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:42:48.016835 
master-0 kubenswrapper[7476]: I0320 08:42:48.016716 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:42:48.016835 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:42:48.016835 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:42:48.016835 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:42:48.017251 master-0 kubenswrapper[7476]: I0320 08:42:48.016830 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:42:49.019311 master-0 kubenswrapper[7476]: I0320 08:42:49.019175 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:42:49.019311 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:42:49.019311 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:42:49.019311 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:42:49.019311 master-0 kubenswrapper[7476]: I0320 08:42:49.019259 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:42:49.242989 master-0 kubenswrapper[7476]: I0320 08:42:49.242889 7476 scope.go:117] "RemoveContainer" containerID="7ebddc1f5af1710df13bf7d77f32dd790f89b2180d3fbd95cea82683956f88f2" Mar 20 08:42:49.244516 master-0 
kubenswrapper[7476]: E0320 08:42:49.244446 7476 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-dknxr_openshift-ingress-operator(22f85e98-eb36-46b2-ab5d-7c21e060cba5)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-dknxr" podUID="22f85e98-eb36-46b2-ab5d-7c21e060cba5" Mar 20 08:42:50.017953 master-0 kubenswrapper[7476]: I0320 08:42:50.017872 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:42:50.017953 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:42:50.017953 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:42:50.017953 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:42:50.018569 master-0 kubenswrapper[7476]: I0320 08:42:50.017978 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:42:51.018021 master-0 kubenswrapper[7476]: I0320 08:42:51.017948 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:42:51.018021 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:42:51.018021 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:42:51.018021 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:42:51.018021 master-0 kubenswrapper[7476]: I0320 08:42:51.018030 7476 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:42:52.017863 master-0 kubenswrapper[7476]: I0320 08:42:52.017767 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:42:52.017863 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:42:52.017863 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:42:52.017863 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:42:52.017863 master-0 kubenswrapper[7476]: I0320 08:42:52.017855 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:42:53.017329 master-0 kubenswrapper[7476]: I0320 08:42:53.017152 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:42:53.017329 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:42:53.017329 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:42:53.017329 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:42:53.018528 master-0 kubenswrapper[7476]: I0320 08:42:53.018409 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe 
failed with statuscode: 500" Mar 20 08:42:54.017478 master-0 kubenswrapper[7476]: I0320 08:42:54.017365 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:42:54.017478 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:42:54.017478 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:42:54.017478 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:42:54.019881 master-0 kubenswrapper[7476]: I0320 08:42:54.017513 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:42:55.018155 master-0 kubenswrapper[7476]: I0320 08:42:55.018050 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:42:55.018155 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:42:55.018155 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:42:55.018155 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:42:55.018653 master-0 kubenswrapper[7476]: I0320 08:42:55.018155 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:42:56.017442 master-0 kubenswrapper[7476]: I0320 08:42:56.017230 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:42:56.017442 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:42:56.017442 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:42:56.017442 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:42:56.017442 master-0 kubenswrapper[7476]: I0320 08:42:56.017358 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:42:57.018467 master-0 kubenswrapper[7476]: I0320 08:42:57.018392 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:42:57.018467 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:42:57.018467 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:42:57.018467 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:42:57.019661 master-0 kubenswrapper[7476]: I0320 08:42:57.018496 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:42:58.017120 master-0 kubenswrapper[7476]: I0320 08:42:58.017029 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:42:58.017120 master-0 kubenswrapper[7476]: 
[-]has-synced failed: reason withheld Mar 20 08:42:58.017120 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:42:58.017120 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:42:58.017683 master-0 kubenswrapper[7476]: I0320 08:42:58.017127 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:42:59.017565 master-0 kubenswrapper[7476]: I0320 08:42:59.017479 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:42:59.017565 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:42:59.017565 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:42:59.017565 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:42:59.018401 master-0 kubenswrapper[7476]: I0320 08:42:59.017563 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:43:00.017592 master-0 kubenswrapper[7476]: I0320 08:43:00.017494 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:43:00.017592 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:43:00.017592 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:43:00.017592 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:43:00.018861 master-0 
kubenswrapper[7476]: I0320 08:43:00.017600 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:43:01.016515 master-0 kubenswrapper[7476]: I0320 08:43:01.016443 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:43:01.016515 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:43:01.016515 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:43:01.016515 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:43:01.016942 master-0 kubenswrapper[7476]: I0320 08:43:01.016518 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:43:01.237582 master-0 kubenswrapper[7476]: I0320 08:43:01.237522 7476 scope.go:117] "RemoveContainer" containerID="7ebddc1f5af1710df13bf7d77f32dd790f89b2180d3fbd95cea82683956f88f2" Mar 20 08:43:01.407892 master-0 kubenswrapper[7476]: I0320 08:43:01.407843 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-dknxr_22f85e98-eb36-46b2-ab5d-7c21e060cba5/ingress-operator/2.log" Mar 20 08:43:02.015963 master-0 kubenswrapper[7476]: I0320 08:43:02.015903 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:43:02.015963 master-0 
kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:43:02.015963 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:43:02.015963 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:43:02.016404 master-0 kubenswrapper[7476]: I0320 08:43:02.015983 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:43:02.415946 master-0 kubenswrapper[7476]: I0320 08:43:02.415864 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-dknxr_22f85e98-eb36-46b2-ab5d-7c21e060cba5/ingress-operator/2.log" Mar 20 08:43:02.417529 master-0 kubenswrapper[7476]: I0320 08:43:02.417471 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-dknxr" event={"ID":"22f85e98-eb36-46b2-ab5d-7c21e060cba5","Type":"ContainerStarted","Data":"4557628b4e1a86ee2671291620562da3ce234a1e5a65125b7811c20080db0e77"} Mar 20 08:43:03.017516 master-0 kubenswrapper[7476]: I0320 08:43:03.017379 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:43:03.017516 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:43:03.017516 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:43:03.017516 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:43:03.017516 master-0 kubenswrapper[7476]: I0320 08:43:03.017494 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP 
probe failed with statuscode: 500" Mar 20 08:43:04.016434 master-0 kubenswrapper[7476]: I0320 08:43:04.016340 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:43:04.016434 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:43:04.016434 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:43:04.016434 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:43:04.017405 master-0 kubenswrapper[7476]: I0320 08:43:04.016442 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:43:05.017656 master-0 kubenswrapper[7476]: I0320 08:43:05.017594 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:43:05.017656 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:43:05.017656 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:43:05.017656 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:43:05.018983 master-0 kubenswrapper[7476]: I0320 08:43:05.017670 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:43:06.017312 master-0 kubenswrapper[7476]: I0320 08:43:06.017219 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:43:06.017312 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:43:06.017312 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:43:06.017312 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:43:06.017724 master-0 kubenswrapper[7476]: I0320 08:43:06.017338 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:43:07.017108 master-0 kubenswrapper[7476]: I0320 08:43:07.017041 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:43:07.017108 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:43:07.017108 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:43:07.017108 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:43:07.017464 master-0 kubenswrapper[7476]: I0320 08:43:07.017108 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:43:08.016543 master-0 kubenswrapper[7476]: I0320 08:43:08.016425 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:43:08.016543 master-0 kubenswrapper[7476]: 
[-]has-synced failed: reason withheld Mar 20 08:43:08.016543 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:43:08.016543 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:43:08.017681 master-0 kubenswrapper[7476]: I0320 08:43:08.016572 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:43:09.017258 master-0 kubenswrapper[7476]: I0320 08:43:09.017160 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:43:09.017258 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:43:09.017258 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:43:09.017258 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:43:09.017258 master-0 kubenswrapper[7476]: I0320 08:43:09.017252 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:43:10.016655 master-0 kubenswrapper[7476]: I0320 08:43:10.016544 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:43:10.016655 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:43:10.016655 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:43:10.016655 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:43:10.016973 master-0 
kubenswrapper[7476]: I0320 08:43:10.016734 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:43:11.018162 master-0 kubenswrapper[7476]: I0320 08:43:11.017936 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:43:11.018162 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:43:11.018162 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:43:11.018162 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:43:11.018162 master-0 kubenswrapper[7476]: I0320 08:43:11.018035 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:43:12.020042 master-0 kubenswrapper[7476]: I0320 08:43:12.019960 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:43:12.020042 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:43:12.020042 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:43:12.020042 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:43:12.020042 master-0 kubenswrapper[7476]: I0320 08:43:12.020018 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:43:13.018229 master-0 kubenswrapper[7476]: I0320 08:43:13.018132 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:43:13.018229 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:43:13.018229 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:43:13.018229 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:43:13.018635 master-0 kubenswrapper[7476]: I0320 08:43:13.018247 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:43:14.019717 master-0 kubenswrapper[7476]: I0320 08:43:14.019082 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:43:14.019717 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:43:14.019717 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:43:14.019717 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:43:14.019717 master-0 kubenswrapper[7476]: I0320 08:43:14.019177 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:43:15.017753 master-0 kubenswrapper[7476]: I0320 08:43:15.017478 7476 patch_prober.go:28] interesting 
pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:43:15.017753 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:43:15.017753 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:43:15.017753 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:43:15.017753 master-0 kubenswrapper[7476]: I0320 08:43:15.017614 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:43:16.008489 master-0 kubenswrapper[7476]: I0320 08:43:16.008349 7476 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-2-master-0"] Mar 20 08:43:16.010077 master-0 kubenswrapper[7476]: I0320 08:43:16.009883 7476 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-2-master-0" Mar 20 08:43:16.012807 master-0 kubenswrapper[7476]: I0320 08:43:16.012754 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd"/"installer-sa-dockercfg-8hjmb" Mar 20 08:43:16.013461 master-0 kubenswrapper[7476]: I0320 08:43:16.013425 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd"/"kube-root-ca.crt" Mar 20 08:43:16.022139 master-0 kubenswrapper[7476]: I0320 08:43:16.022064 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-2-master-0"] Mar 20 08:43:16.082572 master-0 kubenswrapper[7476]: I0320 08:43:16.082484 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:43:16.082572 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:43:16.082572 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:43:16.082572 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:43:16.082966 master-0 kubenswrapper[7476]: I0320 08:43:16.082621 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:43:16.110371 master-0 kubenswrapper[7476]: I0320 08:43:16.100523 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/26923e70-56a5-4020-8b55-510879ec6fd4-kube-api-access\") pod \"installer-2-master-0\" (UID: \"26923e70-56a5-4020-8b55-510879ec6fd4\") " pod="openshift-etcd/installer-2-master-0" Mar 20 08:43:16.110371 master-0 kubenswrapper[7476]: I0320 08:43:16.100652 7476 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/26923e70-56a5-4020-8b55-510879ec6fd4-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"26923e70-56a5-4020-8b55-510879ec6fd4\") " pod="openshift-etcd/installer-2-master-0" Mar 20 08:43:16.110371 master-0 kubenswrapper[7476]: I0320 08:43:16.100748 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/26923e70-56a5-4020-8b55-510879ec6fd4-var-lock\") pod \"installer-2-master-0\" (UID: \"26923e70-56a5-4020-8b55-510879ec6fd4\") " pod="openshift-etcd/installer-2-master-0" Mar 20 08:43:16.204158 master-0 kubenswrapper[7476]: I0320 08:43:16.202537 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/26923e70-56a5-4020-8b55-510879ec6fd4-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"26923e70-56a5-4020-8b55-510879ec6fd4\") " pod="openshift-etcd/installer-2-master-0" Mar 20 08:43:16.204158 master-0 kubenswrapper[7476]: I0320 08:43:16.202698 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/26923e70-56a5-4020-8b55-510879ec6fd4-var-lock\") pod \"installer-2-master-0\" (UID: \"26923e70-56a5-4020-8b55-510879ec6fd4\") " pod="openshift-etcd/installer-2-master-0" Mar 20 08:43:16.204158 master-0 kubenswrapper[7476]: I0320 08:43:16.202817 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/26923e70-56a5-4020-8b55-510879ec6fd4-kube-api-access\") pod \"installer-2-master-0\" (UID: \"26923e70-56a5-4020-8b55-510879ec6fd4\") " pod="openshift-etcd/installer-2-master-0" Mar 20 08:43:16.204158 master-0 kubenswrapper[7476]: I0320 08:43:16.203498 7476 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/26923e70-56a5-4020-8b55-510879ec6fd4-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"26923e70-56a5-4020-8b55-510879ec6fd4\") " pod="openshift-etcd/installer-2-master-0" Mar 20 08:43:16.204158 master-0 kubenswrapper[7476]: I0320 08:43:16.203919 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/26923e70-56a5-4020-8b55-510879ec6fd4-var-lock\") pod \"installer-2-master-0\" (UID: \"26923e70-56a5-4020-8b55-510879ec6fd4\") " pod="openshift-etcd/installer-2-master-0" Mar 20 08:43:16.230374 master-0 kubenswrapper[7476]: I0320 08:43:16.227930 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/26923e70-56a5-4020-8b55-510879ec6fd4-kube-api-access\") pod \"installer-2-master-0\" (UID: \"26923e70-56a5-4020-8b55-510879ec6fd4\") " pod="openshift-etcd/installer-2-master-0" Mar 20 08:43:16.409820 master-0 kubenswrapper[7476]: I0320 08:43:16.409751 7476 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-2-master-0" Mar 20 08:43:16.888154 master-0 kubenswrapper[7476]: I0320 08:43:16.888097 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-2-master-0"] Mar 20 08:43:16.891848 master-0 kubenswrapper[7476]: W0320 08:43:16.891476 7476 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod26923e70_56a5_4020_8b55_510879ec6fd4.slice/crio-c78227a4a3db86dc69334917f189dbfb156f17531ec0c958d73bd5cb930242bc WatchSource:0}: Error finding container c78227a4a3db86dc69334917f189dbfb156f17531ec0c958d73bd5cb930242bc: Status 404 returned error can't find the container with id c78227a4a3db86dc69334917f189dbfb156f17531ec0c958d73bd5cb930242bc Mar 20 08:43:17.016153 master-0 kubenswrapper[7476]: I0320 08:43:17.016097 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:43:17.016153 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:43:17.016153 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:43:17.016153 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:43:17.016700 master-0 kubenswrapper[7476]: I0320 08:43:17.016155 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:43:17.545431 master-0 kubenswrapper[7476]: I0320 08:43:17.545241 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"26923e70-56a5-4020-8b55-510879ec6fd4","Type":"ContainerStarted","Data":"4efa2d7ff0f9f10f26d4d217feeb2ea6ecccefb675bc71c18faa7c5fe6db33c6"} Mar 20 08:43:17.545431 
master-0 kubenswrapper[7476]: I0320 08:43:17.545340 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"26923e70-56a5-4020-8b55-510879ec6fd4","Type":"ContainerStarted","Data":"c78227a4a3db86dc69334917f189dbfb156f17531ec0c958d73bd5cb930242bc"} Mar 20 08:43:17.571460 master-0 kubenswrapper[7476]: I0320 08:43:17.571116 7476 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/installer-2-master-0" podStartSLOduration=2.571096254 podStartE2EDuration="2.571096254s" podCreationTimestamp="2026-03-20 08:43:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:43:17.569864321 +0000 UTC m=+478.538632857" watchObservedRunningTime="2026-03-20 08:43:17.571096254 +0000 UTC m=+478.539864780" Mar 20 08:43:18.017619 master-0 kubenswrapper[7476]: I0320 08:43:18.017535 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:43:18.017619 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:43:18.017619 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:43:18.017619 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:43:18.018334 master-0 kubenswrapper[7476]: I0320 08:43:18.017620 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:43:19.016636 master-0 kubenswrapper[7476]: I0320 08:43:19.016552 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:43:19.016636 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:43:19.016636 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:43:19.016636 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:43:19.016636 master-0 kubenswrapper[7476]: I0320 08:43:19.016628 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:43:19.502400 master-0 kubenswrapper[7476]: I0320 08:43:19.502237 7476 scope.go:117] "RemoveContainer" containerID="533ca83f2f1cbe90843aea19e67a25f7a7f9cb27edbc66c29caae3aa94a291f5" Mar 20 08:43:19.534674 master-0 kubenswrapper[7476]: I0320 08:43:19.534607 7476 scope.go:117] "RemoveContainer" containerID="fddad6fba182f96b236344babb403bec4283b752ab6cd93abdc1905a34daa41f" Mar 20 08:43:20.017558 master-0 kubenswrapper[7476]: I0320 08:43:20.017488 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:43:20.017558 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:43:20.017558 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:43:20.017558 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:43:20.017558 master-0 kubenswrapper[7476]: I0320 08:43:20.017562 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:43:21.018685 master-0 kubenswrapper[7476]: I0320 
08:43:21.018564 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:43:21.018685 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:43:21.018685 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:43:21.018685 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:43:21.018685 master-0 kubenswrapper[7476]: I0320 08:43:21.018665 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:43:22.018179 master-0 kubenswrapper[7476]: I0320 08:43:22.018092 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:43:22.018179 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:43:22.018179 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:43:22.018179 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:43:22.018642 master-0 kubenswrapper[7476]: I0320 08:43:22.018220 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:43:23.016827 master-0 kubenswrapper[7476]: I0320 08:43:23.016773 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Mar 20 08:43:23.016827 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:43:23.016827 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:43:23.016827 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:43:23.017501 master-0 kubenswrapper[7476]: I0320 08:43:23.016844 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:43:24.015877 master-0 kubenswrapper[7476]: I0320 08:43:24.015825 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:43:24.015877 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:43:24.015877 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:43:24.015877 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:43:24.016315 master-0 kubenswrapper[7476]: I0320 08:43:24.015888 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:43:25.017240 master-0 kubenswrapper[7476]: I0320 08:43:25.017156 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:43:25.017240 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:43:25.017240 master-0 kubenswrapper[7476]: [+]process-running ok 
Mar 20 08:43:25.017240 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:43:25.017806 master-0 kubenswrapper[7476]: I0320 08:43:25.017320 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:43:26.015477 master-0 kubenswrapper[7476]: I0320 08:43:26.015410 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:43:26.015477 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:43:26.015477 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:43:26.015477 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:43:26.015740 master-0 kubenswrapper[7476]: I0320 08:43:26.015506 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:43:27.016778 master-0 kubenswrapper[7476]: I0320 08:43:27.016711 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:43:27.016778 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:43:27.016778 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:43:27.016778 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:43:27.017605 master-0 kubenswrapper[7476]: I0320 08:43:27.016779 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:43:28.017329 master-0 kubenswrapper[7476]: I0320 08:43:28.017221 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:43:28.017329 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:43:28.017329 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:43:28.017329 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:43:28.019082 master-0 kubenswrapper[7476]: I0320 08:43:28.017353 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:43:29.020210 master-0 kubenswrapper[7476]: I0320 08:43:29.018758 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:43:29.020210 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:43:29.020210 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:43:29.020210 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:43:29.020210 master-0 kubenswrapper[7476]: I0320 08:43:29.018881 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:43:29.392912 master-0 kubenswrapper[7476]: I0320 08:43:29.392865 7476 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"]
Mar 20 08:43:29.393649 master-0 kubenswrapper[7476]: I0320 08:43:29.393620 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0"
Mar 20 08:43:29.396458 master-0 kubenswrapper[7476]: I0320 08:43:29.396407 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-r4xv4"
Mar 20 08:43:29.403091 master-0 kubenswrapper[7476]: I0320 08:43:29.403043 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt"
Mar 20 08:43:29.411344 master-0 kubenswrapper[7476]: I0320 08:43:29.411240 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"]
Mar 20 08:43:29.593200 master-0 kubenswrapper[7476]: I0320 08:43:29.593138 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fae0c983-2cb4-4749-97ff-a718a9fb6563-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"fae0c983-2cb4-4749-97ff-a718a9fb6563\") " pod="openshift-kube-controller-manager/installer-3-master-0"
Mar 20 08:43:29.593561 master-0 kubenswrapper[7476]: I0320 08:43:29.593505 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fae0c983-2cb4-4749-97ff-a718a9fb6563-var-lock\") pod \"installer-3-master-0\" (UID: \"fae0c983-2cb4-4749-97ff-a718a9fb6563\") " pod="openshift-kube-controller-manager/installer-3-master-0"
Mar 20 08:43:29.593633 master-0 kubenswrapper[7476]: I0320 08:43:29.593607 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fae0c983-2cb4-4749-97ff-a718a9fb6563-kube-api-access\") pod \"installer-3-master-0\" (UID: \"fae0c983-2cb4-4749-97ff-a718a9fb6563\") " pod="openshift-kube-controller-manager/installer-3-master-0"
Mar 20 08:43:29.698410 master-0 kubenswrapper[7476]: I0320 08:43:29.697243 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fae0c983-2cb4-4749-97ff-a718a9fb6563-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"fae0c983-2cb4-4749-97ff-a718a9fb6563\") " pod="openshift-kube-controller-manager/installer-3-master-0"
Mar 20 08:43:29.698410 master-0 kubenswrapper[7476]: I0320 08:43:29.697401 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fae0c983-2cb4-4749-97ff-a718a9fb6563-var-lock\") pod \"installer-3-master-0\" (UID: \"fae0c983-2cb4-4749-97ff-a718a9fb6563\") " pod="openshift-kube-controller-manager/installer-3-master-0"
Mar 20 08:43:29.698410 master-0 kubenswrapper[7476]: I0320 08:43:29.697455 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fae0c983-2cb4-4749-97ff-a718a9fb6563-kube-api-access\") pod \"installer-3-master-0\" (UID: \"fae0c983-2cb4-4749-97ff-a718a9fb6563\") " pod="openshift-kube-controller-manager/installer-3-master-0"
Mar 20 08:43:29.698410 master-0 kubenswrapper[7476]: I0320 08:43:29.697401 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fae0c983-2cb4-4749-97ff-a718a9fb6563-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"fae0c983-2cb4-4749-97ff-a718a9fb6563\") " pod="openshift-kube-controller-manager/installer-3-master-0"
Mar 20 08:43:29.698410 master-0 kubenswrapper[7476]: I0320 08:43:29.697541 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fae0c983-2cb4-4749-97ff-a718a9fb6563-var-lock\") pod \"installer-3-master-0\" (UID: \"fae0c983-2cb4-4749-97ff-a718a9fb6563\") " pod="openshift-kube-controller-manager/installer-3-master-0"
Mar 20 08:43:29.758350 master-0 kubenswrapper[7476]: I0320 08:43:29.757083 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fae0c983-2cb4-4749-97ff-a718a9fb6563-kube-api-access\") pod \"installer-3-master-0\" (UID: \"fae0c983-2cb4-4749-97ff-a718a9fb6563\") " pod="openshift-kube-controller-manager/installer-3-master-0"
Mar 20 08:43:29.769013 master-0 kubenswrapper[7476]: I0320 08:43:29.768963 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0"
Mar 20 08:43:30.015994 master-0 kubenswrapper[7476]: I0320 08:43:30.015868 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:43:30.015994 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:43:30.015994 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:43:30.015994 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:43:30.016684 master-0 kubenswrapper[7476]: I0320 08:43:30.016376 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:43:30.199644 master-0 kubenswrapper[7476]: I0320 08:43:30.199566 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"]
Mar 20 08:43:30.650781 master-0 kubenswrapper[7476]: I0320 08:43:30.650708 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"fae0c983-2cb4-4749-97ff-a718a9fb6563","Type":"ContainerStarted","Data":"8db9b6351ac69b67c8e87136c1df3fa9a0513a97038d7ea0f58a226f57e933df"}
Mar 20 08:43:30.650781 master-0 kubenswrapper[7476]: I0320 08:43:30.650763 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"fae0c983-2cb4-4749-97ff-a718a9fb6563","Type":"ContainerStarted","Data":"4a544ba88b612fcc7b9a0c05b171f124d77f9977d6164c6ef4949c3839565381"}
Mar 20 08:43:30.675483 master-0 kubenswrapper[7476]: I0320 08:43:30.675409 7476 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-3-master-0" podStartSLOduration=1.675386952 podStartE2EDuration="1.675386952s" podCreationTimestamp="2026-03-20 08:43:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:43:30.672637037 +0000 UTC m=+491.641405613" watchObservedRunningTime="2026-03-20 08:43:30.675386952 +0000 UTC m=+491.644155488"
Mar 20 08:43:31.016495 master-0 kubenswrapper[7476]: I0320 08:43:31.016354 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:43:31.016495 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:43:31.016495 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:43:31.016495 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:43:31.016495 master-0 kubenswrapper[7476]: I0320 08:43:31.016451 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:43:32.016670 master-0 kubenswrapper[7476]: I0320 08:43:32.016575 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:43:32.016670 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:43:32.016670 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:43:32.016670 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:43:32.017491 master-0 kubenswrapper[7476]: I0320 08:43:32.016679 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:43:33.017707 master-0 kubenswrapper[7476]: I0320 08:43:33.017604 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:43:33.017707 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:43:33.017707 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:43:33.017707 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:43:33.017707 master-0 kubenswrapper[7476]: I0320 08:43:33.017702 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:43:34.016897 master-0 kubenswrapper[7476]: I0320 08:43:34.016806 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:43:34.016897 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:43:34.016897 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:43:34.016897 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:43:34.017678 master-0 kubenswrapper[7476]: I0320 08:43:34.016904 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:43:35.017867 master-0 kubenswrapper[7476]: I0320 08:43:35.017782 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:43:35.017867 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:43:35.017867 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:43:35.017867 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:43:35.018837 master-0 kubenswrapper[7476]: I0320 08:43:35.017868 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:43:35.426467 master-0 kubenswrapper[7476]: I0320 08:43:35.426387 7476 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"]
Mar 20 08:43:35.428579 master-0 kubenswrapper[7476]: I0320 08:43:35.428536 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Mar 20 08:43:35.433761 master-0 kubenswrapper[7476]: I0320 08:43:35.432485 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-9xqm8"
Mar 20 08:43:35.435563 master-0 kubenswrapper[7476]: I0320 08:43:35.435488 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Mar 20 08:43:35.449173 master-0 kubenswrapper[7476]: I0320 08:43:35.449097 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"]
Mar 20 08:43:35.496261 master-0 kubenswrapper[7476]: I0320 08:43:35.496182 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/76ccbbad-62cd-4fdd-8a22-3299f9ef3b42-var-lock\") pod \"installer-1-retry-1-master-0\" (UID: \"76ccbbad-62cd-4fdd-8a22-3299f9ef3b42\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Mar 20 08:43:35.496261 master-0 kubenswrapper[7476]: I0320 08:43:35.496254 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/76ccbbad-62cd-4fdd-8a22-3299f9ef3b42-kube-api-access\") pod \"installer-1-retry-1-master-0\" (UID: \"76ccbbad-62cd-4fdd-8a22-3299f9ef3b42\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Mar 20 08:43:35.496677 master-0 kubenswrapper[7476]: I0320 08:43:35.496393 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/76ccbbad-62cd-4fdd-8a22-3299f9ef3b42-kubelet-dir\") pod \"installer-1-retry-1-master-0\" (UID: \"76ccbbad-62cd-4fdd-8a22-3299f9ef3b42\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Mar 20 08:43:35.598034 master-0 kubenswrapper[7476]: I0320 08:43:35.597814 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/76ccbbad-62cd-4fdd-8a22-3299f9ef3b42-var-lock\") pod \"installer-1-retry-1-master-0\" (UID: \"76ccbbad-62cd-4fdd-8a22-3299f9ef3b42\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Mar 20 08:43:35.598034 master-0 kubenswrapper[7476]: I0320 08:43:35.597972 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/76ccbbad-62cd-4fdd-8a22-3299f9ef3b42-var-lock\") pod \"installer-1-retry-1-master-0\" (UID: \"76ccbbad-62cd-4fdd-8a22-3299f9ef3b42\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Mar 20 08:43:35.598386 master-0 kubenswrapper[7476]: I0320 08:43:35.598049 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/76ccbbad-62cd-4fdd-8a22-3299f9ef3b42-kube-api-access\") pod \"installer-1-retry-1-master-0\" (UID: \"76ccbbad-62cd-4fdd-8a22-3299f9ef3b42\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Mar 20 08:43:35.598386 master-0 kubenswrapper[7476]: I0320 08:43:35.598135 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/76ccbbad-62cd-4fdd-8a22-3299f9ef3b42-kubelet-dir\") pod \"installer-1-retry-1-master-0\" (UID: \"76ccbbad-62cd-4fdd-8a22-3299f9ef3b42\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Mar 20 08:43:35.598386 master-0 kubenswrapper[7476]: I0320 08:43:35.598297 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/76ccbbad-62cd-4fdd-8a22-3299f9ef3b42-kubelet-dir\") pod \"installer-1-retry-1-master-0\" (UID: \"76ccbbad-62cd-4fdd-8a22-3299f9ef3b42\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Mar 20 08:43:35.623637 master-0 kubenswrapper[7476]: I0320 08:43:35.623584 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/76ccbbad-62cd-4fdd-8a22-3299f9ef3b42-kube-api-access\") pod \"installer-1-retry-1-master-0\" (UID: \"76ccbbad-62cd-4fdd-8a22-3299f9ef3b42\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Mar 20 08:43:35.824487 master-0 kubenswrapper[7476]: I0320 08:43:35.824365 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Mar 20 08:43:36.019440 master-0 kubenswrapper[7476]: I0320 08:43:36.018867 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:43:36.019440 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:43:36.019440 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:43:36.019440 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:43:36.019440 master-0 kubenswrapper[7476]: I0320 08:43:36.018959 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:43:36.287221 master-0 kubenswrapper[7476]: I0320 08:43:36.287159 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"]
Mar 20 08:43:36.697216 master-0 kubenswrapper[7476]: I0320 08:43:36.697125 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"76ccbbad-62cd-4fdd-8a22-3299f9ef3b42","Type":"ContainerStarted","Data":"f4c9d1f2d90b8520a4431bf4cc51a64c153b1c2c33c18a8c1ac5f565c0a2c390"}
Mar 20 08:43:36.697216 master-0 kubenswrapper[7476]: I0320 08:43:36.697204 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"76ccbbad-62cd-4fdd-8a22-3299f9ef3b42","Type":"ContainerStarted","Data":"9eac94494062f535be8b293b2f2113ec6d14445294c7b479b20d6d3d901dd5ca"}
Mar 20 08:43:36.734327 master-0 kubenswrapper[7476]: I0320 08:43:36.729333 7476 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" podStartSLOduration=1.729302397 podStartE2EDuration="1.729302397s" podCreationTimestamp="2026-03-20 08:43:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:43:36.720832524 +0000 UTC m=+497.689601070" watchObservedRunningTime="2026-03-20 08:43:36.729302397 +0000 UTC m=+497.698070953"
Mar 20 08:43:37.018201 master-0 kubenswrapper[7476]: I0320 08:43:37.017880 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:43:37.018201 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:43:37.018201 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:43:37.018201 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:43:37.018201 master-0 kubenswrapper[7476]: I0320 08:43:37.017963 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:43:38.023524 master-0 kubenswrapper[7476]: I0320 08:43:38.023446 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:43:38.023524 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:43:38.023524 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:43:38.023524 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:43:38.024457 master-0 kubenswrapper[7476]: I0320 08:43:38.023554 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:43:39.016904 master-0 kubenswrapper[7476]: I0320 08:43:39.016804 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:43:39.016904 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:43:39.016904 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:43:39.016904 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:43:39.016904 master-0 kubenswrapper[7476]: I0320 08:43:39.016886 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:43:40.017189 master-0 kubenswrapper[7476]: I0320 08:43:40.017091 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:43:40.017189 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:43:40.017189 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:43:40.017189 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:43:40.017189 master-0 kubenswrapper[7476]: I0320 08:43:40.017177 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:43:41.017561 master-0 kubenswrapper[7476]: I0320 08:43:41.017485 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:43:41.017561 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:43:41.017561 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:43:41.017561 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:43:41.017561 master-0 kubenswrapper[7476]: I0320 08:43:41.017559 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:43:41.416741 master-0 kubenswrapper[7476]: I0320 08:43:41.416657 7476 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"]
Mar 20 08:43:41.417073 master-0 kubenswrapper[7476]: I0320 08:43:41.416923 7476 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" podUID="76ccbbad-62cd-4fdd-8a22-3299f9ef3b42" containerName="installer" containerID="cri-o://f4c9d1f2d90b8520a4431bf4cc51a64c153b1c2c33c18a8c1ac5f565c0a2c390" gracePeriod=30
Mar 20 08:43:41.741801 master-0 kubenswrapper[7476]: I0320 08:43:41.741619 7476 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"]
Mar 20 08:43:41.743356 master-0 kubenswrapper[7476]: I0320 08:43:41.743309 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0"
Mar 20 08:43:41.748931 master-0 kubenswrapper[7476]: I0320 08:43:41.748864 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-6mj58"
Mar 20 08:43:41.752641 master-0 kubenswrapper[7476]: I0320 08:43:41.752575 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt"
Mar 20 08:43:41.767789 master-0 kubenswrapper[7476]: I0320 08:43:41.762002 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"]
Mar 20 08:43:41.797983 master-0 kubenswrapper[7476]: I0320 08:43:41.797916 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/92600726-933f-41eb-a329-1fcc68dc95c1-var-lock\") pod \"installer-4-master-0\" (UID: \"92600726-933f-41eb-a329-1fcc68dc95c1\") " pod="openshift-kube-scheduler/installer-4-master-0"
Mar 20 08:43:41.798231 master-0 kubenswrapper[7476]: I0320 08:43:41.798113 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/92600726-933f-41eb-a329-1fcc68dc95c1-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"92600726-933f-41eb-a329-1fcc68dc95c1\") " pod="openshift-kube-scheduler/installer-4-master-0"
Mar 20 08:43:41.798399 master-0 kubenswrapper[7476]: I0320 08:43:41.798344 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/92600726-933f-41eb-a329-1fcc68dc95c1-kube-api-access\") pod \"installer-4-master-0\" (UID: \"92600726-933f-41eb-a329-1fcc68dc95c1\") " pod="openshift-kube-scheduler/installer-4-master-0"
Mar 20 08:43:41.899429 master-0 kubenswrapper[7476]: I0320 08:43:41.899359 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/92600726-933f-41eb-a329-1fcc68dc95c1-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"92600726-933f-41eb-a329-1fcc68dc95c1\") " pod="openshift-kube-scheduler/installer-4-master-0"
Mar 20 08:43:41.899699 master-0 kubenswrapper[7476]: I0320 08:43:41.899580 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/92600726-933f-41eb-a329-1fcc68dc95c1-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"92600726-933f-41eb-a329-1fcc68dc95c1\") " pod="openshift-kube-scheduler/installer-4-master-0"
Mar 20 08:43:41.899783 master-0 kubenswrapper[7476]: I0320 08:43:41.899699 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/92600726-933f-41eb-a329-1fcc68dc95c1-kube-api-access\") pod \"installer-4-master-0\" (UID: \"92600726-933f-41eb-a329-1fcc68dc95c1\") " pod="openshift-kube-scheduler/installer-4-master-0"
Mar 20 08:43:41.899889 master-0 kubenswrapper[7476]: I0320 08:43:41.899859 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/92600726-933f-41eb-a329-1fcc68dc95c1-var-lock\") pod \"installer-4-master-0\" (UID: \"92600726-933f-41eb-a329-1fcc68dc95c1\") " pod="openshift-kube-scheduler/installer-4-master-0"
Mar 20 08:43:41.900032 master-0 kubenswrapper[7476]: I0320 08:43:41.899988 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/92600726-933f-41eb-a329-1fcc68dc95c1-var-lock\") pod \"installer-4-master-0\" (UID: \"92600726-933f-41eb-a329-1fcc68dc95c1\") " pod="openshift-kube-scheduler/installer-4-master-0"
Mar 20 08:43:41.932738 master-0 kubenswrapper[7476]: I0320 08:43:41.932661 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/92600726-933f-41eb-a329-1fcc68dc95c1-kube-api-access\") pod \"installer-4-master-0\" (UID: \"92600726-933f-41eb-a329-1fcc68dc95c1\") " pod="openshift-kube-scheduler/installer-4-master-0"
Mar 20 08:43:42.017451 master-0 kubenswrapper[7476]: I0320 08:43:42.017284 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:43:42.017451 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:43:42.017451 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:43:42.017451 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:43:42.017451 master-0 kubenswrapper[7476]: I0320 08:43:42.017396 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:43:42.076930 master-0 kubenswrapper[7476]: I0320 08:43:42.076849 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0"
Mar 20 08:43:42.532565 master-0 kubenswrapper[7476]: I0320 08:43:42.532467 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"]
Mar 20 08:43:42.542690 master-0 kubenswrapper[7476]: W0320 08:43:42.542603 7476 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod92600726_933f_41eb_a329_1fcc68dc95c1.slice/crio-63ca98d427169faf58092c01f84942dafda71aa92ee3b32d26fc1e746d40ac75 WatchSource:0}: Error finding container 63ca98d427169faf58092c01f84942dafda71aa92ee3b32d26fc1e746d40ac75: Status 404 returned error can't find the container with id 63ca98d427169faf58092c01f84942dafda71aa92ee3b32d26fc1e746d40ac75
Mar 20 08:43:42.751637 master-0 kubenswrapper[7476]: I0320 08:43:42.751591 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"92600726-933f-41eb-a329-1fcc68dc95c1","Type":"ContainerStarted","Data":"63ca98d427169faf58092c01f84942dafda71aa92ee3b32d26fc1e746d40ac75"}
Mar 20 08:43:43.017582 master-0 kubenswrapper[7476]: I0320 08:43:43.017468 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:43:43.017582 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:43:43.017582 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:43:43.017582 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:43:43.018780 master-0 kubenswrapper[7476]: I0320 08:43:43.017586 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:43:43.762591 master-0 kubenswrapper[7476]: I0320 08:43:43.762488 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"92600726-933f-41eb-a329-1fcc68dc95c1","Type":"ContainerStarted","Data":"37909af3090055d773495c88ec18992da7d8fea5935c4a6afb5893aaa0a777f4"} Mar 20 08:43:43.792596 master-0 kubenswrapper[7476]: I0320 08:43:43.792504 7476 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-4-master-0" podStartSLOduration=2.79247377 podStartE2EDuration="2.79247377s" podCreationTimestamp="2026-03-20 08:43:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:43:43.786714482 +0000 UTC m=+504.755483088" watchObservedRunningTime="2026-03-20 08:43:43.79247377 +0000 UTC m=+504.761242336" Mar 20 08:43:44.018921 master-0 kubenswrapper[7476]: I0320 08:43:44.017714 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:43:44.018921 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:43:44.018921 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:43:44.018921 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:43:44.020313 master-0 kubenswrapper[7476]: I0320 08:43:44.019490 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:43:45.017831 master-0 kubenswrapper[7476]: I0320 08:43:45.017751 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:43:45.017831 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:43:45.017831 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:43:45.017831 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:43:45.018258 master-0 kubenswrapper[7476]: I0320 08:43:45.017851 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:43:46.017808 master-0 kubenswrapper[7476]: I0320 08:43:46.017719 7476 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Mar 20 08:43:46.020477 master-0 kubenswrapper[7476]: I0320 08:43:46.020422 7476 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Mar 20 08:43:46.020705 master-0 kubenswrapper[7476]: I0320 08:43:46.020648 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:43:46.020705 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:43:46.020705 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:43:46.020705 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:43:46.020897 master-0 kubenswrapper[7476]: I0320 08:43:46.020746 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:43:46.052220 master-0 kubenswrapper[7476]: I0320 08:43:46.052127 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Mar 20 08:43:46.078065 master-0 kubenswrapper[7476]: I0320 08:43:46.077960 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5cdd5ac8-4c2e-4680-b697-0e5d94136fe4-var-lock\") pod \"installer-2-master-0\" (UID: \"5cdd5ac8-4c2e-4680-b697-0e5d94136fe4\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 20 08:43:46.078306 master-0 kubenswrapper[7476]: I0320 08:43:46.078135 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5cdd5ac8-4c2e-4680-b697-0e5d94136fe4-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"5cdd5ac8-4c2e-4680-b697-0e5d94136fe4\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 20 08:43:46.078580 
master-0 kubenswrapper[7476]: I0320 08:43:46.078534 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5cdd5ac8-4c2e-4680-b697-0e5d94136fe4-kube-api-access\") pod \"installer-2-master-0\" (UID: \"5cdd5ac8-4c2e-4680-b697-0e5d94136fe4\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 20 08:43:46.179907 master-0 kubenswrapper[7476]: I0320 08:43:46.179821 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5cdd5ac8-4c2e-4680-b697-0e5d94136fe4-var-lock\") pod \"installer-2-master-0\" (UID: \"5cdd5ac8-4c2e-4680-b697-0e5d94136fe4\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 20 08:43:46.179907 master-0 kubenswrapper[7476]: I0320 08:43:46.179871 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5cdd5ac8-4c2e-4680-b697-0e5d94136fe4-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"5cdd5ac8-4c2e-4680-b697-0e5d94136fe4\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 20 08:43:46.179907 master-0 kubenswrapper[7476]: I0320 08:43:46.179933 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5cdd5ac8-4c2e-4680-b697-0e5d94136fe4-kube-api-access\") pod \"installer-2-master-0\" (UID: \"5cdd5ac8-4c2e-4680-b697-0e5d94136fe4\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 20 08:43:46.180396 master-0 kubenswrapper[7476]: I0320 08:43:46.179968 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5cdd5ac8-4c2e-4680-b697-0e5d94136fe4-var-lock\") pod \"installer-2-master-0\" (UID: \"5cdd5ac8-4c2e-4680-b697-0e5d94136fe4\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 20 08:43:46.180396 master-0 
kubenswrapper[7476]: I0320 08:43:46.180125 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5cdd5ac8-4c2e-4680-b697-0e5d94136fe4-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"5cdd5ac8-4c2e-4680-b697-0e5d94136fe4\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 20 08:43:46.205670 master-0 kubenswrapper[7476]: I0320 08:43:46.205596 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5cdd5ac8-4c2e-4680-b697-0e5d94136fe4-kube-api-access\") pod \"installer-2-master-0\" (UID: \"5cdd5ac8-4c2e-4680-b697-0e5d94136fe4\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 20 08:43:46.338775 master-0 kubenswrapper[7476]: I0320 08:43:46.338511 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Mar 20 08:43:46.828092 master-0 kubenswrapper[7476]: I0320 08:43:46.828000 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Mar 20 08:43:46.839453 master-0 kubenswrapper[7476]: W0320 08:43:46.838714 7476 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod5cdd5ac8_4c2e_4680_b697_0e5d94136fe4.slice/crio-4b7c354bc63790dd2c841c517050b35e106b034733574a9ee401496eb49f2861 WatchSource:0}: Error finding container 4b7c354bc63790dd2c841c517050b35e106b034733574a9ee401496eb49f2861: Status 404 returned error can't find the container with id 4b7c354bc63790dd2c841c517050b35e106b034733574a9ee401496eb49f2861 Mar 20 08:43:47.018414 master-0 kubenswrapper[7476]: I0320 08:43:47.018373 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:43:47.018414 
master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:43:47.018414 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:43:47.018414 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:43:47.019024 master-0 kubenswrapper[7476]: I0320 08:43:47.018998 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:43:47.800675 master-0 kubenswrapper[7476]: I0320 08:43:47.800548 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"5cdd5ac8-4c2e-4680-b697-0e5d94136fe4","Type":"ContainerStarted","Data":"6431ba0942f1d93ec67e79edabc01c308dcb065395ccf7185622d3bd7f0075b2"} Mar 20 08:43:47.800675 master-0 kubenswrapper[7476]: I0320 08:43:47.800679 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"5cdd5ac8-4c2e-4680-b697-0e5d94136fe4","Type":"ContainerStarted","Data":"4b7c354bc63790dd2c841c517050b35e106b034733574a9ee401496eb49f2861"} Mar 20 08:43:47.842246 master-0 kubenswrapper[7476]: I0320 08:43:47.842144 7476 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-2-master-0" podStartSLOduration=1.8421184369999999 podStartE2EDuration="1.842118437s" podCreationTimestamp="2026-03-20 08:43:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:43:47.836010279 +0000 UTC m=+508.804778875" watchObservedRunningTime="2026-03-20 08:43:47.842118437 +0000 UTC m=+508.810887003" Mar 20 08:43:48.018401 master-0 kubenswrapper[7476]: I0320 08:43:48.017892 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: 
Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:43:48.018401 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:43:48.018401 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:43:48.018401 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:43:48.018401 master-0 kubenswrapper[7476]: I0320 08:43:48.017977 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:43:48.302163 master-0 kubenswrapper[7476]: I0320 08:43:48.302060 7476 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-etcd/etcd-master-0"] Mar 20 08:43:48.302804 master-0 kubenswrapper[7476]: I0320 08:43:48.302735 7476 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcdctl" containerID="cri-o://668a17a21ef830c75ee12517d74904eb699d91932778e6cfd270cf2b0075af6f" gracePeriod=30 Mar 20 08:43:48.303046 master-0 kubenswrapper[7476]: I0320 08:43:48.302999 7476 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-rev" containerID="cri-o://58caa7d15ae6082156585b7e036c779bf8e89c9934e0695d43cf5c702779563f" gracePeriod=30 Mar 20 08:43:48.303173 master-0 kubenswrapper[7476]: I0320 08:43:48.303118 7476 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-readyz" containerID="cri-o://32f6d3c75c93ecf10d755e44791fef93b055aa12cdff83cca0ff4f63521e98b7" gracePeriod=30 Mar 20 08:43:48.303367 master-0 kubenswrapper[7476]: I0320 
08:43:48.303302 7476 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-metrics" containerID="cri-o://61a779fd3c3897cc8dee022902187bf9b51c40a268a410f54bb84ceb49e71673" gracePeriod=30 Mar 20 08:43:48.303466 master-0 kubenswrapper[7476]: I0320 08:43:48.303418 7476 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd" containerID="cri-o://67caee0d33495fcfc194dc1da9f4a0c77009c74c70f8b4c698eb30cef8cc0052" gracePeriod=30 Mar 20 08:43:48.306966 master-0 kubenswrapper[7476]: I0320 08:43:48.306891 7476 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-master-0"] Mar 20 08:43:48.307495 master-0 kubenswrapper[7476]: E0320 08:43:48.307431 7476 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcdctl" Mar 20 08:43:48.307495 master-0 kubenswrapper[7476]: I0320 08:43:48.307476 7476 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcdctl" Mar 20 08:43:48.307751 master-0 kubenswrapper[7476]: E0320 08:43:48.307510 7476 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd" Mar 20 08:43:48.307751 master-0 kubenswrapper[7476]: I0320 08:43:48.307527 7476 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd" Mar 20 08:43:48.307751 master-0 kubenswrapper[7476]: E0320 08:43:48.307548 7476 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-resources-copy" Mar 20 08:43:48.307751 master-0 kubenswrapper[7476]: I0320 08:43:48.307564 7476 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b4ed170d527099878cb5fdd508a2fb" 
containerName="etcd-resources-copy" Mar 20 08:43:48.307751 master-0 kubenswrapper[7476]: E0320 08:43:48.307590 7476 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-readyz" Mar 20 08:43:48.307751 master-0 kubenswrapper[7476]: I0320 08:43:48.307605 7476 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-readyz" Mar 20 08:43:48.307751 master-0 kubenswrapper[7476]: E0320 08:43:48.307639 7476 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-rev" Mar 20 08:43:48.307751 master-0 kubenswrapper[7476]: I0320 08:43:48.307654 7476 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-rev" Mar 20 08:43:48.307751 master-0 kubenswrapper[7476]: E0320 08:43:48.307678 7476 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="setup" Mar 20 08:43:48.307751 master-0 kubenswrapper[7476]: I0320 08:43:48.307693 7476 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="setup" Mar 20 08:43:48.307751 master-0 kubenswrapper[7476]: E0320 08:43:48.307718 7476 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-ensure-env-vars" Mar 20 08:43:48.307751 master-0 kubenswrapper[7476]: I0320 08:43:48.307733 7476 state_mem.go:107] "Deleted CPUSet assignment" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-ensure-env-vars" Mar 20 08:43:48.307751 master-0 kubenswrapper[7476]: E0320 08:43:48.307759 7476 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-metrics" Mar 20 08:43:48.307751 master-0 kubenswrapper[7476]: I0320 08:43:48.307774 7476 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-metrics" Mar 20 08:43:48.308968 master-0 kubenswrapper[7476]: I0320 08:43:48.308065 7476 memory_manager.go:354] "RemoveStaleState removing state" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcdctl" Mar 20 08:43:48.308968 master-0 kubenswrapper[7476]: I0320 08:43:48.308092 7476 memory_manager.go:354] "RemoveStaleState removing state" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-rev" Mar 20 08:43:48.308968 master-0 kubenswrapper[7476]: I0320 08:43:48.308110 7476 memory_manager.go:354] "RemoveStaleState removing state" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-readyz" Mar 20 08:43:48.308968 master-0 kubenswrapper[7476]: I0320 08:43:48.308128 7476 memory_manager.go:354] "RemoveStaleState removing state" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd" Mar 20 08:43:48.308968 master-0 kubenswrapper[7476]: I0320 08:43:48.308186 7476 memory_manager.go:354] "RemoveStaleState removing state" podUID="24b4ed170d527099878cb5fdd508a2fb" containerName="etcd-metrics" Mar 20 08:43:48.431957 master-0 kubenswrapper[7476]: I0320 08:43:48.431911 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-data-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 20 08:43:48.432075 master-0 kubenswrapper[7476]: I0320 08:43:48.431984 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-static-pod-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 20 08:43:48.432314 master-0 kubenswrapper[7476]: I0320 08:43:48.432238 7476 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-cert-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 20 08:43:48.432442 master-0 kubenswrapper[7476]: I0320 08:43:48.432416 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-usr-local-bin\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 20 08:43:48.432568 master-0 kubenswrapper[7476]: I0320 08:43:48.432541 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-log-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 20 08:43:48.432622 master-0 kubenswrapper[7476]: I0320 08:43:48.432595 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-resource-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 20 08:43:48.534089 master-0 kubenswrapper[7476]: I0320 08:43:48.534020 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-resource-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 20 08:43:48.534089 master-0 kubenswrapper[7476]: I0320 08:43:48.534080 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: 
\"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-log-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 20 08:43:48.534343 master-0 kubenswrapper[7476]: I0320 08:43:48.534143 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-data-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 20 08:43:48.534343 master-0 kubenswrapper[7476]: I0320 08:43:48.534162 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-static-pod-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 20 08:43:48.534343 master-0 kubenswrapper[7476]: I0320 08:43:48.534184 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-cert-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 20 08:43:48.534343 master-0 kubenswrapper[7476]: I0320 08:43:48.534209 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-usr-local-bin\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 20 08:43:48.534343 master-0 kubenswrapper[7476]: I0320 08:43:48.534320 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-usr-local-bin\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 20 08:43:48.534639 
master-0 kubenswrapper[7476]: I0320 08:43:48.534358 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-resource-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 20 08:43:48.534639 master-0 kubenswrapper[7476]: I0320 08:43:48.534453 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-data-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 20 08:43:48.534639 master-0 kubenswrapper[7476]: I0320 08:43:48.534611 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-static-pod-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 20 08:43:48.535128 master-0 kubenswrapper[7476]: I0320 08:43:48.534675 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-cert-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 20 08:43:48.535128 master-0 kubenswrapper[7476]: I0320 08:43:48.534711 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-log-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 20 08:43:48.813129 master-0 kubenswrapper[7476]: I0320 08:43:48.813045 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-rev/0.log" Mar 20 08:43:48.814793 master-0 
kubenswrapper[7476]: I0320 08:43:48.814737 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-metrics/0.log" Mar 20 08:43:48.818884 master-0 kubenswrapper[7476]: I0320 08:43:48.818813 7476 generic.go:334] "Generic (PLEG): container finished" podID="24b4ed170d527099878cb5fdd508a2fb" containerID="58caa7d15ae6082156585b7e036c779bf8e89c9934e0695d43cf5c702779563f" exitCode=2 Mar 20 08:43:48.818884 master-0 kubenswrapper[7476]: I0320 08:43:48.818875 7476 generic.go:334] "Generic (PLEG): container finished" podID="24b4ed170d527099878cb5fdd508a2fb" containerID="32f6d3c75c93ecf10d755e44791fef93b055aa12cdff83cca0ff4f63521e98b7" exitCode=0 Mar 20 08:43:48.819039 master-0 kubenswrapper[7476]: I0320 08:43:48.818897 7476 generic.go:334] "Generic (PLEG): container finished" podID="24b4ed170d527099878cb5fdd508a2fb" containerID="61a779fd3c3897cc8dee022902187bf9b51c40a268a410f54bb84ceb49e71673" exitCode=2 Mar 20 08:43:49.017806 master-0 kubenswrapper[7476]: I0320 08:43:49.017695 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:43:49.017806 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:43:49.017806 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:43:49.017806 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:43:49.018316 master-0 kubenswrapper[7476]: I0320 08:43:49.017806 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:43:50.017989 master-0 kubenswrapper[7476]: I0320 08:43:50.017910 7476 patch_prober.go:28] interesting 
pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:43:50.017989 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:43:50.017989 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:43:50.017989 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:43:50.018836 master-0 kubenswrapper[7476]: I0320 08:43:50.018004 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:43:51.018051 master-0 kubenswrapper[7476]: I0320 08:43:51.017902 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:43:51.018051 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:43:51.018051 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:43:51.018051 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:43:51.018051 master-0 kubenswrapper[7476]: I0320 08:43:51.018043 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:43:52.017861 master-0 kubenswrapper[7476]: I0320 08:43:52.017748 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 
08:43:52.017861 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:43:52.017861 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:43:52.017861 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:43:52.019075 master-0 kubenswrapper[7476]: I0320 08:43:52.017842 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:43:53.016712 master-0 kubenswrapper[7476]: I0320 08:43:53.016644 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:43:53.016712 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:43:53.016712 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:43:53.016712 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:43:53.017647 master-0 kubenswrapper[7476]: I0320 08:43:53.017465 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:43:54.016891 master-0 kubenswrapper[7476]: I0320 08:43:54.016800 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:43:54.016891 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:43:54.016891 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:43:54.016891 master-0 kubenswrapper[7476]: healthz 
check failed Mar 20 08:43:54.018046 master-0 kubenswrapper[7476]: I0320 08:43:54.016923 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:43:55.016937 master-0 kubenswrapper[7476]: I0320 08:43:55.016841 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:43:55.016937 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:43:55.016937 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:43:55.016937 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:43:55.016937 master-0 kubenswrapper[7476]: I0320 08:43:55.016943 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:43:56.018558 master-0 kubenswrapper[7476]: I0320 08:43:56.018438 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:43:56.018558 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:43:56.018558 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:43:56.018558 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:43:56.018558 master-0 kubenswrapper[7476]: I0320 08:43:56.018539 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" 
podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:43:57.017039 master-0 kubenswrapper[7476]: I0320 08:43:57.016970 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:43:57.017039 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:43:57.017039 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:43:57.017039 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:43:57.017634 master-0 kubenswrapper[7476]: I0320 08:43:57.017587 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:43:58.016788 master-0 kubenswrapper[7476]: I0320 08:43:58.016722 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:43:58.016788 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:43:58.016788 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:43:58.016788 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:43:58.017972 master-0 kubenswrapper[7476]: I0320 08:43:58.017439 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:43:58.562430 master-0 kubenswrapper[7476]: E0320 08:43:58.562327 7476 
controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 20 08:43:59.017627 master-0 kubenswrapper[7476]: I0320 08:43:59.017542 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:43:59.017627 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:43:59.017627 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:43:59.017627 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:43:59.018728 master-0 kubenswrapper[7476]: I0320 08:43:59.017630 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:44:00.017520 master-0 kubenswrapper[7476]: I0320 08:44:00.017422 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:44:00.017520 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:44:00.017520 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:44:00.017520 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:44:00.017520 master-0 kubenswrapper[7476]: I0320 08:44:00.017514 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Mar 20 08:44:00.018529 master-0 kubenswrapper[7476]: I0320 08:44:00.017582 7476 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" Mar 20 08:44:00.018600 master-0 kubenswrapper[7476]: I0320 08:44:00.018568 7476 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"f6fca13f29777c3e581624d7e050cfe207017354f2d3e38a35c450e9f709ea25"} pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" containerMessage="Container router failed startup probe, will be restarted" Mar 20 08:44:00.018697 master-0 kubenswrapper[7476]: I0320 08:44:00.018656 7476 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" containerID="cri-o://f6fca13f29777c3e581624d7e050cfe207017354f2d3e38a35c450e9f709ea25" gracePeriod=3600 Mar 20 08:44:01.940744 master-0 kubenswrapper[7476]: I0320 08:44:01.940666 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_8c753d068f364b16e3aeb8396b7d9f33/kube-controller-manager/0.log" Mar 20 08:44:01.941628 master-0 kubenswrapper[7476]: I0320 08:44:01.940770 7476 generic.go:334] "Generic (PLEG): container finished" podID="8c753d068f364b16e3aeb8396b7d9f33" containerID="c247335e00ed8c2dbcd12d654ce2b480b9f5ed4f4f141fa661641d6980a82c7e" exitCode=1 Mar 20 08:44:01.941628 master-0 kubenswrapper[7476]: I0320 08:44:01.940826 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"8c753d068f364b16e3aeb8396b7d9f33","Type":"ContainerDied","Data":"c247335e00ed8c2dbcd12d654ce2b480b9f5ed4f4f141fa661641d6980a82c7e"} Mar 20 08:44:01.941910 master-0 kubenswrapper[7476]: I0320 08:44:01.941765 7476 
scope.go:117] "RemoveContainer" containerID="c247335e00ed8c2dbcd12d654ce2b480b9f5ed4f4f141fa661641d6980a82c7e" Mar 20 08:44:02.315466 master-0 kubenswrapper[7476]: E0320 08:44:02.315345 7476 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T08:43:52Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T08:43:52Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T08:43:52Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T08:43:52Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 20 08:44:02.951519 master-0 kubenswrapper[7476]: I0320 08:44:02.951415 7476 generic.go:334] "Generic (PLEG): container finished" podID="26923e70-56a5-4020-8b55-510879ec6fd4" containerID="4efa2d7ff0f9f10f26d4d217feeb2ea6ecccefb675bc71c18faa7c5fe6db33c6" exitCode=0 Mar 20 08:44:02.952390 master-0 kubenswrapper[7476]: I0320 08:44:02.951526 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" 
event={"ID":"26923e70-56a5-4020-8b55-510879ec6fd4","Type":"ContainerDied","Data":"4efa2d7ff0f9f10f26d4d217feeb2ea6ecccefb675bc71c18faa7c5fe6db33c6"} Mar 20 08:44:02.957109 master-0 kubenswrapper[7476]: I0320 08:44:02.957046 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_8c753d068f364b16e3aeb8396b7d9f33/kube-controller-manager/0.log" Mar 20 08:44:02.957335 master-0 kubenswrapper[7476]: I0320 08:44:02.957113 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"8c753d068f364b16e3aeb8396b7d9f33","Type":"ContainerStarted","Data":"f63f62f8a7067ea3a407516779664a4cf9407bbbc72a0354b7960b7a56a2eeb2"} Mar 20 08:44:03.779639 master-0 kubenswrapper[7476]: I0320 08:44:03.779557 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 20 08:44:03.780221 master-0 kubenswrapper[7476]: I0320 08:44:03.779726 7476 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 20 08:44:03.787511 master-0 kubenswrapper[7476]: I0320 08:44:03.787439 7476 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 20 08:44:04.318122 master-0 kubenswrapper[7476]: I0320 08:44:04.317944 7476 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-2-master-0" Mar 20 08:44:04.453055 master-0 kubenswrapper[7476]: I0320 08:44:04.452985 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/26923e70-56a5-4020-8b55-510879ec6fd4-var-lock\") pod \"26923e70-56a5-4020-8b55-510879ec6fd4\" (UID: \"26923e70-56a5-4020-8b55-510879ec6fd4\") " Mar 20 08:44:04.453209 master-0 kubenswrapper[7476]: I0320 08:44:04.453145 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26923e70-56a5-4020-8b55-510879ec6fd4-var-lock" (OuterVolumeSpecName: "var-lock") pod "26923e70-56a5-4020-8b55-510879ec6fd4" (UID: "26923e70-56a5-4020-8b55-510879ec6fd4"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:44:04.453406 master-0 kubenswrapper[7476]: I0320 08:44:04.453363 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/26923e70-56a5-4020-8b55-510879ec6fd4-kube-api-access\") pod \"26923e70-56a5-4020-8b55-510879ec6fd4\" (UID: \"26923e70-56a5-4020-8b55-510879ec6fd4\") " Mar 20 08:44:04.453506 master-0 kubenswrapper[7476]: I0320 08:44:04.453427 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/26923e70-56a5-4020-8b55-510879ec6fd4-kubelet-dir\") pod \"26923e70-56a5-4020-8b55-510879ec6fd4\" (UID: \"26923e70-56a5-4020-8b55-510879ec6fd4\") " Mar 20 08:44:04.453728 master-0 kubenswrapper[7476]: I0320 08:44:04.453668 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/26923e70-56a5-4020-8b55-510879ec6fd4-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "26923e70-56a5-4020-8b55-510879ec6fd4" (UID: "26923e70-56a5-4020-8b55-510879ec6fd4"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:44:04.454158 master-0 kubenswrapper[7476]: I0320 08:44:04.454112 7476 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/26923e70-56a5-4020-8b55-510879ec6fd4-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 20 08:44:04.454158 master-0 kubenswrapper[7476]: I0320 08:44:04.454151 7476 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/26923e70-56a5-4020-8b55-510879ec6fd4-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 20 08:44:04.458175 master-0 kubenswrapper[7476]: I0320 08:44:04.458047 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26923e70-56a5-4020-8b55-510879ec6fd4-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "26923e70-56a5-4020-8b55-510879ec6fd4" (UID: "26923e70-56a5-4020-8b55-510879ec6fd4"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:44:04.555436 master-0 kubenswrapper[7476]: I0320 08:44:04.555367 7476 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/26923e70-56a5-4020-8b55-510879ec6fd4-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 20 08:44:04.975227 master-0 kubenswrapper[7476]: I0320 08:44:04.975132 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"26923e70-56a5-4020-8b55-510879ec6fd4","Type":"ContainerDied","Data":"c78227a4a3db86dc69334917f189dbfb156f17531ec0c958d73bd5cb930242bc"} Mar 20 08:44:04.975227 master-0 kubenswrapper[7476]: I0320 08:44:04.975190 7476 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-2-master-0" Mar 20 08:44:04.975227 master-0 kubenswrapper[7476]: I0320 08:44:04.975213 7476 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c78227a4a3db86dc69334917f189dbfb156f17531ec0c958d73bd5cb930242bc" Mar 20 08:44:04.978298 master-0 kubenswrapper[7476]: I0320 08:44:04.978219 7476 generic.go:334] "Generic (PLEG): container finished" podID="c83737980b9ee109184b1d78e942cf36" containerID="2af5fddc5d2a375dc416488e9df9292dbf88621bcffa837acf0f758641cfece0" exitCode=1 Mar 20 08:44:04.978432 master-0 kubenswrapper[7476]: I0320 08:44:04.978338 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"c83737980b9ee109184b1d78e942cf36","Type":"ContainerDied","Data":"2af5fddc5d2a375dc416488e9df9292dbf88621bcffa837acf0f758641cfece0"} Mar 20 08:44:04.978525 master-0 kubenswrapper[7476]: I0320 08:44:04.978465 7476 scope.go:117] "RemoveContainer" containerID="f3cf6c6c759bc79e0c49a7c2679b7d5ff1593a53a6783b3355ac6464233ad33d" Mar 20 08:44:04.979218 master-0 kubenswrapper[7476]: I0320 08:44:04.979161 7476 scope.go:117] "RemoveContainer" containerID="2af5fddc5d2a375dc416488e9df9292dbf88621bcffa837acf0f758641cfece0" Mar 20 08:44:04.979485 master-0 kubenswrapper[7476]: E0320 08:44:04.979439 7476 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-scheduler pod=bootstrap-kube-scheduler-master-0_kube-system(c83737980b9ee109184b1d78e942cf36)\"" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID="c83737980b9ee109184b1d78e942cf36" Mar 20 08:44:07.837850 master-0 kubenswrapper[7476]: I0320 08:44:07.837798 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-retry-1-master-0_76ccbbad-62cd-4fdd-8a22-3299f9ef3b42/installer/0.log" Mar 20 08:44:07.838648 master-0 
kubenswrapper[7476]: I0320 08:44:07.837873 7476 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 20 08:44:08.006437 master-0 kubenswrapper[7476]: I0320 08:44:08.006358 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-retry-1-master-0_76ccbbad-62cd-4fdd-8a22-3299f9ef3b42/installer/0.log" Mar 20 08:44:08.006719 master-0 kubenswrapper[7476]: I0320 08:44:08.006464 7476 generic.go:334] "Generic (PLEG): container finished" podID="76ccbbad-62cd-4fdd-8a22-3299f9ef3b42" containerID="f4c9d1f2d90b8520a4431bf4cc51a64c153b1c2c33c18a8c1ac5f565c0a2c390" exitCode=1 Mar 20 08:44:08.006719 master-0 kubenswrapper[7476]: I0320 08:44:08.006526 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"76ccbbad-62cd-4fdd-8a22-3299f9ef3b42","Type":"ContainerDied","Data":"f4c9d1f2d90b8520a4431bf4cc51a64c153b1c2c33c18a8c1ac5f565c0a2c390"} Mar 20 08:44:08.006719 master-0 kubenswrapper[7476]: I0320 08:44:08.006556 7476 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 20 08:44:08.006719 master-0 kubenswrapper[7476]: I0320 08:44:08.006582 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"76ccbbad-62cd-4fdd-8a22-3299f9ef3b42","Type":"ContainerDied","Data":"9eac94494062f535be8b293b2f2113ec6d14445294c7b479b20d6d3d901dd5ca"} Mar 20 08:44:08.007047 master-0 kubenswrapper[7476]: I0320 08:44:08.006708 7476 scope.go:117] "RemoveContainer" containerID="f4c9d1f2d90b8520a4431bf4cc51a64c153b1c2c33c18a8c1ac5f565c0a2c390" Mar 20 08:44:08.007381 master-0 kubenswrapper[7476]: I0320 08:44:08.007311 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/76ccbbad-62cd-4fdd-8a22-3299f9ef3b42-var-lock\") pod \"76ccbbad-62cd-4fdd-8a22-3299f9ef3b42\" (UID: \"76ccbbad-62cd-4fdd-8a22-3299f9ef3b42\") " Mar 20 08:44:08.007494 master-0 kubenswrapper[7476]: I0320 08:44:08.007459 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/76ccbbad-62cd-4fdd-8a22-3299f9ef3b42-var-lock" (OuterVolumeSpecName: "var-lock") pod "76ccbbad-62cd-4fdd-8a22-3299f9ef3b42" (UID: "76ccbbad-62cd-4fdd-8a22-3299f9ef3b42"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:44:08.007623 master-0 kubenswrapper[7476]: I0320 08:44:08.007584 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/76ccbbad-62cd-4fdd-8a22-3299f9ef3b42-kube-api-access\") pod \"76ccbbad-62cd-4fdd-8a22-3299f9ef3b42\" (UID: \"76ccbbad-62cd-4fdd-8a22-3299f9ef3b42\") " Mar 20 08:44:08.007732 master-0 kubenswrapper[7476]: I0320 08:44:08.007663 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/76ccbbad-62cd-4fdd-8a22-3299f9ef3b42-kubelet-dir\") pod \"76ccbbad-62cd-4fdd-8a22-3299f9ef3b42\" (UID: \"76ccbbad-62cd-4fdd-8a22-3299f9ef3b42\") " Mar 20 08:44:08.008696 master-0 kubenswrapper[7476]: I0320 08:44:08.008017 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/76ccbbad-62cd-4fdd-8a22-3299f9ef3b42-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "76ccbbad-62cd-4fdd-8a22-3299f9ef3b42" (UID: "76ccbbad-62cd-4fdd-8a22-3299f9ef3b42"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:44:08.008696 master-0 kubenswrapper[7476]: I0320 08:44:08.008145 7476 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/76ccbbad-62cd-4fdd-8a22-3299f9ef3b42-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 20 08:44:08.008696 master-0 kubenswrapper[7476]: I0320 08:44:08.008173 7476 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/76ccbbad-62cd-4fdd-8a22-3299f9ef3b42-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 20 08:44:08.012876 master-0 kubenswrapper[7476]: I0320 08:44:08.012817 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76ccbbad-62cd-4fdd-8a22-3299f9ef3b42-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "76ccbbad-62cd-4fdd-8a22-3299f9ef3b42" (UID: "76ccbbad-62cd-4fdd-8a22-3299f9ef3b42"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:44:08.061505 master-0 kubenswrapper[7476]: I0320 08:44:08.061433 7476 scope.go:117] "RemoveContainer" containerID="f4c9d1f2d90b8520a4431bf4cc51a64c153b1c2c33c18a8c1ac5f565c0a2c390" Mar 20 08:44:08.062886 master-0 kubenswrapper[7476]: E0320 08:44:08.062830 7476 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4c9d1f2d90b8520a4431bf4cc51a64c153b1c2c33c18a8c1ac5f565c0a2c390\": container with ID starting with f4c9d1f2d90b8520a4431bf4cc51a64c153b1c2c33c18a8c1ac5f565c0a2c390 not found: ID does not exist" containerID="f4c9d1f2d90b8520a4431bf4cc51a64c153b1c2c33c18a8c1ac5f565c0a2c390" Mar 20 08:44:08.063001 master-0 kubenswrapper[7476]: I0320 08:44:08.062889 7476 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4c9d1f2d90b8520a4431bf4cc51a64c153b1c2c33c18a8c1ac5f565c0a2c390"} err="failed to get container status \"f4c9d1f2d90b8520a4431bf4cc51a64c153b1c2c33c18a8c1ac5f565c0a2c390\": rpc error: code = NotFound desc = could not find container \"f4c9d1f2d90b8520a4431bf4cc51a64c153b1c2c33c18a8c1ac5f565c0a2c390\": container with ID starting with f4c9d1f2d90b8520a4431bf4cc51a64c153b1c2c33c18a8c1ac5f565c0a2c390 not found: ID does not exist" Mar 20 08:44:08.109398 master-0 kubenswrapper[7476]: I0320 08:44:08.109328 7476 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/76ccbbad-62cd-4fdd-8a22-3299f9ef3b42-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 20 08:44:08.563348 master-0 kubenswrapper[7476]: E0320 08:44:08.563043 7476 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 20 08:44:12.316310 master-0 
kubenswrapper[7476]: E0320 08:44:12.316229 7476 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 20 08:44:13.783907 master-0 kubenswrapper[7476]: I0320 08:44:13.783840 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 20 08:44:16.077537 master-0 kubenswrapper[7476]: I0320 08:44:16.077464 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-3-master-0_fae0c983-2cb4-4749-97ff-a718a9fb6563/installer/0.log" Mar 20 08:44:16.077537 master-0 kubenswrapper[7476]: I0320 08:44:16.077538 7476 generic.go:334] "Generic (PLEG): container finished" podID="fae0c983-2cb4-4749-97ff-a718a9fb6563" containerID="8db9b6351ac69b67c8e87136c1df3fa9a0513a97038d7ea0f58a226f57e933df" exitCode=1 Mar 20 08:44:16.078564 master-0 kubenswrapper[7476]: I0320 08:44:16.077575 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"fae0c983-2cb4-4749-97ff-a718a9fb6563","Type":"ContainerDied","Data":"8db9b6351ac69b67c8e87136c1df3fa9a0513a97038d7ea0f58a226f57e933df"} Mar 20 08:44:16.236971 master-0 kubenswrapper[7476]: I0320 08:44:16.236880 7476 scope.go:117] "RemoveContainer" containerID="2af5fddc5d2a375dc416488e9df9292dbf88621bcffa837acf0f758641cfece0" Mar 20 08:44:17.084170 master-0 kubenswrapper[7476]: I0320 08:44:17.084040 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"c83737980b9ee109184b1d78e942cf36","Type":"ContainerStarted","Data":"44e6488658001ec197750deb888ad4cc53ef741359268344dae6149df1e9b900"} Mar 20 08:44:17.409551 master-0 kubenswrapper[7476]: I0320 08:44:17.409498 7476 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-3-master-0_fae0c983-2cb4-4749-97ff-a718a9fb6563/installer/0.log" Mar 20 08:44:17.409760 master-0 kubenswrapper[7476]: I0320 08:44:17.409594 7476 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Mar 20 08:44:17.587066 master-0 kubenswrapper[7476]: I0320 08:44:17.586977 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fae0c983-2cb4-4749-97ff-a718a9fb6563-kube-api-access\") pod \"fae0c983-2cb4-4749-97ff-a718a9fb6563\" (UID: \"fae0c983-2cb4-4749-97ff-a718a9fb6563\") " Mar 20 08:44:17.587347 master-0 kubenswrapper[7476]: I0320 08:44:17.587110 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fae0c983-2cb4-4749-97ff-a718a9fb6563-kubelet-dir\") pod \"fae0c983-2cb4-4749-97ff-a718a9fb6563\" (UID: \"fae0c983-2cb4-4749-97ff-a718a9fb6563\") " Mar 20 08:44:17.587552 master-0 kubenswrapper[7476]: I0320 08:44:17.587481 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fae0c983-2cb4-4749-97ff-a718a9fb6563-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "fae0c983-2cb4-4749-97ff-a718a9fb6563" (UID: "fae0c983-2cb4-4749-97ff-a718a9fb6563"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:44:17.587594 master-0 kubenswrapper[7476]: I0320 08:44:17.587511 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fae0c983-2cb4-4749-97ff-a718a9fb6563-var-lock\") pod \"fae0c983-2cb4-4749-97ff-a718a9fb6563\" (UID: \"fae0c983-2cb4-4749-97ff-a718a9fb6563\") " Mar 20 08:44:17.587626 master-0 kubenswrapper[7476]: I0320 08:44:17.587596 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fae0c983-2cb4-4749-97ff-a718a9fb6563-var-lock" (OuterVolumeSpecName: "var-lock") pod "fae0c983-2cb4-4749-97ff-a718a9fb6563" (UID: "fae0c983-2cb4-4749-97ff-a718a9fb6563"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:44:17.588000 master-0 kubenswrapper[7476]: I0320 08:44:17.587922 7476 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fae0c983-2cb4-4749-97ff-a718a9fb6563-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 20 08:44:17.588000 master-0 kubenswrapper[7476]: I0320 08:44:17.587982 7476 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fae0c983-2cb4-4749-97ff-a718a9fb6563-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 20 08:44:17.591365 master-0 kubenswrapper[7476]: I0320 08:44:17.591293 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fae0c983-2cb4-4749-97ff-a718a9fb6563-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "fae0c983-2cb4-4749-97ff-a718a9fb6563" (UID: "fae0c983-2cb4-4749-97ff-a718a9fb6563"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:44:17.688970 master-0 kubenswrapper[7476]: I0320 08:44:17.688833 7476 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fae0c983-2cb4-4749-97ff-a718a9fb6563-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 20 08:44:18.099228 master-0 kubenswrapper[7476]: I0320 08:44:18.099160 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-3-master-0_fae0c983-2cb4-4749-97ff-a718a9fb6563/installer/0.log" Mar 20 08:44:18.100076 master-0 kubenswrapper[7476]: I0320 08:44:18.099291 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"fae0c983-2cb4-4749-97ff-a718a9fb6563","Type":"ContainerDied","Data":"4a544ba88b612fcc7b9a0c05b171f124d77f9977d6164c6ef4949c3839565381"} Mar 20 08:44:18.100076 master-0 kubenswrapper[7476]: I0320 08:44:18.099345 7476 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a544ba88b612fcc7b9a0c05b171f124d77f9977d6164c6ef4949c3839565381" Mar 20 08:44:18.100076 master-0 kubenswrapper[7476]: I0320 08:44:18.099396 7476 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Mar 20 08:44:18.564511 master-0 kubenswrapper[7476]: E0320 08:44:18.564385 7476 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 20 08:44:18.912656 master-0 kubenswrapper[7476]: I0320 08:44:18.912576 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-rev/0.log" Mar 20 08:44:18.914436 master-0 kubenswrapper[7476]: I0320 08:44:18.914386 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-metrics/0.log" Mar 20 08:44:18.915780 master-0 kubenswrapper[7476]: I0320 08:44:18.915725 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd/0.log" Mar 20 08:44:18.916573 master-0 kubenswrapper[7476]: I0320 08:44:18.916521 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcdctl/0.log" Mar 20 08:44:18.918488 master-0 kubenswrapper[7476]: I0320 08:44:18.918445 7476 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0" Mar 20 08:44:19.110285 master-0 kubenswrapper[7476]: I0320 08:44:19.110199 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-log-dir\") pod \"24b4ed170d527099878cb5fdd508a2fb\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " Mar 20 08:44:19.110785 master-0 kubenswrapper[7476]: I0320 08:44:19.110323 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-log-dir" (OuterVolumeSpecName: "log-dir") pod "24b4ed170d527099878cb5fdd508a2fb" (UID: "24b4ed170d527099878cb5fdd508a2fb"). InnerVolumeSpecName "log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:44:19.110785 master-0 kubenswrapper[7476]: I0320 08:44:19.110443 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-data-dir\") pod \"24b4ed170d527099878cb5fdd508a2fb\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " Mar 20 08:44:19.110785 master-0 kubenswrapper[7476]: I0320 08:44:19.110477 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-data-dir" (OuterVolumeSpecName: "data-dir") pod "24b4ed170d527099878cb5fdd508a2fb" (UID: "24b4ed170d527099878cb5fdd508a2fb"). InnerVolumeSpecName "data-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:44:19.110785 master-0 kubenswrapper[7476]: I0320 08:44:19.110522 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-static-pod-dir\") pod \"24b4ed170d527099878cb5fdd508a2fb\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " Mar 20 08:44:19.110785 master-0 kubenswrapper[7476]: I0320 08:44:19.110223 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-rev/0.log" Mar 20 08:44:19.110785 master-0 kubenswrapper[7476]: I0320 08:44:19.110576 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-resource-dir\") pod \"24b4ed170d527099878cb5fdd508a2fb\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " Mar 20 08:44:19.110785 master-0 kubenswrapper[7476]: I0320 08:44:19.110611 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-static-pod-dir" (OuterVolumeSpecName: "static-pod-dir") pod "24b4ed170d527099878cb5fdd508a2fb" (UID: "24b4ed170d527099878cb5fdd508a2fb"). InnerVolumeSpecName "static-pod-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:44:19.110785 master-0 kubenswrapper[7476]: I0320 08:44:19.110631 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-usr-local-bin\") pod \"24b4ed170d527099878cb5fdd508a2fb\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " Mar 20 08:44:19.110785 master-0 kubenswrapper[7476]: I0320 08:44:19.110650 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "24b4ed170d527099878cb5fdd508a2fb" (UID: "24b4ed170d527099878cb5fdd508a2fb"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:44:19.110785 master-0 kubenswrapper[7476]: I0320 08:44:19.110688 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-cert-dir\") pod \"24b4ed170d527099878cb5fdd508a2fb\" (UID: \"24b4ed170d527099878cb5fdd508a2fb\") " Mar 20 08:44:19.110785 master-0 kubenswrapper[7476]: I0320 08:44:19.110708 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-usr-local-bin" (OuterVolumeSpecName: "usr-local-bin") pod "24b4ed170d527099878cb5fdd508a2fb" (UID: "24b4ed170d527099878cb5fdd508a2fb"). InnerVolumeSpecName "usr-local-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:44:19.111108 master-0 kubenswrapper[7476]: I0320 08:44:19.110889 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "24b4ed170d527099878cb5fdd508a2fb" (UID: "24b4ed170d527099878cb5fdd508a2fb"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:44:19.111108 master-0 kubenswrapper[7476]: I0320 08:44:19.111047 7476 reconciler_common.go:293] "Volume detached for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-log-dir\") on node \"master-0\" DevicePath \"\"" Mar 20 08:44:19.111108 master-0 kubenswrapper[7476]: I0320 08:44:19.111081 7476 reconciler_common.go:293] "Volume detached for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-data-dir\") on node \"master-0\" DevicePath \"\"" Mar 20 08:44:19.111193 master-0 kubenswrapper[7476]: I0320 08:44:19.111107 7476 reconciler_common.go:293] "Volume detached for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-static-pod-dir\") on node \"master-0\" DevicePath \"\"" Mar 20 08:44:19.111193 master-0 kubenswrapper[7476]: I0320 08:44:19.111133 7476 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 20 08:44:19.111193 master-0 kubenswrapper[7476]: I0320 08:44:19.111157 7476 reconciler_common.go:293] "Volume detached for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-usr-local-bin\") on node \"master-0\" DevicePath \"\"" Mar 20 08:44:19.111193 master-0 kubenswrapper[7476]: I0320 08:44:19.111180 7476 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/24b4ed170d527099878cb5fdd508a2fb-cert-dir\") on node \"master-0\" DevicePath \"\"" Mar 20 08:44:19.112398 master-0 kubenswrapper[7476]: I0320 08:44:19.112378 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd-metrics/0.log" Mar 20 08:44:19.115418 master-0 kubenswrapper[7476]: I0320 08:44:19.115383 7476 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcd/0.log" Mar 20 08:44:19.116572 master-0 kubenswrapper[7476]: I0320 08:44:19.116524 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_24b4ed170d527099878cb5fdd508a2fb/etcdctl/0.log" Mar 20 08:44:19.117883 master-0 kubenswrapper[7476]: I0320 08:44:19.117847 7476 generic.go:334] "Generic (PLEG): container finished" podID="24b4ed170d527099878cb5fdd508a2fb" containerID="67caee0d33495fcfc194dc1da9f4a0c77009c74c70f8b4c698eb30cef8cc0052" exitCode=137 Mar 20 08:44:19.117985 master-0 kubenswrapper[7476]: I0320 08:44:19.117885 7476 generic.go:334] "Generic (PLEG): container finished" podID="24b4ed170d527099878cb5fdd508a2fb" containerID="668a17a21ef830c75ee12517d74904eb699d91932778e6cfd270cf2b0075af6f" exitCode=137 Mar 20 08:44:19.117985 master-0 kubenswrapper[7476]: I0320 08:44:19.117932 7476 scope.go:117] "RemoveContainer" containerID="58caa7d15ae6082156585b7e036c779bf8e89c9934e0695d43cf5c702779563f" Mar 20 08:44:19.118111 master-0 kubenswrapper[7476]: I0320 08:44:19.118022 7476 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0" Mar 20 08:44:19.137513 master-0 kubenswrapper[7476]: I0320 08:44:19.137484 7476 scope.go:117] "RemoveContainer" containerID="32f6d3c75c93ecf10d755e44791fef93b055aa12cdff83cca0ff4f63521e98b7" Mar 20 08:44:19.157928 master-0 kubenswrapper[7476]: I0320 08:44:19.157859 7476 scope.go:117] "RemoveContainer" containerID="61a779fd3c3897cc8dee022902187bf9b51c40a268a410f54bb84ceb49e71673" Mar 20 08:44:19.175489 master-0 kubenswrapper[7476]: I0320 08:44:19.175441 7476 scope.go:117] "RemoveContainer" containerID="67caee0d33495fcfc194dc1da9f4a0c77009c74c70f8b4c698eb30cef8cc0052" Mar 20 08:44:19.196813 master-0 kubenswrapper[7476]: I0320 08:44:19.196763 7476 scope.go:117] "RemoveContainer" containerID="668a17a21ef830c75ee12517d74904eb699d91932778e6cfd270cf2b0075af6f" Mar 20 08:44:19.209889 master-0 kubenswrapper[7476]: I0320 08:44:19.209835 7476 scope.go:117] "RemoveContainer" containerID="dc65d45ac7f72e25688bf45d7e0e4891e399fd8f027d9c0a3d91c82a95e984c1" Mar 20 08:44:19.227105 master-0 kubenswrapper[7476]: I0320 08:44:19.227066 7476 scope.go:117] "RemoveContainer" containerID="6c2ec290e5bf4c2d0a79a355f4de266d5b608b0ae16bb86874d457ca7101270f" Mar 20 08:44:19.244141 master-0 kubenswrapper[7476]: I0320 08:44:19.244106 7476 scope.go:117] "RemoveContainer" containerID="4d3e3631671c7e4cfe196fcf50b353e4e7c3754bbfee55dfea772836be73bf30" Mar 20 08:44:19.250652 master-0 kubenswrapper[7476]: I0320 08:44:19.250583 7476 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24b4ed170d527099878cb5fdd508a2fb" path="/var/lib/kubelet/pods/24b4ed170d527099878cb5fdd508a2fb/volumes" Mar 20 08:44:19.260823 master-0 kubenswrapper[7476]: I0320 08:44:19.260766 7476 scope.go:117] "RemoveContainer" containerID="58caa7d15ae6082156585b7e036c779bf8e89c9934e0695d43cf5c702779563f" Mar 20 08:44:19.261388 master-0 kubenswrapper[7476]: E0320 08:44:19.261335 7476 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code 
= NotFound desc = could not find container \"58caa7d15ae6082156585b7e036c779bf8e89c9934e0695d43cf5c702779563f\": container with ID starting with 58caa7d15ae6082156585b7e036c779bf8e89c9934e0695d43cf5c702779563f not found: ID does not exist" containerID="58caa7d15ae6082156585b7e036c779bf8e89c9934e0695d43cf5c702779563f" Mar 20 08:44:19.261459 master-0 kubenswrapper[7476]: I0320 08:44:19.261397 7476 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58caa7d15ae6082156585b7e036c779bf8e89c9934e0695d43cf5c702779563f"} err="failed to get container status \"58caa7d15ae6082156585b7e036c779bf8e89c9934e0695d43cf5c702779563f\": rpc error: code = NotFound desc = could not find container \"58caa7d15ae6082156585b7e036c779bf8e89c9934e0695d43cf5c702779563f\": container with ID starting with 58caa7d15ae6082156585b7e036c779bf8e89c9934e0695d43cf5c702779563f not found: ID does not exist" Mar 20 08:44:19.261459 master-0 kubenswrapper[7476]: I0320 08:44:19.261431 7476 scope.go:117] "RemoveContainer" containerID="32f6d3c75c93ecf10d755e44791fef93b055aa12cdff83cca0ff4f63521e98b7" Mar 20 08:44:19.261780 master-0 kubenswrapper[7476]: E0320 08:44:19.261729 7476 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32f6d3c75c93ecf10d755e44791fef93b055aa12cdff83cca0ff4f63521e98b7\": container with ID starting with 32f6d3c75c93ecf10d755e44791fef93b055aa12cdff83cca0ff4f63521e98b7 not found: ID does not exist" containerID="32f6d3c75c93ecf10d755e44791fef93b055aa12cdff83cca0ff4f63521e98b7" Mar 20 08:44:19.261829 master-0 kubenswrapper[7476]: I0320 08:44:19.261784 7476 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32f6d3c75c93ecf10d755e44791fef93b055aa12cdff83cca0ff4f63521e98b7"} err="failed to get container status \"32f6d3c75c93ecf10d755e44791fef93b055aa12cdff83cca0ff4f63521e98b7\": rpc error: code = NotFound desc = could not find container 
\"32f6d3c75c93ecf10d755e44791fef93b055aa12cdff83cca0ff4f63521e98b7\": container with ID starting with 32f6d3c75c93ecf10d755e44791fef93b055aa12cdff83cca0ff4f63521e98b7 not found: ID does not exist" Mar 20 08:44:19.261882 master-0 kubenswrapper[7476]: I0320 08:44:19.261827 7476 scope.go:117] "RemoveContainer" containerID="61a779fd3c3897cc8dee022902187bf9b51c40a268a410f54bb84ceb49e71673" Mar 20 08:44:19.262253 master-0 kubenswrapper[7476]: E0320 08:44:19.262206 7476 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61a779fd3c3897cc8dee022902187bf9b51c40a268a410f54bb84ceb49e71673\": container with ID starting with 61a779fd3c3897cc8dee022902187bf9b51c40a268a410f54bb84ceb49e71673 not found: ID does not exist" containerID="61a779fd3c3897cc8dee022902187bf9b51c40a268a410f54bb84ceb49e71673" Mar 20 08:44:19.262342 master-0 kubenswrapper[7476]: I0320 08:44:19.262254 7476 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61a779fd3c3897cc8dee022902187bf9b51c40a268a410f54bb84ceb49e71673"} err="failed to get container status \"61a779fd3c3897cc8dee022902187bf9b51c40a268a410f54bb84ceb49e71673\": rpc error: code = NotFound desc = could not find container \"61a779fd3c3897cc8dee022902187bf9b51c40a268a410f54bb84ceb49e71673\": container with ID starting with 61a779fd3c3897cc8dee022902187bf9b51c40a268a410f54bb84ceb49e71673 not found: ID does not exist" Mar 20 08:44:19.262342 master-0 kubenswrapper[7476]: I0320 08:44:19.262328 7476 scope.go:117] "RemoveContainer" containerID="67caee0d33495fcfc194dc1da9f4a0c77009c74c70f8b4c698eb30cef8cc0052" Mar 20 08:44:19.262762 master-0 kubenswrapper[7476]: E0320 08:44:19.262710 7476 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67caee0d33495fcfc194dc1da9f4a0c77009c74c70f8b4c698eb30cef8cc0052\": container with ID starting with 
67caee0d33495fcfc194dc1da9f4a0c77009c74c70f8b4c698eb30cef8cc0052 not found: ID does not exist" containerID="67caee0d33495fcfc194dc1da9f4a0c77009c74c70f8b4c698eb30cef8cc0052" Mar 20 08:44:19.262819 master-0 kubenswrapper[7476]: I0320 08:44:19.262755 7476 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67caee0d33495fcfc194dc1da9f4a0c77009c74c70f8b4c698eb30cef8cc0052"} err="failed to get container status \"67caee0d33495fcfc194dc1da9f4a0c77009c74c70f8b4c698eb30cef8cc0052\": rpc error: code = NotFound desc = could not find container \"67caee0d33495fcfc194dc1da9f4a0c77009c74c70f8b4c698eb30cef8cc0052\": container with ID starting with 67caee0d33495fcfc194dc1da9f4a0c77009c74c70f8b4c698eb30cef8cc0052 not found: ID does not exist" Mar 20 08:44:19.262819 master-0 kubenswrapper[7476]: I0320 08:44:19.262783 7476 scope.go:117] "RemoveContainer" containerID="668a17a21ef830c75ee12517d74904eb699d91932778e6cfd270cf2b0075af6f" Mar 20 08:44:19.263236 master-0 kubenswrapper[7476]: E0320 08:44:19.263149 7476 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"668a17a21ef830c75ee12517d74904eb699d91932778e6cfd270cf2b0075af6f\": container with ID starting with 668a17a21ef830c75ee12517d74904eb699d91932778e6cfd270cf2b0075af6f not found: ID does not exist" containerID="668a17a21ef830c75ee12517d74904eb699d91932778e6cfd270cf2b0075af6f" Mar 20 08:44:19.263323 master-0 kubenswrapper[7476]: I0320 08:44:19.263240 7476 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"668a17a21ef830c75ee12517d74904eb699d91932778e6cfd270cf2b0075af6f"} err="failed to get container status \"668a17a21ef830c75ee12517d74904eb699d91932778e6cfd270cf2b0075af6f\": rpc error: code = NotFound desc = could not find container \"668a17a21ef830c75ee12517d74904eb699d91932778e6cfd270cf2b0075af6f\": container with ID starting with 
668a17a21ef830c75ee12517d74904eb699d91932778e6cfd270cf2b0075af6f not found: ID does not exist" Mar 20 08:44:19.263323 master-0 kubenswrapper[7476]: I0320 08:44:19.263288 7476 scope.go:117] "RemoveContainer" containerID="dc65d45ac7f72e25688bf45d7e0e4891e399fd8f027d9c0a3d91c82a95e984c1" Mar 20 08:44:19.263715 master-0 kubenswrapper[7476]: E0320 08:44:19.263669 7476 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc65d45ac7f72e25688bf45d7e0e4891e399fd8f027d9c0a3d91c82a95e984c1\": container with ID starting with dc65d45ac7f72e25688bf45d7e0e4891e399fd8f027d9c0a3d91c82a95e984c1 not found: ID does not exist" containerID="dc65d45ac7f72e25688bf45d7e0e4891e399fd8f027d9c0a3d91c82a95e984c1" Mar 20 08:44:19.263775 master-0 kubenswrapper[7476]: I0320 08:44:19.263709 7476 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc65d45ac7f72e25688bf45d7e0e4891e399fd8f027d9c0a3d91c82a95e984c1"} err="failed to get container status \"dc65d45ac7f72e25688bf45d7e0e4891e399fd8f027d9c0a3d91c82a95e984c1\": rpc error: code = NotFound desc = could not find container \"dc65d45ac7f72e25688bf45d7e0e4891e399fd8f027d9c0a3d91c82a95e984c1\": container with ID starting with dc65d45ac7f72e25688bf45d7e0e4891e399fd8f027d9c0a3d91c82a95e984c1 not found: ID does not exist" Mar 20 08:44:19.263775 master-0 kubenswrapper[7476]: I0320 08:44:19.263734 7476 scope.go:117] "RemoveContainer" containerID="6c2ec290e5bf4c2d0a79a355f4de266d5b608b0ae16bb86874d457ca7101270f" Mar 20 08:44:19.264105 master-0 kubenswrapper[7476]: E0320 08:44:19.264061 7476 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c2ec290e5bf4c2d0a79a355f4de266d5b608b0ae16bb86874d457ca7101270f\": container with ID starting with 6c2ec290e5bf4c2d0a79a355f4de266d5b608b0ae16bb86874d457ca7101270f not found: ID does not exist" 
containerID="6c2ec290e5bf4c2d0a79a355f4de266d5b608b0ae16bb86874d457ca7101270f" Mar 20 08:44:19.264157 master-0 kubenswrapper[7476]: I0320 08:44:19.264101 7476 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c2ec290e5bf4c2d0a79a355f4de266d5b608b0ae16bb86874d457ca7101270f"} err="failed to get container status \"6c2ec290e5bf4c2d0a79a355f4de266d5b608b0ae16bb86874d457ca7101270f\": rpc error: code = NotFound desc = could not find container \"6c2ec290e5bf4c2d0a79a355f4de266d5b608b0ae16bb86874d457ca7101270f\": container with ID starting with 6c2ec290e5bf4c2d0a79a355f4de266d5b608b0ae16bb86874d457ca7101270f not found: ID does not exist" Mar 20 08:44:19.264157 master-0 kubenswrapper[7476]: I0320 08:44:19.264128 7476 scope.go:117] "RemoveContainer" containerID="4d3e3631671c7e4cfe196fcf50b353e4e7c3754bbfee55dfea772836be73bf30" Mar 20 08:44:19.264435 master-0 kubenswrapper[7476]: E0320 08:44:19.264385 7476 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d3e3631671c7e4cfe196fcf50b353e4e7c3754bbfee55dfea772836be73bf30\": container with ID starting with 4d3e3631671c7e4cfe196fcf50b353e4e7c3754bbfee55dfea772836be73bf30 not found: ID does not exist" containerID="4d3e3631671c7e4cfe196fcf50b353e4e7c3754bbfee55dfea772836be73bf30" Mar 20 08:44:19.264508 master-0 kubenswrapper[7476]: I0320 08:44:19.264437 7476 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d3e3631671c7e4cfe196fcf50b353e4e7c3754bbfee55dfea772836be73bf30"} err="failed to get container status \"4d3e3631671c7e4cfe196fcf50b353e4e7c3754bbfee55dfea772836be73bf30\": rpc error: code = NotFound desc = could not find container \"4d3e3631671c7e4cfe196fcf50b353e4e7c3754bbfee55dfea772836be73bf30\": container with ID starting with 4d3e3631671c7e4cfe196fcf50b353e4e7c3754bbfee55dfea772836be73bf30 not found: ID does not exist" Mar 20 08:44:19.264508 master-0 
kubenswrapper[7476]: I0320 08:44:19.264470 7476 scope.go:117] "RemoveContainer" containerID="58caa7d15ae6082156585b7e036c779bf8e89c9934e0695d43cf5c702779563f" Mar 20 08:44:19.264867 master-0 kubenswrapper[7476]: I0320 08:44:19.264820 7476 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58caa7d15ae6082156585b7e036c779bf8e89c9934e0695d43cf5c702779563f"} err="failed to get container status \"58caa7d15ae6082156585b7e036c779bf8e89c9934e0695d43cf5c702779563f\": rpc error: code = NotFound desc = could not find container \"58caa7d15ae6082156585b7e036c779bf8e89c9934e0695d43cf5c702779563f\": container with ID starting with 58caa7d15ae6082156585b7e036c779bf8e89c9934e0695d43cf5c702779563f not found: ID does not exist" Mar 20 08:44:19.264867 master-0 kubenswrapper[7476]: I0320 08:44:19.264858 7476 scope.go:117] "RemoveContainer" containerID="32f6d3c75c93ecf10d755e44791fef93b055aa12cdff83cca0ff4f63521e98b7" Mar 20 08:44:19.265214 master-0 kubenswrapper[7476]: I0320 08:44:19.265169 7476 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32f6d3c75c93ecf10d755e44791fef93b055aa12cdff83cca0ff4f63521e98b7"} err="failed to get container status \"32f6d3c75c93ecf10d755e44791fef93b055aa12cdff83cca0ff4f63521e98b7\": rpc error: code = NotFound desc = could not find container \"32f6d3c75c93ecf10d755e44791fef93b055aa12cdff83cca0ff4f63521e98b7\": container with ID starting with 32f6d3c75c93ecf10d755e44791fef93b055aa12cdff83cca0ff4f63521e98b7 not found: ID does not exist" Mar 20 08:44:19.265214 master-0 kubenswrapper[7476]: I0320 08:44:19.265205 7476 scope.go:117] "RemoveContainer" containerID="61a779fd3c3897cc8dee022902187bf9b51c40a268a410f54bb84ceb49e71673" Mar 20 08:44:19.265546 master-0 kubenswrapper[7476]: I0320 08:44:19.265482 7476 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61a779fd3c3897cc8dee022902187bf9b51c40a268a410f54bb84ceb49e71673"} 
err="failed to get container status \"61a779fd3c3897cc8dee022902187bf9b51c40a268a410f54bb84ceb49e71673\": rpc error: code = NotFound desc = could not find container \"61a779fd3c3897cc8dee022902187bf9b51c40a268a410f54bb84ceb49e71673\": container with ID starting with 61a779fd3c3897cc8dee022902187bf9b51c40a268a410f54bb84ceb49e71673 not found: ID does not exist" Mar 20 08:44:19.265603 master-0 kubenswrapper[7476]: I0320 08:44:19.265541 7476 scope.go:117] "RemoveContainer" containerID="67caee0d33495fcfc194dc1da9f4a0c77009c74c70f8b4c698eb30cef8cc0052" Mar 20 08:44:19.265832 master-0 kubenswrapper[7476]: I0320 08:44:19.265786 7476 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67caee0d33495fcfc194dc1da9f4a0c77009c74c70f8b4c698eb30cef8cc0052"} err="failed to get container status \"67caee0d33495fcfc194dc1da9f4a0c77009c74c70f8b4c698eb30cef8cc0052\": rpc error: code = NotFound desc = could not find container \"67caee0d33495fcfc194dc1da9f4a0c77009c74c70f8b4c698eb30cef8cc0052\": container with ID starting with 67caee0d33495fcfc194dc1da9f4a0c77009c74c70f8b4c698eb30cef8cc0052 not found: ID does not exist" Mar 20 08:44:19.265890 master-0 kubenswrapper[7476]: I0320 08:44:19.265828 7476 scope.go:117] "RemoveContainer" containerID="668a17a21ef830c75ee12517d74904eb699d91932778e6cfd270cf2b0075af6f" Mar 20 08:44:19.266192 master-0 kubenswrapper[7476]: I0320 08:44:19.266149 7476 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"668a17a21ef830c75ee12517d74904eb699d91932778e6cfd270cf2b0075af6f"} err="failed to get container status \"668a17a21ef830c75ee12517d74904eb699d91932778e6cfd270cf2b0075af6f\": rpc error: code = NotFound desc = could not find container \"668a17a21ef830c75ee12517d74904eb699d91932778e6cfd270cf2b0075af6f\": container with ID starting with 668a17a21ef830c75ee12517d74904eb699d91932778e6cfd270cf2b0075af6f not found: ID does not exist" Mar 20 08:44:19.266192 master-0 
kubenswrapper[7476]: I0320 08:44:19.266184 7476 scope.go:117] "RemoveContainer" containerID="dc65d45ac7f72e25688bf45d7e0e4891e399fd8f027d9c0a3d91c82a95e984c1" Mar 20 08:44:19.266574 master-0 kubenswrapper[7476]: I0320 08:44:19.266517 7476 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc65d45ac7f72e25688bf45d7e0e4891e399fd8f027d9c0a3d91c82a95e984c1"} err="failed to get container status \"dc65d45ac7f72e25688bf45d7e0e4891e399fd8f027d9c0a3d91c82a95e984c1\": rpc error: code = NotFound desc = could not find container \"dc65d45ac7f72e25688bf45d7e0e4891e399fd8f027d9c0a3d91c82a95e984c1\": container with ID starting with dc65d45ac7f72e25688bf45d7e0e4891e399fd8f027d9c0a3d91c82a95e984c1 not found: ID does not exist" Mar 20 08:44:19.266635 master-0 kubenswrapper[7476]: I0320 08:44:19.266569 7476 scope.go:117] "RemoveContainer" containerID="6c2ec290e5bf4c2d0a79a355f4de266d5b608b0ae16bb86874d457ca7101270f" Mar 20 08:44:19.266947 master-0 kubenswrapper[7476]: I0320 08:44:19.266904 7476 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c2ec290e5bf4c2d0a79a355f4de266d5b608b0ae16bb86874d457ca7101270f"} err="failed to get container status \"6c2ec290e5bf4c2d0a79a355f4de266d5b608b0ae16bb86874d457ca7101270f\": rpc error: code = NotFound desc = could not find container \"6c2ec290e5bf4c2d0a79a355f4de266d5b608b0ae16bb86874d457ca7101270f\": container with ID starting with 6c2ec290e5bf4c2d0a79a355f4de266d5b608b0ae16bb86874d457ca7101270f not found: ID does not exist" Mar 20 08:44:19.267007 master-0 kubenswrapper[7476]: I0320 08:44:19.266941 7476 scope.go:117] "RemoveContainer" containerID="4d3e3631671c7e4cfe196fcf50b353e4e7c3754bbfee55dfea772836be73bf30" Mar 20 08:44:19.267340 master-0 kubenswrapper[7476]: I0320 08:44:19.267295 7476 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d3e3631671c7e4cfe196fcf50b353e4e7c3754bbfee55dfea772836be73bf30"} 
err="failed to get container status \"4d3e3631671c7e4cfe196fcf50b353e4e7c3754bbfee55dfea772836be73bf30\": rpc error: code = NotFound desc = could not find container \"4d3e3631671c7e4cfe196fcf50b353e4e7c3754bbfee55dfea772836be73bf30\": container with ID starting with 4d3e3631671c7e4cfe196fcf50b353e4e7c3754bbfee55dfea772836be73bf30 not found: ID does not exist" Mar 20 08:44:19.578101 master-0 kubenswrapper[7476]: I0320 08:44:19.578031 7476 scope.go:117] "RemoveContainer" containerID="1599b02e47e2ea84fbce4395522bc8e26c32b95a49f745d9bd324ecad71aaa11" Mar 20 08:44:22.317583 master-0 kubenswrapper[7476]: E0320 08:44:22.317441 7476 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 20 08:44:22.334507 master-0 kubenswrapper[7476]: E0320 08:44:22.334127 7476 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{etcd-master-0.189e802bace1c451 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0,UID:24b4ed170d527099878cb5fdd508a2fb,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Killing,Message:Stopping container etcd-rev,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 08:43:48.302980177 +0000 UTC m=+509.271748743,LastTimestamp:2026-03-20 08:43:48.302980177 +0000 UTC m=+509.271748743,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 20 08:44:28.197642 master-0 kubenswrapper[7476]: I0320 08:44:28.197545 7476 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-scheduler_installer-4-master-0_92600726-933f-41eb-a329-1fcc68dc95c1/installer/0.log" Mar 20 08:44:28.198659 master-0 kubenswrapper[7476]: I0320 08:44:28.197686 7476 generic.go:334] "Generic (PLEG): container finished" podID="92600726-933f-41eb-a329-1fcc68dc95c1" containerID="37909af3090055d773495c88ec18992da7d8fea5935c4a6afb5893aaa0a777f4" exitCode=1 Mar 20 08:44:28.198659 master-0 kubenswrapper[7476]: I0320 08:44:28.197763 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"92600726-933f-41eb-a329-1fcc68dc95c1","Type":"ContainerDied","Data":"37909af3090055d773495c88ec18992da7d8fea5935c4a6afb5893aaa0a777f4"} Mar 20 08:44:28.565382 master-0 kubenswrapper[7476]: E0320 08:44:28.565147 7476 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 20 08:44:29.565501 master-0 kubenswrapper[7476]: I0320 08:44:29.565436 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-4-master-0_92600726-933f-41eb-a329-1fcc68dc95c1/installer/0.log" Mar 20 08:44:29.566044 master-0 kubenswrapper[7476]: I0320 08:44:29.565546 7476 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Mar 20 08:44:29.670869 master-0 kubenswrapper[7476]: I0320 08:44:29.670808 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/92600726-933f-41eb-a329-1fcc68dc95c1-kube-api-access\") pod \"92600726-933f-41eb-a329-1fcc68dc95c1\" (UID: \"92600726-933f-41eb-a329-1fcc68dc95c1\") " Mar 20 08:44:29.671125 master-0 kubenswrapper[7476]: I0320 08:44:29.670887 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/92600726-933f-41eb-a329-1fcc68dc95c1-kubelet-dir\") pod \"92600726-933f-41eb-a329-1fcc68dc95c1\" (UID: \"92600726-933f-41eb-a329-1fcc68dc95c1\") " Mar 20 08:44:29.671125 master-0 kubenswrapper[7476]: I0320 08:44:29.670987 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/92600726-933f-41eb-a329-1fcc68dc95c1-var-lock\") pod \"92600726-933f-41eb-a329-1fcc68dc95c1\" (UID: \"92600726-933f-41eb-a329-1fcc68dc95c1\") " Mar 20 08:44:29.671332 master-0 kubenswrapper[7476]: I0320 08:44:29.671166 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92600726-933f-41eb-a329-1fcc68dc95c1-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "92600726-933f-41eb-a329-1fcc68dc95c1" (UID: "92600726-933f-41eb-a329-1fcc68dc95c1"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:44:29.671332 master-0 kubenswrapper[7476]: I0320 08:44:29.671192 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/92600726-933f-41eb-a329-1fcc68dc95c1-var-lock" (OuterVolumeSpecName: "var-lock") pod "92600726-933f-41eb-a329-1fcc68dc95c1" (UID: "92600726-933f-41eb-a329-1fcc68dc95c1"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:44:29.671648 master-0 kubenswrapper[7476]: I0320 08:44:29.671578 7476 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/92600726-933f-41eb-a329-1fcc68dc95c1-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 20 08:44:29.671648 master-0 kubenswrapper[7476]: I0320 08:44:29.671633 7476 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/92600726-933f-41eb-a329-1fcc68dc95c1-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 20 08:44:29.674758 master-0 kubenswrapper[7476]: I0320 08:44:29.674698 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92600726-933f-41eb-a329-1fcc68dc95c1-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "92600726-933f-41eb-a329-1fcc68dc95c1" (UID: "92600726-933f-41eb-a329-1fcc68dc95c1"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:44:29.772946 master-0 kubenswrapper[7476]: I0320 08:44:29.772804 7476 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/92600726-933f-41eb-a329-1fcc68dc95c1-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 20 08:44:30.213034 master-0 kubenswrapper[7476]: I0320 08:44:30.212954 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-4-master-0_92600726-933f-41eb-a329-1fcc68dc95c1/installer/0.log" Mar 20 08:44:30.213382 master-0 kubenswrapper[7476]: I0320 08:44:30.213076 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"92600726-933f-41eb-a329-1fcc68dc95c1","Type":"ContainerDied","Data":"63ca98d427169faf58092c01f84942dafda71aa92ee3b32d26fc1e746d40ac75"} Mar 20 08:44:30.213382 master-0 kubenswrapper[7476]: I0320 08:44:30.213118 7476 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63ca98d427169faf58092c01f84942dafda71aa92ee3b32d26fc1e746d40ac75" Mar 20 08:44:30.213382 master-0 kubenswrapper[7476]: I0320 08:44:30.213163 7476 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Mar 20 08:44:30.236659 master-0 kubenswrapper[7476]: I0320 08:44:30.236514 7476 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0"
Mar 20 08:44:30.257039 master-0 kubenswrapper[7476]: I0320 08:44:30.256976 7476 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="c17bc281-675f-4a69-ba8a-a2476e95c8c8"
Mar 20 08:44:30.257039 master-0 kubenswrapper[7476]: I0320 08:44:30.257026 7476 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="c17bc281-675f-4a69-ba8a-a2476e95c8c8"
Mar 20 08:44:32.318560 master-0 kubenswrapper[7476]: E0320 08:44:32.318478 7476 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 20 08:44:38.566525 master-0 kubenswrapper[7476]: E0320 08:44:38.566401 7476 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 20 08:44:38.566525 master-0 kubenswrapper[7476]: I0320 08:44:38.566516 7476 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Mar 20 08:44:42.319842 master-0 kubenswrapper[7476]: E0320 08:44:42.319749 7476 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 20 08:44:42.319842 master-0 kubenswrapper[7476]: E0320 08:44:42.319811 7476 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Mar 20 08:44:46.360298 master-0 kubenswrapper[7476]: I0320 08:44:46.360193 7476 generic.go:334] "Generic
(PLEG): container finished" podID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerID="f6fca13f29777c3e581624d7e050cfe207017354f2d3e38a35c450e9f709ea25" exitCode=0
Mar 20 08:44:46.360298 master-0 kubenswrapper[7476]: I0320 08:44:46.360241 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" event={"ID":"e89571b2-098c-495b-9b53-c4ebd95296ab","Type":"ContainerDied","Data":"f6fca13f29777c3e581624d7e050cfe207017354f2d3e38a35c450e9f709ea25"}
Mar 20 08:44:46.360298 master-0 kubenswrapper[7476]: I0320 08:44:46.360299 7476 scope.go:117] "RemoveContainer" containerID="9f3b47575a455c1af61754677babc355c6032015c14d444d604fb6bbfbe54a24"
Mar 20 08:44:47.373180 master-0 kubenswrapper[7476]: I0320 08:44:47.373102 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" event={"ID":"e89571b2-098c-495b-9b53-c4ebd95296ab","Type":"ContainerStarted","Data":"11cabf5eb98c82a93ca52ddd09d57840189a4d526c1e14f354cfdcf56ad250fe"}
Mar 20 08:44:48.014545 master-0 kubenswrapper[7476]: I0320 08:44:48.014467 7476 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp"
Mar 20 08:44:48.018451 master-0 kubenswrapper[7476]: I0320 08:44:48.018365 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:44:48.018451 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:44:48.018451 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:44:48.018451 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:44:48.018808 master-0 kubenswrapper[7476]: I0320 08:44:48.018480 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp"
podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:44:48.567484 master-0 kubenswrapper[7476]: E0320 08:44:48.567378 7476 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="200ms"
Mar 20 08:44:49.017553 master-0 kubenswrapper[7476]: I0320 08:44:49.017454 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:44:49.017553 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:44:49.017553 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:44:49.017553 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:44:49.017876 master-0 kubenswrapper[7476]: I0320 08:44:49.017600 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:44:49.244898 master-0 kubenswrapper[7476]: I0320 08:44:49.244849 7476 status_manager.go:851] "Failed to get status for pod" podUID="24b4ed170d527099878cb5fdd508a2fb" pod="openshift-etcd/etcd-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods etcd-master-0)"
Mar 20 08:44:50.019788 master-0 kubenswrapper[7476]: I0320 08:44:50.019709 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed
with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:44:50.019788 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:44:50.019788 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:44:50.019788 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:44:50.020379 master-0 kubenswrapper[7476]: I0320 08:44:50.019799 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:44:51.016658 master-0 kubenswrapper[7476]: I0320 08:44:51.016529 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:44:51.016658 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:44:51.016658 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:44:51.016658 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:44:51.016924 master-0 kubenswrapper[7476]: I0320 08:44:51.016692 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:44:51.408119 master-0 kubenswrapper[7476]: I0320 08:44:51.408029 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-dq29v_9d653bfa-7168-49fa-a838-aedb33c7e60f/approver/1.log"
Mar 20 08:44:51.409176 master-0 kubenswrapper[7476]: I0320 08:44:51.409035 7476 log.go:25] "Finished parsing log file"
path="/var/log/pods/openshift-network-node-identity_network-node-identity-dq29v_9d653bfa-7168-49fa-a838-aedb33c7e60f/approver/0.log"
Mar 20 08:44:51.409996 master-0 kubenswrapper[7476]: I0320 08:44:51.409925 7476 generic.go:334] "Generic (PLEG): container finished" podID="9d653bfa-7168-49fa-a838-aedb33c7e60f" containerID="4306eaa225527d3607228fe5a76b2f9df384e1155f171d8c00c7646ffafef9a4" exitCode=1
Mar 20 08:44:51.410088 master-0 kubenswrapper[7476]: I0320 08:44:51.409992 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dq29v" event={"ID":"9d653bfa-7168-49fa-a838-aedb33c7e60f","Type":"ContainerDied","Data":"4306eaa225527d3607228fe5a76b2f9df384e1155f171d8c00c7646ffafef9a4"}
Mar 20 08:44:51.410158 master-0 kubenswrapper[7476]: I0320 08:44:51.410085 7476 scope.go:117] "RemoveContainer" containerID="c5f00c0d77211fa7340df0b5c9e4c67e0a0eeb68e81ac9de5effbf2d875c406e"
Mar 20 08:44:51.411619 master-0 kubenswrapper[7476]: I0320 08:44:51.411571 7476 scope.go:117] "RemoveContainer" containerID="4306eaa225527d3607228fe5a76b2f9df384e1155f171d8c00c7646ffafef9a4"
Mar 20 08:44:51.415470 master-0 kubenswrapper[7476]: E0320 08:44:51.414489 7476 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"approver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=approver pod=network-node-identity-dq29v_openshift-network-node-identity(9d653bfa-7168-49fa-a838-aedb33c7e60f)\"" pod="openshift-network-node-identity/network-node-identity-dq29v" podUID="9d653bfa-7168-49fa-a838-aedb33c7e60f"
Mar 20 08:44:52.014115 master-0 kubenswrapper[7476]: I0320 08:44:52.013921 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp"
Mar 20 08:44:52.018310 master-0 kubenswrapper[7476]: I0320 08:44:52.018187 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress:
Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:44:52.018310 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:44:52.018310 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:44:52.018310 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:44:52.018661 master-0 kubenswrapper[7476]: I0320 08:44:52.018316 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:44:52.420468 master-0 kubenswrapper[7476]: I0320 08:44:52.420418 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-dq29v_9d653bfa-7168-49fa-a838-aedb33c7e60f/approver/1.log"
Mar 20 08:44:53.016911 master-0 kubenswrapper[7476]: I0320 08:44:53.016772 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:44:53.016911 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:44:53.016911 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:44:53.016911 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:44:53.017483 master-0 kubenswrapper[7476]: I0320 08:44:53.016910 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:44:54.018961 master-0 kubenswrapper[7476]: I0320 08:44:54.018832 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp
container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:44:54.018961 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:44:54.018961 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:44:54.018961 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:44:54.019974 master-0 kubenswrapper[7476]: I0320 08:44:54.018962 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:44:55.017028 master-0 kubenswrapper[7476]: I0320 08:44:55.016934 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:44:55.017028 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:44:55.017028 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:44:55.017028 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:44:55.017554 master-0 kubenswrapper[7476]: I0320 08:44:55.017034 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:44:56.018634 master-0 kubenswrapper[7476]: I0320 08:44:56.018527 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:44:56.018634 master-0
kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:44:56.018634 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:44:56.018634 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:44:56.019378 master-0 kubenswrapper[7476]: I0320 08:44:56.018641 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:44:56.338026 master-0 kubenswrapper[7476]: E0320 08:44:56.337791 7476 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event=<
Mar 20 08:44:56.338026 master-0 kubenswrapper[7476]: &Event{ObjectMeta:{router-default-7dcf5569b5-kvmtp.189e7fec0bc5f5a1 openshift-ingress 10649 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-ingress,Name:router-default-7dcf5569b5-kvmtp,UID:e89571b2-098c-495b-9b53-c4ebd95296ab,APIVersion:v1,ResourceVersion:10160,FieldPath:spec.containers{router},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 500
Mar 20 08:44:56.338026 master-0 kubenswrapper[7476]: body: [-]backend-http failed: reason withheld
Mar 20 08:44:56.338026 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:44:56.338026 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:44:56.338026 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:44:56.338026 master-0 kubenswrapper[7476]:
Mar 20 08:44:56.338026 master-0 kubenswrapper[7476]: ,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 08:39:15 +0000 UTC,LastTimestamp:2026-03-20 08:43:49.017766169 +0000 UTC m=+509.986534735,Count:229,Type:Warning,EventTime:0001-01-01 00:00:00 +0000
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}
Mar 20 08:44:56.338026 master-0 kubenswrapper[7476]: >
Mar 20 08:44:57.017813 master-0 kubenswrapper[7476]: I0320 08:44:57.017720 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:44:57.017813 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:44:57.017813 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:44:57.017813 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:44:57.018440 master-0 kubenswrapper[7476]: I0320 08:44:57.017830 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:44:58.017201 master-0 kubenswrapper[7476]: I0320 08:44:58.017135 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:44:58.017201 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:44:58.017201 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:44:58.017201 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:44:58.017201 master-0 kubenswrapper[7476]: I0320 08:44:58.017190 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:44:58.769650 master-0 kubenswrapper[7476]: E0320
08:44:58.769129 7476 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms"
Mar 20 08:44:59.017551 master-0 kubenswrapper[7476]: I0320 08:44:59.017466 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:44:59.017551 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:44:59.017551 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:44:59.017551 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:44:59.018606 master-0 kubenswrapper[7476]: I0320 08:44:59.017568 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:45:00.018151 master-0 kubenswrapper[7476]: I0320 08:45:00.018038 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:45:00.018151 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:45:00.018151 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:45:00.018151 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:45:00.018764 master-0 kubenswrapper[7476]: I0320 08:45:00.018736 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp"
podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:45:01.031189 master-0 kubenswrapper[7476]: I0320 08:45:01.031105 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:45:01.031189 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:45:01.031189 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:45:01.031189 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:45:01.031189 master-0 kubenswrapper[7476]: I0320 08:45:01.031180 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:45:02.016743 master-0 kubenswrapper[7476]: I0320 08:45:02.016675 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:45:02.016743 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:45:02.016743 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:45:02.016743 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:45:02.017071 master-0 kubenswrapper[7476]: I0320 08:45:02.016752 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:45:02.353287 master-0 kubenswrapper[7476]: E0320 08:45:02.353142 7476
kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T08:44:52Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T08:44:52Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T08:44:52Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T08:44:52Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 20 08:45:03.017747 master-0 kubenswrapper[7476]: I0320 08:45:03.017539 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:45:03.017747 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:45:03.017747 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:45:03.017747 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:45:03.018067 master-0 kubenswrapper[7476]: I0320 08:45:03.017800 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp"
podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:45:04.017603 master-0 kubenswrapper[7476]: I0320 08:45:04.017549 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:45:04.017603 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:45:04.017603 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:45:04.017603 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:45:04.018549 master-0 kubenswrapper[7476]: I0320 08:45:04.018431 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:45:04.259572 master-0 kubenswrapper[7476]: E0320 08:45:04.259528 7476 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0"
Mar 20 08:45:04.260156 master-0 kubenswrapper[7476]: I0320 08:45:04.260140 7476 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-etcd/etcd-master-0"
Mar 20 08:45:04.282476 master-0 kubenswrapper[7476]: W0320 08:45:04.282431 7476 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod094204df314fe45bd5af12ca1b4622bb.slice/crio-3e9fa4fb66ba86c033a4b55b0ef6ca5cbcdcfa8e9fc2ffaaf2fd90f6913d2947 WatchSource:0}: Error finding container 3e9fa4fb66ba86c033a4b55b0ef6ca5cbcdcfa8e9fc2ffaaf2fd90f6913d2947: Status 404 returned error can't find the container with id 3e9fa4fb66ba86c033a4b55b0ef6ca5cbcdcfa8e9fc2ffaaf2fd90f6913d2947
Mar 20 08:45:04.525485 master-0 kubenswrapper[7476]: I0320 08:45:04.525345 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"3e9fa4fb66ba86c033a4b55b0ef6ca5cbcdcfa8e9fc2ffaaf2fd90f6913d2947"}
Mar 20 08:45:05.018496 master-0 kubenswrapper[7476]: I0320 08:45:05.018404 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:45:05.018496 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:45:05.018496 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:45:05.018496 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:45:05.019565 master-0 kubenswrapper[7476]: I0320 08:45:05.018511 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:45:05.535295 master-0 kubenswrapper[7476]: I0320 08:45:05.535217 7476 generic.go:334] "Generic (PLEG): container finished" podID="094204df314fe45bd5af12ca1b4622bb"
containerID="89681a264aa64084b2aa38ba642cb89ce6a4bb719fa716689bf3853f8249b887" exitCode=0
Mar 20 08:45:05.535295 master-0 kubenswrapper[7476]: I0320 08:45:05.535296 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerDied","Data":"89681a264aa64084b2aa38ba642cb89ce6a4bb719fa716689bf3853f8249b887"}
Mar 20 08:45:05.535883 master-0 kubenswrapper[7476]: I0320 08:45:05.535816 7476 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="c17bc281-675f-4a69-ba8a-a2476e95c8c8"
Mar 20 08:45:05.535883 master-0 kubenswrapper[7476]: I0320 08:45:05.535875 7476 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="c17bc281-675f-4a69-ba8a-a2476e95c8c8"
Mar 20 08:45:06.017361 master-0 kubenswrapper[7476]: I0320 08:45:06.017280 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:45:06.017361 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:45:06.017361 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:45:06.017361 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:45:06.017819 master-0 kubenswrapper[7476]: I0320 08:45:06.017368 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:45:06.236829 master-0 kubenswrapper[7476]: I0320 08:45:06.236788 7476 scope.go:117] "RemoveContainer" containerID="4306eaa225527d3607228fe5a76b2f9df384e1155f171d8c00c7646ffafef9a4"
Mar 20 08:45:06.545963 master-0 kubenswrapper[7476]: I0320 08:45:06.545816 7476 log.go:25] "Finished parsing
log file" path="/var/log/pods/openshift-kube-apiserver_installer-2-master-0_5cdd5ac8-4c2e-4680-b697-0e5d94136fe4/installer/0.log"
Mar 20 08:45:06.545963 master-0 kubenswrapper[7476]: I0320 08:45:06.545894 7476 generic.go:334] "Generic (PLEG): container finished" podID="5cdd5ac8-4c2e-4680-b697-0e5d94136fe4" containerID="6431ba0942f1d93ec67e79edabc01c308dcb065395ccf7185622d3bd7f0075b2" exitCode=1
Mar 20 08:45:06.546185 master-0 kubenswrapper[7476]: I0320 08:45:06.546000 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"5cdd5ac8-4c2e-4680-b697-0e5d94136fe4","Type":"ContainerDied","Data":"6431ba0942f1d93ec67e79edabc01c308dcb065395ccf7185622d3bd7f0075b2"}
Mar 20 08:45:06.548374 master-0 kubenswrapper[7476]: I0320 08:45:06.548319 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-dq29v_9d653bfa-7168-49fa-a838-aedb33c7e60f/approver/1.log"
Mar 20 08:45:06.548846 master-0 kubenswrapper[7476]: I0320 08:45:06.548799 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dq29v" event={"ID":"9d653bfa-7168-49fa-a838-aedb33c7e60f","Type":"ContainerStarted","Data":"db34596c0384185b8be14345b1286cd07c682e48ceb08781c98125811bf47060"}
Mar 20 08:45:07.016740 master-0 kubenswrapper[7476]: I0320 08:45:07.016644 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:45:07.016740 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:45:07.016740 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:45:07.016740 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:45:07.017020 master-0 kubenswrapper[7476]: I0320 08:45:07.016799 7476 prober.go:107] "Probe
failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:45:07.851677 master-0 kubenswrapper[7476]: I0320 08:45:07.851593 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-2-master-0_5cdd5ac8-4c2e-4680-b697-0e5d94136fe4/installer/0.log"
Mar 20 08:45:07.852531 master-0 kubenswrapper[7476]: I0320 08:45:07.851703 7476 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0"
Mar 20 08:45:07.979810 master-0 kubenswrapper[7476]: I0320 08:45:07.979691 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5cdd5ac8-4c2e-4680-b697-0e5d94136fe4-kubelet-dir\") pod \"5cdd5ac8-4c2e-4680-b697-0e5d94136fe4\" (UID: \"5cdd5ac8-4c2e-4680-b697-0e5d94136fe4\") "
Mar 20 08:45:07.980057 master-0 kubenswrapper[7476]: I0320 08:45:07.979811 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cdd5ac8-4c2e-4680-b697-0e5d94136fe4-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5cdd5ac8-4c2e-4680-b697-0e5d94136fe4" (UID: "5cdd5ac8-4c2e-4680-b697-0e5d94136fe4"). InnerVolumeSpecName "kubelet-dir".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:45:07.980057 master-0 kubenswrapper[7476]: I0320 08:45:07.979984 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5cdd5ac8-4c2e-4680-b697-0e5d94136fe4-var-lock\") pod \"5cdd5ac8-4c2e-4680-b697-0e5d94136fe4\" (UID: \"5cdd5ac8-4c2e-4680-b697-0e5d94136fe4\") " Mar 20 08:45:07.980218 master-0 kubenswrapper[7476]: I0320 08:45:07.980054 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5cdd5ac8-4c2e-4680-b697-0e5d94136fe4-kube-api-access\") pod \"5cdd5ac8-4c2e-4680-b697-0e5d94136fe4\" (UID: \"5cdd5ac8-4c2e-4680-b697-0e5d94136fe4\") " Mar 20 08:45:07.980218 master-0 kubenswrapper[7476]: I0320 08:45:07.980130 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5cdd5ac8-4c2e-4680-b697-0e5d94136fe4-var-lock" (OuterVolumeSpecName: "var-lock") pod "5cdd5ac8-4c2e-4680-b697-0e5d94136fe4" (UID: "5cdd5ac8-4c2e-4680-b697-0e5d94136fe4"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:45:07.980671 master-0 kubenswrapper[7476]: I0320 08:45:07.980617 7476 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5cdd5ac8-4c2e-4680-b697-0e5d94136fe4-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 20 08:45:07.980753 master-0 kubenswrapper[7476]: I0320 08:45:07.980674 7476 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5cdd5ac8-4c2e-4680-b697-0e5d94136fe4-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 20 08:45:07.984789 master-0 kubenswrapper[7476]: I0320 08:45:07.984717 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5cdd5ac8-4c2e-4680-b697-0e5d94136fe4-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5cdd5ac8-4c2e-4680-b697-0e5d94136fe4" (UID: "5cdd5ac8-4c2e-4680-b697-0e5d94136fe4"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:45:08.017431 master-0 kubenswrapper[7476]: I0320 08:45:08.017380 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:45:08.017431 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:45:08.017431 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:45:08.017431 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:45:08.017717 master-0 kubenswrapper[7476]: I0320 08:45:08.017449 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:45:08.082410 master-0 kubenswrapper[7476]: I0320 08:45:08.082249 7476 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5cdd5ac8-4c2e-4680-b697-0e5d94136fe4-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 20 08:45:08.565529 master-0 kubenswrapper[7476]: I0320 08:45:08.565449 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-2-master-0_5cdd5ac8-4c2e-4680-b697-0e5d94136fe4/installer/0.log" Mar 20 08:45:08.565801 master-0 kubenswrapper[7476]: I0320 08:45:08.565554 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"5cdd5ac8-4c2e-4680-b697-0e5d94136fe4","Type":"ContainerDied","Data":"4b7c354bc63790dd2c841c517050b35e106b034733574a9ee401496eb49f2861"} Mar 20 08:45:08.565801 master-0 kubenswrapper[7476]: I0320 08:45:08.565602 7476 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="4b7c354bc63790dd2c841c517050b35e106b034733574a9ee401496eb49f2861" Mar 20 08:45:08.565801 master-0 kubenswrapper[7476]: I0320 08:45:08.565645 7476 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Mar 20 08:45:09.018103 master-0 kubenswrapper[7476]: I0320 08:45:09.018010 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:45:09.018103 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:45:09.018103 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:45:09.018103 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:45:09.018891 master-0 kubenswrapper[7476]: I0320 08:45:09.018104 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:45:09.170378 master-0 kubenswrapper[7476]: E0320 08:45:09.170229 7476 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": context deadline exceeded" interval="800ms" Mar 20 08:45:10.016991 master-0 kubenswrapper[7476]: I0320 08:45:10.016868 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:45:10.016991 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:45:10.016991 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 
08:45:10.016991 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:45:10.017328 master-0 kubenswrapper[7476]: I0320 08:45:10.017298 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:45:11.018473 master-0 kubenswrapper[7476]: I0320 08:45:11.018397 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:45:11.018473 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:45:11.018473 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:45:11.018473 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:45:11.019187 master-0 kubenswrapper[7476]: I0320 08:45:11.018486 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:45:12.017670 master-0 kubenswrapper[7476]: I0320 08:45:12.017564 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:45:12.017670 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:45:12.017670 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:45:12.017670 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:45:12.018130 master-0 kubenswrapper[7476]: I0320 08:45:12.017675 7476 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:45:12.354069 master-0 kubenswrapper[7476]: E0320 08:45:12.353976 7476 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 20 08:45:13.017749 master-0 kubenswrapper[7476]: I0320 08:45:13.017672 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:45:13.017749 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:45:13.017749 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:45:13.017749 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:45:13.017749 master-0 kubenswrapper[7476]: I0320 08:45:13.017747 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:45:14.017729 master-0 kubenswrapper[7476]: I0320 08:45:14.017671 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:45:14.017729 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:45:14.017729 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:45:14.017729 master-0 kubenswrapper[7476]: healthz check failed Mar 
20 08:45:14.018559 master-0 kubenswrapper[7476]: I0320 08:45:14.018522 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:45:15.016175 master-0 kubenswrapper[7476]: I0320 08:45:15.016073 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:45:15.016175 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:45:15.016175 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:45:15.016175 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:45:15.016581 master-0 kubenswrapper[7476]: I0320 08:45:15.016552 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:45:16.017232 master-0 kubenswrapper[7476]: I0320 08:45:16.017118 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:45:16.017232 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:45:16.017232 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:45:16.017232 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:45:16.017232 master-0 kubenswrapper[7476]: I0320 08:45:16.017223 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" 
podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:45:17.017377 master-0 kubenswrapper[7476]: I0320 08:45:17.017283 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:45:17.017377 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:45:17.017377 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:45:17.017377 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:45:17.017926 master-0 kubenswrapper[7476]: I0320 08:45:17.017379 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:45:18.016896 master-0 kubenswrapper[7476]: I0320 08:45:18.016829 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:45:18.016896 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:45:18.016896 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:45:18.016896 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:45:18.017595 master-0 kubenswrapper[7476]: I0320 08:45:18.017467 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:45:19.016787 master-0 kubenswrapper[7476]: I0320 08:45:19.016720 7476 
patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:45:19.016787 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:45:19.016787 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:45:19.016787 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:45:19.016787 master-0 kubenswrapper[7476]: I0320 08:45:19.016783 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:45:19.972150 master-0 kubenswrapper[7476]: E0320 08:45:19.971883 7476 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="1.6s" Mar 20 08:45:20.017529 master-0 kubenswrapper[7476]: I0320 08:45:20.017408 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:45:20.017529 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:45:20.017529 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:45:20.017529 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:45:20.017529 master-0 kubenswrapper[7476]: I0320 08:45:20.017533 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:45:21.017944 master-0 kubenswrapper[7476]: I0320 08:45:21.017861 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:45:21.017944 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:45:21.017944 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:45:21.017944 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:45:21.017944 master-0 kubenswrapper[7476]: I0320 08:45:21.017934 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:45:22.017137 master-0 kubenswrapper[7476]: I0320 08:45:22.017087 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:45:22.017137 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:45:22.017137 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:45:22.017137 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:45:22.017556 master-0 kubenswrapper[7476]: I0320 08:45:22.017520 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:45:22.354709 master-0 kubenswrapper[7476]: E0320 08:45:22.354310 7476 kubelet_node_status.go:585] "Error updating 
node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 20 08:45:23.017340 master-0 kubenswrapper[7476]: I0320 08:45:23.017136 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:45:23.017340 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:45:23.017340 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:45:23.017340 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:45:23.017340 master-0 kubenswrapper[7476]: I0320 08:45:23.017195 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:45:24.017011 master-0 kubenswrapper[7476]: I0320 08:45:24.016925 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:45:24.017011 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:45:24.017011 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:45:24.017011 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:45:24.018103 master-0 kubenswrapper[7476]: I0320 08:45:24.017025 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 
500" Mar 20 08:45:25.017579 master-0 kubenswrapper[7476]: I0320 08:45:25.017481 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:45:25.017579 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:45:25.017579 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:45:25.017579 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:45:25.018775 master-0 kubenswrapper[7476]: I0320 08:45:25.017610 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:45:26.017886 master-0 kubenswrapper[7476]: I0320 08:45:26.017803 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:45:26.017886 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:45:26.017886 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:45:26.017886 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:45:26.017886 master-0 kubenswrapper[7476]: I0320 08:45:26.017867 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:45:27.017387 master-0 kubenswrapper[7476]: I0320 08:45:27.017298 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe 
status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:45:27.017387 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:45:27.017387 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:45:27.017387 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:45:27.017864 master-0 kubenswrapper[7476]: I0320 08:45:27.017402 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:45:27.366041 master-0 kubenswrapper[7476]: E0320 08:45:27.365954 7476 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod23003a2f_2053_47cc_8133_23eb886d4da0.slice/crio-conmon-a76cae891ac1ac170b3b4bc00acda8e3f7397c5dae09b35ed265abb8477e72cb.scope\": RecentStats: unable to find data in memory cache]" Mar 20 08:45:27.714099 master-0 kubenswrapper[7476]: I0320 08:45:27.714024 7476 generic.go:334] "Generic (PLEG): container finished" podID="23003a2f-2053-47cc-8133-23eb886d4da0" containerID="a76cae891ac1ac170b3b4bc00acda8e3f7397c5dae09b35ed265abb8477e72cb" exitCode=0 Mar 20 08:45:27.714483 master-0 kubenswrapper[7476]: I0320 08:45:27.714147 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-89ccd998f-j84r8" event={"ID":"23003a2f-2053-47cc-8133-23eb886d4da0","Type":"ContainerDied","Data":"a76cae891ac1ac170b3b4bc00acda8e3f7397c5dae09b35ed265abb8477e72cb"} Mar 20 08:45:27.714676 master-0 kubenswrapper[7476]: I0320 08:45:27.714652 7476 scope.go:117] "RemoveContainer" containerID="cc3c2a9c1f06758b9cf8e7a0bffe7eec7cabce777c5e4901ed4f712103ea4ff6" Mar 20 08:45:27.715449 master-0 kubenswrapper[7476]: I0320 08:45:27.715399 7476 
scope.go:117] "RemoveContainer" containerID="a76cae891ac1ac170b3b4bc00acda8e3f7397c5dae09b35ed265abb8477e72cb" Mar 20 08:45:27.717332 master-0 kubenswrapper[7476]: E0320 08:45:27.716481 7476 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-89ccd998f-j84r8_openshift-marketplace(23003a2f-2053-47cc-8133-23eb886d4da0)\"" pod="openshift-marketplace/marketplace-operator-89ccd998f-j84r8" podUID="23003a2f-2053-47cc-8133-23eb886d4da0" Mar 20 08:45:28.017455 master-0 kubenswrapper[7476]: I0320 08:45:28.017244 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:45:28.017455 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:45:28.017455 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:45:28.017455 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:45:28.017455 master-0 kubenswrapper[7476]: I0320 08:45:28.017365 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:45:29.017600 master-0 kubenswrapper[7476]: I0320 08:45:29.017477 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:45:29.017600 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:45:29.017600 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 
08:45:29.017600 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:45:29.017600 master-0 kubenswrapper[7476]: I0320 08:45:29.017586 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:45:30.018254 master-0 kubenswrapper[7476]: I0320 08:45:30.018039 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:45:30.018254 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:45:30.018254 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:45:30.018254 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:45:30.018254 master-0 kubenswrapper[7476]: I0320 08:45:30.018130 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:45:30.343096 master-0 kubenswrapper[7476]: E0320 08:45:30.342860 7476 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{kube-controller-manager-master-0.189e7fe4c727664a openshift-kube-controller-manager 9564 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-master-0,UID:8c753d068f364b16e3aeb8396b7d9f33,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 08:38:43 +0000 UTC,LastTimestamp:2026-03-20 08:44:01.943578671 +0000 UTC m=+522.912347227,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 20 08:45:31.017301 master-0 kubenswrapper[7476]: I0320 08:45:31.017206 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:45:31.017301 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:45:31.017301 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:45:31.017301 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:45:31.018057 master-0 kubenswrapper[7476]: I0320 08:45:31.017317 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:45:31.573786 master-0 kubenswrapper[7476]: E0320 08:45:31.573613 7476 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Mar 20 08:45:32.017345 master-0 kubenswrapper[7476]: I0320 08:45:32.017213 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 
500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:45:32.017345 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:45:32.017345 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:45:32.017345 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:45:32.017885 master-0 kubenswrapper[7476]: I0320 08:45:32.017348 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:45:32.276676 master-0 kubenswrapper[7476]: I0320 08:45:32.276508 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-89ccd998f-j84r8"
Mar 20 08:45:32.277448 master-0 kubenswrapper[7476]: I0320 08:45:32.277256 7476 scope.go:117] "RemoveContainer" containerID="a76cae891ac1ac170b3b4bc00acda8e3f7397c5dae09b35ed265abb8477e72cb"
Mar 20 08:45:32.277750 master-0 kubenswrapper[7476]: E0320 08:45:32.277689 7476 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-89ccd998f-j84r8_openshift-marketplace(23003a2f-2053-47cc-8133-23eb886d4da0)\"" pod="openshift-marketplace/marketplace-operator-89ccd998f-j84r8" podUID="23003a2f-2053-47cc-8133-23eb886d4da0"
Mar 20 08:45:32.281365 master-0 kubenswrapper[7476]: I0320 08:45:32.281003 7476 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-89ccd998f-j84r8"
Mar 20 08:45:32.355491 master-0 kubenswrapper[7476]: E0320 08:45:32.355403 7476 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 20 08:45:32.761369 master-0 kubenswrapper[7476]: I0320 08:45:32.760690 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-dknxr_22f85e98-eb36-46b2-ab5d-7c21e060cba5/ingress-operator/3.log"
Mar 20 08:45:32.765965 master-0 kubenswrapper[7476]: I0320 08:45:32.761761 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-dknxr_22f85e98-eb36-46b2-ab5d-7c21e060cba5/ingress-operator/2.log"
Mar 20 08:45:32.765965 master-0 kubenswrapper[7476]: I0320 08:45:32.762526 7476 generic.go:334] "Generic (PLEG): container finished" podID="22f85e98-eb36-46b2-ab5d-7c21e060cba5" containerID="4557628b4e1a86ee2671291620562da3ce234a1e5a65125b7811c20080db0e77" exitCode=1
Mar 20 08:45:32.765965 master-0 kubenswrapper[7476]: I0320 08:45:32.763762 7476 scope.go:117] "RemoveContainer" containerID="a76cae891ac1ac170b3b4bc00acda8e3f7397c5dae09b35ed265abb8477e72cb"
Mar 20 08:45:32.765965 master-0 kubenswrapper[7476]: E0320 08:45:32.764257 7476 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-89ccd998f-j84r8_openshift-marketplace(23003a2f-2053-47cc-8133-23eb886d4da0)\"" pod="openshift-marketplace/marketplace-operator-89ccd998f-j84r8" podUID="23003a2f-2053-47cc-8133-23eb886d4da0"
Mar 20 08:45:32.765965 master-0 kubenswrapper[7476]: I0320 08:45:32.764345 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-dknxr" event={"ID":"22f85e98-eb36-46b2-ab5d-7c21e060cba5","Type":"ContainerDied","Data":"4557628b4e1a86ee2671291620562da3ce234a1e5a65125b7811c20080db0e77"}
Mar 20 08:45:32.765965 master-0 kubenswrapper[7476]: I0320 08:45:32.764422 7476 scope.go:117] "RemoveContainer" containerID="7ebddc1f5af1710df13bf7d77f32dd790f89b2180d3fbd95cea82683956f88f2"
Mar 20 08:45:32.765965 master-0 kubenswrapper[7476]: I0320 08:45:32.765849 7476 scope.go:117] "RemoveContainer" containerID="4557628b4e1a86ee2671291620562da3ce234a1e5a65125b7811c20080db0e77"
Mar 20 08:45:32.767817 master-0 kubenswrapper[7476]: E0320 08:45:32.767751 7476 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-dknxr_openshift-ingress-operator(22f85e98-eb36-46b2-ab5d-7c21e060cba5)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-dknxr" podUID="22f85e98-eb36-46b2-ab5d-7c21e060cba5"
Mar 20 08:45:33.016726 master-0 kubenswrapper[7476]: I0320 08:45:33.016527 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:45:33.016726 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:45:33.016726 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:45:33.016726 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:45:33.016726 master-0 kubenswrapper[7476]: I0320 08:45:33.016634 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:45:33.772108 master-0 kubenswrapper[7476]: I0320 08:45:33.772020 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-dknxr_22f85e98-eb36-46b2-ab5d-7c21e060cba5/ingress-operator/3.log"
Mar 20 08:45:34.017338 master-0 kubenswrapper[7476]: I0320 08:45:34.017196 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:45:34.017338 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:45:34.017338 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:45:34.017338 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:45:34.017785 master-0 kubenswrapper[7476]: I0320 08:45:34.017346 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:45:35.018622 master-0 kubenswrapper[7476]: I0320 08:45:35.018504 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:45:35.018622 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:45:35.018622 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:45:35.018622 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:45:35.018622 master-0 kubenswrapper[7476]: I0320 08:45:35.018588 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:45:36.017097 master-0 kubenswrapper[7476]: I0320 08:45:36.016989 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:45:36.017097 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:45:36.017097 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:45:36.017097 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:45:36.017597 master-0 kubenswrapper[7476]: I0320 08:45:36.017116 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:45:37.017053 master-0 kubenswrapper[7476]: I0320 08:45:37.016964 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:45:37.017053 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:45:37.017053 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:45:37.017053 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:45:37.017967 master-0 kubenswrapper[7476]: I0320 08:45:37.017054 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:45:38.018495 master-0 kubenswrapper[7476]: I0320 08:45:38.018372 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:45:38.018495 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:45:38.018495 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:45:38.018495 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:45:38.019645 master-0 kubenswrapper[7476]: I0320 08:45:38.018484 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:45:39.017246 master-0 kubenswrapper[7476]: I0320 08:45:39.017128 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:45:39.017246 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:45:39.017246 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:45:39.017246 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:45:39.017736 master-0 kubenswrapper[7476]: I0320 08:45:39.017256 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:45:39.540041 master-0 kubenswrapper[7476]: E0320 08:45:39.539924 7476 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0"
Mar 20 08:45:40.016566 master-0 kubenswrapper[7476]: I0320 08:45:40.016484 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:45:40.016566 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:45:40.016566 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:45:40.016566 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:45:40.016566 master-0 kubenswrapper[7476]: I0320 08:45:40.016562 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:45:40.828477 master-0 kubenswrapper[7476]: I0320 08:45:40.828381 7476 generic.go:334] "Generic (PLEG): container finished" podID="094204df314fe45bd5af12ca1b4622bb" containerID="d88e93757522ae39b5517291f3c06f1dd6bd6427800d2bd825b8a5c55305f18d" exitCode=0
Mar 20 08:45:40.828477 master-0 kubenswrapper[7476]: I0320 08:45:40.828454 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerDied","Data":"d88e93757522ae39b5517291f3c06f1dd6bd6427800d2bd825b8a5c55305f18d"}
Mar 20 08:45:40.829899 master-0 kubenswrapper[7476]: I0320 08:45:40.829004 7476 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="c17bc281-675f-4a69-ba8a-a2476e95c8c8"
Mar 20 08:45:40.829899 master-0 kubenswrapper[7476]: I0320 08:45:40.829057 7476 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="c17bc281-675f-4a69-ba8a-a2476e95c8c8"
Mar 20 08:45:41.016966 master-0 kubenswrapper[7476]: I0320 08:45:41.016840 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:45:41.016966 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:45:41.016966 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:45:41.016966 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:45:41.017566 master-0 kubenswrapper[7476]: I0320 08:45:41.016993 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:45:42.016484 master-0 kubenswrapper[7476]: I0320 08:45:42.016432 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:45:42.016484 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:45:42.016484 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:45:42.016484 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:45:42.017585 master-0 kubenswrapper[7476]: I0320 08:45:42.017505 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:45:42.356291 master-0 kubenswrapper[7476]: E0320 08:45:42.355940 7476 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 20 08:45:42.356291 master-0 kubenswrapper[7476]: E0320 08:45:42.356301 7476 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Mar 20 08:45:43.016482 master-0 kubenswrapper[7476]: I0320 08:45:43.016434 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:45:43.016482 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:45:43.016482 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:45:43.016482 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:45:43.017003 master-0 kubenswrapper[7476]: I0320 08:45:43.016497 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:45:44.020403 master-0 kubenswrapper[7476]: I0320 08:45:44.017614 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:45:44.020403 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:45:44.020403 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:45:44.020403 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:45:44.020403 master-0 kubenswrapper[7476]: I0320 08:45:44.017681 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:45:44.237363 master-0 kubenswrapper[7476]: I0320 08:45:44.237129 7476 scope.go:117] "RemoveContainer" containerID="a76cae891ac1ac170b3b4bc00acda8e3f7397c5dae09b35ed265abb8477e72cb"
Mar 20 08:45:44.774794 master-0 kubenswrapper[7476]: E0320 08:45:44.774672 7476 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="6.4s"
Mar 20 08:45:44.860214 master-0 kubenswrapper[7476]: I0320 08:45:44.860140 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-89ccd998f-j84r8" event={"ID":"23003a2f-2053-47cc-8133-23eb886d4da0","Type":"ContainerStarted","Data":"d5a6da92a647ffc6da1361e5e0378499aaed4a31f29ea5931a4731314e925480"}
Mar 20 08:45:44.860638 master-0 kubenswrapper[7476]: I0320 08:45:44.860620 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-89ccd998f-j84r8"
Mar 20 08:45:44.863483 master-0 kubenswrapper[7476]: I0320 08:45:44.863441 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-6864dc98f7-tf2gj_08d9196b-b68f-421b-8754-bfbaa4020a97/manager/1.log"
Mar 20 08:45:44.865640 master-0 kubenswrapper[7476]: I0320 08:45:44.865528 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-6864dc98f7-tf2gj_08d9196b-b68f-421b-8754-bfbaa4020a97/manager/0.log"
Mar 20 08:45:44.866356 master-0 kubenswrapper[7476]: I0320 08:45:44.866260 7476 generic.go:334] "Generic (PLEG): container finished" podID="08d9196b-b68f-421b-8754-bfbaa4020a97" containerID="ce2fcc1081bfcbeb7f4d07807c1a93a611637f696cdc2c93642a97a10714d449" exitCode=1
Mar 20 08:45:44.866567 master-0 kubenswrapper[7476]: I0320 08:45:44.866395 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-tf2gj" event={"ID":"08d9196b-b68f-421b-8754-bfbaa4020a97","Type":"ContainerDied","Data":"ce2fcc1081bfcbeb7f4d07807c1a93a611637f696cdc2c93642a97a10714d449"}
Mar 20 08:45:44.866567 master-0 kubenswrapper[7476]: I0320 08:45:44.866484 7476 scope.go:117] "RemoveContainer" containerID="2c241c5cc1bda01a54d125786cac6f467e2e7cd45da3764b80c745165babdd10"
Mar 20 08:45:44.867937 master-0 kubenswrapper[7476]: I0320 08:45:44.867843 7476 scope.go:117] "RemoveContainer" containerID="ce2fcc1081bfcbeb7f4d07807c1a93a611637f696cdc2c93642a97a10714d449"
Mar 20 08:45:44.868661 master-0 kubenswrapper[7476]: E0320 08:45:44.868597 7476 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=catalogd-controller-manager-6864dc98f7-tf2gj_openshift-catalogd(08d9196b-b68f-421b-8754-bfbaa4020a97)\"" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-tf2gj" podUID="08d9196b-b68f-421b-8754-bfbaa4020a97"
Mar 20 08:45:44.870517 master-0 kubenswrapper[7476]: I0320 08:45:44.870462 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-89ccd998f-j84r8"
Mar 20 08:45:45.015956 master-0 kubenswrapper[7476]: I0320 08:45:45.015885 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:45:45.015956 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:45:45.015956 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:45:45.015956 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:45:45.016524 master-0 kubenswrapper[7476]: I0320 08:45:45.016483 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:45:45.237027 master-0 kubenswrapper[7476]: I0320 08:45:45.236698 7476 scope.go:117] "RemoveContainer" containerID="4557628b4e1a86ee2671291620562da3ce234a1e5a65125b7811c20080db0e77"
Mar 20 08:45:45.237027 master-0 kubenswrapper[7476]: E0320 08:45:45.236964 7476 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-dknxr_openshift-ingress-operator(22f85e98-eb36-46b2-ab5d-7c21e060cba5)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-dknxr" podUID="22f85e98-eb36-46b2-ab5d-7c21e060cba5"
Mar 20 08:45:45.878128 master-0 kubenswrapper[7476]: I0320 08:45:45.878065 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-6864dc98f7-tf2gj_08d9196b-b68f-421b-8754-bfbaa4020a97/manager/1.log"
Mar 20 08:45:46.016494 master-0 kubenswrapper[7476]: I0320 08:45:46.016363 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:45:46.016494 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:45:46.016494 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:45:46.016494 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:45:46.016494 master-0 kubenswrapper[7476]: I0320 08:45:46.016427 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:45:47.016575 master-0 kubenswrapper[7476]: I0320 08:45:47.016504 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:45:47.016575 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:45:47.016575 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:45:47.016575 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:45:47.017547 master-0 kubenswrapper[7476]: I0320 08:45:47.016590 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:45:48.017169 master-0 kubenswrapper[7476]: I0320 08:45:48.017045 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:45:48.017169 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:45:48.017169 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:45:48.017169 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:45:48.017169 master-0 kubenswrapper[7476]: I0320 08:45:48.017120 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:45:49.017597 master-0 kubenswrapper[7476]: I0320 08:45:49.017544 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:45:49.017597 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:45:49.017597 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:45:49.017597 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:45:49.018782 master-0 kubenswrapper[7476]: I0320 08:45:49.018734 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:45:49.246854 master-0 kubenswrapper[7476]: I0320 08:45:49.246750 7476 status_manager.go:851] "Failed to get status for pod" podUID="76ccbbad-62cd-4fdd-8a22-3299f9ef3b42" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-1-retry-1-master-0)"
Mar 20 08:45:50.017594 master-0 kubenswrapper[7476]: I0320 08:45:50.017453 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:45:50.017594 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:45:50.017594 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:45:50.017594 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:45:50.017594 master-0 kubenswrapper[7476]: I0320 08:45:50.017550 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:45:51.016938 master-0 kubenswrapper[7476]: I0320 08:45:51.016852 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:45:51.016938 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:45:51.016938 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:45:51.016938 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:45:51.016938 master-0 kubenswrapper[7476]: I0320 08:45:51.016920 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:45:52.016778 master-0 kubenswrapper[7476]: I0320 08:45:52.016684 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:45:52.016778 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:45:52.016778 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:45:52.016778 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:45:52.017901 master-0 kubenswrapper[7476]: I0320 08:45:52.016780 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:45:52.991248 master-0 kubenswrapper[7476]: I0320 08:45:52.991131 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-tf2gj"
Mar 20 08:45:52.992352 master-0 kubenswrapper[7476]: I0320 08:45:52.992258 7476 scope.go:117] "RemoveContainer" containerID="ce2fcc1081bfcbeb7f4d07807c1a93a611637f696cdc2c93642a97a10714d449"
Mar 20 08:45:52.992755 master-0 kubenswrapper[7476]: E0320 08:45:52.992696 7476 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=catalogd-controller-manager-6864dc98f7-tf2gj_openshift-catalogd(08d9196b-b68f-421b-8754-bfbaa4020a97)\"" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-tf2gj" podUID="08d9196b-b68f-421b-8754-bfbaa4020a97"
Mar 20 08:45:53.016596 master-0 kubenswrapper[7476]: I0320 08:45:53.016501 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:45:53.016596 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:45:53.016596 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:45:53.016596 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:45:53.016596 master-0 kubenswrapper[7476]: I0320 08:45:53.016586 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:45:53.945834 master-0 kubenswrapper[7476]: I0320 08:45:53.945758 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-vk98n_6163bd4b-dc83-4e83-8590-5ac4753bda1c/config-sync-controllers/0.log"
Mar 20 08:45:53.946386 master-0 kubenswrapper[7476]: I0320 08:45:53.946331 7476 generic.go:334] "Generic (PLEG): container finished" podID="6163bd4b-dc83-4e83-8590-5ac4753bda1c" containerID="8318704eaa08899e772deabe42128ea1b882f7234facbd87ca64f6d3f0952a1a" exitCode=1
Mar 20 08:45:53.946504 master-0 kubenswrapper[7476]: I0320 08:45:53.946396 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vk98n" event={"ID":"6163bd4b-dc83-4e83-8590-5ac4753bda1c","Type":"ContainerDied","Data":"8318704eaa08899e772deabe42128ea1b882f7234facbd87ca64f6d3f0952a1a"}
Mar 20 08:45:53.947221 master-0 kubenswrapper[7476]: I0320 08:45:53.947166 7476 scope.go:117] "RemoveContainer" containerID="8318704eaa08899e772deabe42128ea1b882f7234facbd87ca64f6d3f0952a1a"
Mar 20 08:45:54.017532 master-0 kubenswrapper[7476]: I0320 08:45:54.017473 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:45:54.017532 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:45:54.017532 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:45:54.017532 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:45:54.018444 master-0 kubenswrapper[7476]: I0320 08:45:54.017552 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:45:54.959428 master-0 kubenswrapper[7476]: I0320 08:45:54.959248 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-vk98n_6163bd4b-dc83-4e83-8590-5ac4753bda1c/config-sync-controllers/0.log"
Mar 20 08:45:54.960411 master-0 kubenswrapper[7476]: I0320 08:45:54.960342 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vk98n" event={"ID":"6163bd4b-dc83-4e83-8590-5ac4753bda1c","Type":"ContainerStarted","Data":"612932528a87c37595e1e39af79797bdd3b69b1320150874acd6b11a8312b742"}
Mar 20 08:45:55.017167 master-0 kubenswrapper[7476]: I0320 08:45:55.017083 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:45:55.017167 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:45:55.017167 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:45:55.017167 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:45:55.017167 master-0 kubenswrapper[7476]: I0320 08:45:55.017160 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:45:56.017115 master-0 kubenswrapper[7476]: I0320 08:45:56.017060 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:45:56.017115 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:45:56.017115 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:45:56.017115 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:45:56.017115 master-0 kubenswrapper[7476]: I0320 08:45:56.017117 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:45:56.984460 master-0 kubenswrapper[7476]: I0320 08:45:56.984384 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-57777556ff-rnnfz_bb7b640f-22be-41a9-8ab2-e7ae817e2eb0/manager/1.log"
Mar 20 08:45:56.986993 master-0 kubenswrapper[7476]: I0320 08:45:56.986926 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-57777556ff-rnnfz_bb7b640f-22be-41a9-8ab2-e7ae817e2eb0/manager/0.log"
Mar 20 08:45:56.987193 master-0 kubenswrapper[7476]: I0320 08:45:56.987129 7476 generic.go:334] "Generic (PLEG): container finished" podID="bb7b640f-22be-41a9-8ab2-e7ae817e2eb0" containerID="fdbecd46c29424d901b7160c849f7507fe2bca8ead0c17c0f2a34bfa2349bd5b" exitCode=1
Mar 20 08:45:56.987385 master-0 kubenswrapper[7476]: I0320 08:45:56.987200 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-rnnfz" event={"ID":"bb7b640f-22be-41a9-8ab2-e7ae817e2eb0","Type":"ContainerDied","Data":"fdbecd46c29424d901b7160c849f7507fe2bca8ead0c17c0f2a34bfa2349bd5b"}
Mar 20 08:45:56.987385 master-0 kubenswrapper[7476]: I0320 08:45:56.987358 7476 scope.go:117] "RemoveContainer" containerID="853a1945138b3e0ff5252845780fd6a6c7275529314ebd23a219d848ce919728"
Mar 20 08:45:56.988309 master-0 kubenswrapper[7476]: I0320 08:45:56.988220 7476 scope.go:117] "RemoveContainer" containerID="fdbecd46c29424d901b7160c849f7507fe2bca8ead0c17c0f2a34bfa2349bd5b"
Mar 20 08:45:56.988862 master-0 kubenswrapper[7476]: E0320 08:45:56.988775 7476 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=operator-controller-controller-manager-57777556ff-rnnfz_openshift-operator-controller(bb7b640f-22be-41a9-8ab2-e7ae817e2eb0)\"" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-rnnfz" podUID="bb7b640f-22be-41a9-8ab2-e7ae817e2eb0"
Mar 20 08:45:57.018455 master-0 kubenswrapper[7476]: I0320 08:45:57.018375 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:45:57.018455 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:45:57.018455 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:45:57.018455 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:45:57.019250 master-0 kubenswrapper[7476]: I0320 08:45:57.018462 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:45:58.000759 master-0 kubenswrapper[7476]: I0320 08:45:58.000690 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-vk98n_6163bd4b-dc83-4e83-8590-5ac4753bda1c/config-sync-controllers/0.log"
Mar 20 08:45:58.001601 master-0 kubenswrapper[7476]: I0320 08:45:58.001568 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-vk98n_6163bd4b-dc83-4e83-8590-5ac4753bda1c/cluster-cloud-controller-manager/0.log"
Mar 20 08:45:58.001822 master-0 kubenswrapper[7476]: I0320 08:45:58.001781 7476 generic.go:334] "Generic (PLEG): container finished" podID="6163bd4b-dc83-4e83-8590-5ac4753bda1c" containerID="52016baf23be09eb560f695ee764aa3c366d61ff1792a482aac5922ed083323d" exitCode=1
Mar 20 08:45:58.002039 master-0 kubenswrapper[7476]: I0320 08:45:58.001893 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vk98n" event={"ID":"6163bd4b-dc83-4e83-8590-5ac4753bda1c","Type":"ContainerDied","Data":"52016baf23be09eb560f695ee764aa3c366d61ff1792a482aac5922ed083323d"}
Mar 20 08:45:58.002766 master-0 kubenswrapper[7476]: I0320 08:45:58.002738 7476 scope.go:117] "RemoveContainer" containerID="52016baf23be09eb560f695ee764aa3c366d61ff1792a482aac5922ed083323d"
Mar 20 08:45:58.005696 master-0 kubenswrapper[7476]: I0320 08:45:58.005649 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-57777556ff-rnnfz_bb7b640f-22be-41a9-8ab2-e7ae817e2eb0/manager/1.log"
Mar 20 08:45:58.017189 master-0 kubenswrapper[7476]: I0320 08:45:58.017113 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:45:58.017189 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:45:58.017189 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:45:58.017189 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:45:58.017522 master-0 kubenswrapper[7476]: I0320 08:45:58.017196 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:45:58.236200 master-0 kubenswrapper[7476]: I0320 08:45:58.236172 7476 scope.go:117] "RemoveContainer" containerID="4557628b4e1a86ee2671291620562da3ce234a1e5a65125b7811c20080db0e77"
Mar 20 08:45:58.237076 master-0 kubenswrapper[7476]: E0320 08:45:58.237050 7476 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\"
with CrashLoopBackOff: \"back-off 40s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-dknxr_openshift-ingress-operator(22f85e98-eb36-46b2-ab5d-7c21e060cba5)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-dknxr" podUID="22f85e98-eb36-46b2-ab5d-7c21e060cba5" Mar 20 08:45:59.016976 master-0 kubenswrapper[7476]: I0320 08:45:59.016875 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:45:59.016976 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:45:59.016976 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:45:59.016976 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:45:59.017766 master-0 kubenswrapper[7476]: I0320 08:45:59.016969 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:45:59.019391 master-0 kubenswrapper[7476]: I0320 08:45:59.019336 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-vk98n_6163bd4b-dc83-4e83-8590-5ac4753bda1c/config-sync-controllers/0.log" Mar 20 08:45:59.019950 master-0 kubenswrapper[7476]: I0320 08:45:59.019901 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-vk98n_6163bd4b-dc83-4e83-8590-5ac4753bda1c/cluster-cloud-controller-manager/0.log" Mar 20 08:45:59.020073 master-0 kubenswrapper[7476]: I0320 08:45:59.019960 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vk98n" event={"ID":"6163bd4b-dc83-4e83-8590-5ac4753bda1c","Type":"ContainerStarted","Data":"f424e7513ec0e0aa5fffe62eeb72b57d6bda17b11a356b4440d33b066290beeb"} Mar 20 08:46:00.018004 master-0 kubenswrapper[7476]: I0320 08:46:00.017821 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:46:00.018004 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:46:00.018004 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:46:00.018004 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:46:00.018004 master-0 kubenswrapper[7476]: I0320 08:46:00.017909 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:46:01.017398 master-0 kubenswrapper[7476]: I0320 08:46:01.017187 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:46:01.017398 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:46:01.017398 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:46:01.017398 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:46:01.017398 master-0 kubenswrapper[7476]: I0320 08:46:01.017334 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Mar 20 08:46:01.040053 master-0 kubenswrapper[7476]: I0320 08:46:01.039927 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-gng67_a2a3df6e-e327-4e97-b8f0-f2d6cdd1e5f9/snapshot-controller/1.log" Mar 20 08:46:01.041578 master-0 kubenswrapper[7476]: I0320 08:46:01.041542 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-gng67_a2a3df6e-e327-4e97-b8f0-f2d6cdd1e5f9/snapshot-controller/0.log" Mar 20 08:46:01.041798 master-0 kubenswrapper[7476]: I0320 08:46:01.041761 7476 generic.go:334] "Generic (PLEG): container finished" podID="a2a3df6e-e327-4e97-b8f0-f2d6cdd1e5f9" containerID="5997f3136ed0533c039dd0e30c51e5f693f57ff7a1981a4e954ad4ffb2ba2c02" exitCode=1 Mar 20 08:46:01.041933 master-0 kubenswrapper[7476]: I0320 08:46:01.041864 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-gng67" event={"ID":"a2a3df6e-e327-4e97-b8f0-f2d6cdd1e5f9","Type":"ContainerDied","Data":"5997f3136ed0533c039dd0e30c51e5f693f57ff7a1981a4e954ad4ffb2ba2c02"} Mar 20 08:46:01.042094 master-0 kubenswrapper[7476]: I0320 08:46:01.042071 7476 scope.go:117] "RemoveContainer" containerID="3165ad3f4e3423cb37420a9aeda1215c8c5bbcc445272eb7b11a146edfa5a4f0" Mar 20 08:46:01.043076 master-0 kubenswrapper[7476]: I0320 08:46:01.043041 7476 scope.go:117] "RemoveContainer" containerID="5997f3136ed0533c039dd0e30c51e5f693f57ff7a1981a4e954ad4ffb2ba2c02" Mar 20 08:46:01.043723 master-0 kubenswrapper[7476]: E0320 08:46:01.043682 7476 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=snapshot-controller 
pod=csi-snapshot-controller-64854d9cff-gng67_openshift-cluster-storage-operator(a2a3df6e-e327-4e97-b8f0-f2d6cdd1e5f9)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-gng67" podUID="a2a3df6e-e327-4e97-b8f0-f2d6cdd1e5f9" Mar 20 08:46:01.176254 master-0 kubenswrapper[7476]: E0320 08:46:01.176169 7476 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 20 08:46:02.017347 master-0 kubenswrapper[7476]: I0320 08:46:02.017247 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:46:02.017347 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:46:02.017347 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:46:02.017347 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:46:02.017847 master-0 kubenswrapper[7476]: I0320 08:46:02.017364 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:46:02.053039 master-0 kubenswrapper[7476]: I0320 08:46:02.052990 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-gng67_a2a3df6e-e327-4e97-b8f0-f2d6cdd1e5f9/snapshot-controller/1.log" Mar 20 08:46:02.415443 master-0 kubenswrapper[7476]: E0320 08:46:02.415324 7476 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T08:45:52Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T08:45:52Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T08:45:52Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T08:45:52Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 20 08:46:02.991031 master-0 kubenswrapper[7476]: I0320 08:46:02.990897 7476 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-tf2gj" Mar 20 08:46:02.991908 master-0 kubenswrapper[7476]: I0320 08:46:02.991861 7476 scope.go:117] "RemoveContainer" containerID="ce2fcc1081bfcbeb7f4d07807c1a93a611637f696cdc2c93642a97a10714d449" Mar 20 08:46:03.018740 master-0 kubenswrapper[7476]: I0320 08:46:03.018422 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:46:03.018740 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:46:03.018740 master-0 
kubenswrapper[7476]: [+]process-running ok Mar 20 08:46:03.018740 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:46:03.019199 master-0 kubenswrapper[7476]: I0320 08:46:03.019034 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:46:03.488477 master-0 kubenswrapper[7476]: I0320 08:46:03.488366 7476 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-rnnfz" Mar 20 08:46:03.488477 master-0 kubenswrapper[7476]: I0320 08:46:03.488490 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-rnnfz" Mar 20 08:46:03.489524 master-0 kubenswrapper[7476]: I0320 08:46:03.489485 7476 scope.go:117] "RemoveContainer" containerID="fdbecd46c29424d901b7160c849f7507fe2bca8ead0c17c0f2a34bfa2349bd5b" Mar 20 08:46:03.489836 master-0 kubenswrapper[7476]: E0320 08:46:03.489785 7476 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=manager pod=operator-controller-controller-manager-57777556ff-rnnfz_openshift-operator-controller(bb7b640f-22be-41a9-8ab2-e7ae817e2eb0)\"" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-rnnfz" podUID="bb7b640f-22be-41a9-8ab2-e7ae817e2eb0" Mar 20 08:46:04.017670 master-0 kubenswrapper[7476]: I0320 08:46:04.017585 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:46:04.017670 master-0 
kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:46:04.017670 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:46:04.017670 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:46:04.018165 master-0 kubenswrapper[7476]: I0320 08:46:04.017684 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:46:04.075528 master-0 kubenswrapper[7476]: I0320 08:46:04.075376 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-6864dc98f7-tf2gj_08d9196b-b68f-421b-8754-bfbaa4020a97/manager/1.log" Mar 20 08:46:04.076173 master-0 kubenswrapper[7476]: I0320 08:46:04.076110 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-tf2gj" event={"ID":"08d9196b-b68f-421b-8754-bfbaa4020a97","Type":"ContainerStarted","Data":"fddcb721d2d443762155607f4a14cad4d4ea3bdb47b65fbf890c05cb02cccdd7"} Mar 20 08:46:04.076701 master-0 kubenswrapper[7476]: I0320 08:46:04.076623 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-tf2gj" Mar 20 08:46:04.346508 master-0 kubenswrapper[7476]: E0320 08:46:04.346243 7476 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{kube-controller-manager-master-0.189e7fe4d4d7af1c openshift-kube-controller-manager 9569 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-master-0,UID:8c753d068f364b16e3aeb8396b7d9f33,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 08:38:44 +0000 UTC,LastTimestamp:2026-03-20 08:44:02.232830144 +0000 UTC m=+523.201598710,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 20 08:46:05.018306 master-0 kubenswrapper[7476]: I0320 08:46:05.017937 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:46:05.018306 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:46:05.018306 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:46:05.018306 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:46:05.018306 master-0 kubenswrapper[7476]: I0320 08:46:05.018056 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:46:06.018486 master-0 kubenswrapper[7476]: I0320 08:46:06.018414 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:46:06.018486 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:46:06.018486 master-0 kubenswrapper[7476]: 
[+]process-running ok Mar 20 08:46:06.018486 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:46:06.018486 master-0 kubenswrapper[7476]: I0320 08:46:06.018470 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:46:07.017474 master-0 kubenswrapper[7476]: I0320 08:46:07.016999 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:46:07.017474 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:46:07.017474 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:46:07.017474 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:46:07.017474 master-0 kubenswrapper[7476]: I0320 08:46:07.017143 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:46:08.018034 master-0 kubenswrapper[7476]: I0320 08:46:08.017981 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:46:08.018034 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:46:08.018034 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:46:08.018034 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:46:08.018630 master-0 kubenswrapper[7476]: I0320 08:46:08.018051 7476 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:46:09.017299 master-0 kubenswrapper[7476]: I0320 08:46:09.017183 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:46:09.017299 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:46:09.017299 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:46:09.017299 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:46:09.017299 master-0 kubenswrapper[7476]: I0320 08:46:09.017258 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:46:10.018255 master-0 kubenswrapper[7476]: I0320 08:46:10.018059 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:46:10.018255 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:46:10.018255 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:46:10.018255 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:46:10.018255 master-0 kubenswrapper[7476]: I0320 08:46:10.018172 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 
20 08:46:11.018121 master-0 kubenswrapper[7476]: I0320 08:46:11.018020 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:46:11.018121 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:46:11.018121 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:46:11.018121 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:46:11.019316 master-0 kubenswrapper[7476]: I0320 08:46:11.018126 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:46:12.016980 master-0 kubenswrapper[7476]: I0320 08:46:12.016901 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:46:12.016980 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:46:12.016980 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:46:12.016980 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:46:12.017425 master-0 kubenswrapper[7476]: I0320 08:46:12.016994 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:46:12.142507 master-0 kubenswrapper[7476]: I0320 08:46:12.142434 7476 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-5c6485487f-897zl_14fe2256-ce3d-4c70-9e6b-0e3c7d4d96dc/machine-approver-controller/0.log" Mar 20 08:46:12.143219 master-0 kubenswrapper[7476]: I0320 08:46:12.143110 7476 generic.go:334] "Generic (PLEG): container finished" podID="14fe2256-ce3d-4c70-9e6b-0e3c7d4d96dc" containerID="45ed4d229b5e4ffda3ad9ee3a6c6c79dd79e664c69394337cb1c4fc4b2036f31" exitCode=255 Mar 20 08:46:12.143219 master-0 kubenswrapper[7476]: I0320 08:46:12.143175 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-897zl" event={"ID":"14fe2256-ce3d-4c70-9e6b-0e3c7d4d96dc","Type":"ContainerDied","Data":"45ed4d229b5e4ffda3ad9ee3a6c6c79dd79e664c69394337cb1c4fc4b2036f31"} Mar 20 08:46:12.144020 master-0 kubenswrapper[7476]: I0320 08:46:12.143986 7476 scope.go:117] "RemoveContainer" containerID="45ed4d229b5e4ffda3ad9ee3a6c6c79dd79e664c69394337cb1c4fc4b2036f31" Mar 20 08:46:12.238171 master-0 kubenswrapper[7476]: I0320 08:46:12.238054 7476 scope.go:117] "RemoveContainer" containerID="5997f3136ed0533c039dd0e30c51e5f693f57ff7a1981a4e954ad4ffb2ba2c02" Mar 20 08:46:12.239961 master-0 kubenswrapper[7476]: I0320 08:46:12.239898 7476 scope.go:117] "RemoveContainer" containerID="4557628b4e1a86ee2671291620562da3ce234a1e5a65125b7811c20080db0e77" Mar 20 08:46:12.241515 master-0 kubenswrapper[7476]: E0320 08:46:12.241411 7476 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-dknxr_openshift-ingress-operator(22f85e98-eb36-46b2-ab5d-7c21e060cba5)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-dknxr" podUID="22f85e98-eb36-46b2-ab5d-7c21e060cba5" Mar 20 08:46:12.416701 master-0 kubenswrapper[7476]: E0320 08:46:12.416643 7476 kubelet_node_status.go:585] "Error updating node status, will retry" 
err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 20 08:46:12.993820 master-0 kubenswrapper[7476]: I0320 08:46:12.993778 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-tf2gj" Mar 20 08:46:13.017570 master-0 kubenswrapper[7476]: I0320 08:46:13.017508 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:46:13.017570 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:46:13.017570 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:46:13.017570 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:46:13.018016 master-0 kubenswrapper[7476]: I0320 08:46:13.017588 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:46:13.161916 master-0 kubenswrapper[7476]: I0320 08:46:13.161853 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-5c6485487f-897zl_14fe2256-ce3d-4c70-9e6b-0e3c7d4d96dc/machine-approver-controller/0.log" Mar 20 08:46:13.162668 master-0 kubenswrapper[7476]: I0320 08:46:13.162611 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-897zl" event={"ID":"14fe2256-ce3d-4c70-9e6b-0e3c7d4d96dc","Type":"ContainerStarted","Data":"ff9b4472f4baf6a0787735489d4002af06866f480fa96bcc3697cf3d30594373"} Mar 20 08:46:13.165475 master-0 kubenswrapper[7476]: 
I0320 08:46:13.165451 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-gng67_a2a3df6e-e327-4e97-b8f0-f2d6cdd1e5f9/snapshot-controller/1.log"
Mar 20 08:46:13.165601 master-0 kubenswrapper[7476]: I0320 08:46:13.165579 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-gng67" event={"ID":"a2a3df6e-e327-4e97-b8f0-f2d6cdd1e5f9","Type":"ContainerStarted","Data":"da500517c96da6ce0f7b1a3855e1fdc43426690142a8613d46966e24e78f7dd6"}
Mar 20 08:46:14.017109 master-0 kubenswrapper[7476]: I0320 08:46:14.016949 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:46:14.017109 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:46:14.017109 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:46:14.017109 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:46:14.017109 master-0 kubenswrapper[7476]: I0320 08:46:14.017052 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:46:14.832225 master-0 kubenswrapper[7476]: E0320 08:46:14.832130 7476 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0"
Mar 20 08:46:15.016055 master-0 kubenswrapper[7476]: I0320 08:46:15.015952 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:46:15.016055 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:46:15.016055 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:46:15.016055 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:46:15.016055 master-0 kubenswrapper[7476]: I0320 08:46:15.016020 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:46:15.182416 master-0 kubenswrapper[7476]: I0320 08:46:15.182365 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"3102a41a904a505c496c9e6ff056d38d7935cf53ed7153f14bbd8b5057d5541a"}
Mar 20 08:46:15.182677 master-0 kubenswrapper[7476]: I0320 08:46:15.182640 7476 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="c17bc281-675f-4a69-ba8a-a2476e95c8c8"
Mar 20 08:46:15.182677 master-0 kubenswrapper[7476]: I0320 08:46:15.182670 7476 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="c17bc281-675f-4a69-ba8a-a2476e95c8c8"
Mar 20 08:46:16.018054 master-0 kubenswrapper[7476]: I0320 08:46:16.017973 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:46:16.018054 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:46:16.018054 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:46:16.018054 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:46:16.019412 master-0 kubenswrapper[7476]: I0320 08:46:16.018059 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:46:16.194436 master-0 kubenswrapper[7476]: I0320 08:46:16.194329 7476 generic.go:334] "Generic (PLEG): container finished" podID="094204df314fe45bd5af12ca1b4622bb" containerID="3102a41a904a505c496c9e6ff056d38d7935cf53ed7153f14bbd8b5057d5541a" exitCode=0
Mar 20 08:46:16.194436 master-0 kubenswrapper[7476]: I0320 08:46:16.194385 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerDied","Data":"3102a41a904a505c496c9e6ff056d38d7935cf53ed7153f14bbd8b5057d5541a"}
Mar 20 08:46:17.016611 master-0 kubenswrapper[7476]: I0320 08:46:17.016531 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:46:17.016611 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:46:17.016611 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:46:17.016611 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:46:17.016874 master-0 kubenswrapper[7476]: I0320 08:46:17.016632 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:46:18.016429 master-0 kubenswrapper[7476]: I0320 08:46:18.016352 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:46:18.016429 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:46:18.016429 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:46:18.016429 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:46:18.017477 master-0 kubenswrapper[7476]: I0320 08:46:18.017435 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:46:18.177598 master-0 kubenswrapper[7476]: E0320 08:46:18.177489 7476 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Mar 20 08:46:18.214215 master-0 kubenswrapper[7476]: I0320 08:46:18.214178 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_8c753d068f364b16e3aeb8396b7d9f33/kube-controller-manager/0.log"
Mar 20 08:46:18.214543 master-0 kubenswrapper[7476]: I0320 08:46:18.214517 7476 generic.go:334] "Generic (PLEG): container finished" podID="8c753d068f364b16e3aeb8396b7d9f33" containerID="d5209b2ed23676968405f84f9fdd80496d17987f9448303167bee1c204c5000c" exitCode=0
Mar 20 08:46:18.214636 master-0 kubenswrapper[7476]: I0320 08:46:18.214566 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"8c753d068f364b16e3aeb8396b7d9f33","Type":"ContainerDied","Data":"d5209b2ed23676968405f84f9fdd80496d17987f9448303167bee1c204c5000c"}
Mar 20 08:46:18.215343 master-0 kubenswrapper[7476]: I0320 08:46:18.215324 7476 scope.go:117] "RemoveContainer" containerID="d5209b2ed23676968405f84f9fdd80496d17987f9448303167bee1c204c5000c"
Mar 20 08:46:18.237525 master-0 kubenswrapper[7476]: I0320 08:46:18.237433 7476 scope.go:117] "RemoveContainer" containerID="fdbecd46c29424d901b7160c849f7507fe2bca8ead0c17c0f2a34bfa2349bd5b"
Mar 20 08:46:19.017459 master-0 kubenswrapper[7476]: I0320 08:46:19.017371 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:46:19.017459 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:46:19.017459 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:46:19.017459 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:46:19.018778 master-0 kubenswrapper[7476]: I0320 08:46:19.017465 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:46:19.225142 master-0 kubenswrapper[7476]: I0320 08:46:19.225091 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_8c753d068f364b16e3aeb8396b7d9f33/kube-controller-manager/0.log"
Mar 20 08:46:19.225422 master-0 kubenswrapper[7476]: I0320 08:46:19.225260 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"8c753d068f364b16e3aeb8396b7d9f33","Type":"ContainerStarted","Data":"15aaf83549a716dacd641da11d4b4c1513954bf4a644b2d6d983003f8312fe3f"}
Mar 20 08:46:19.227646 master-0 kubenswrapper[7476]: I0320 08:46:19.227604 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-57777556ff-rnnfz_bb7b640f-22be-41a9-8ab2-e7ae817e2eb0/manager/1.log"
Mar 20 08:46:19.228143 master-0 kubenswrapper[7476]: I0320 08:46:19.228089 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-rnnfz" event={"ID":"bb7b640f-22be-41a9-8ab2-e7ae817e2eb0","Type":"ContainerStarted","Data":"f0130280798962f1e4594514991e3d14785663897b5381945aa07cd2f793d6cf"}
Mar 20 08:46:19.228345 master-0 kubenswrapper[7476]: I0320 08:46:19.228317 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-rnnfz"
Mar 20 08:46:20.017473 master-0 kubenswrapper[7476]: I0320 08:46:20.017350 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:46:20.017473 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:46:20.017473 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:46:20.017473 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:46:20.017473 master-0 kubenswrapper[7476]: I0320 08:46:20.017441 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:46:21.017799 master-0 kubenswrapper[7476]: I0320 08:46:21.017708 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:46:21.017799 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:46:21.017799 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:46:21.017799 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:46:21.019045 master-0 kubenswrapper[7476]: I0320 08:46:21.017799 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:46:22.017920 master-0 kubenswrapper[7476]: I0320 08:46:22.017850 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:46:22.017920 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:46:22.017920 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:46:22.017920 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:46:22.019067 master-0 kubenswrapper[7476]: I0320 08:46:22.019020 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:46:22.256339 master-0 kubenswrapper[7476]: I0320 08:46:22.256254 7476 generic.go:334] "Generic (PLEG): container finished" podID="210dd7f0-d1c0-407a-b89b-f11ef605e5df" containerID="80eb123c688aa3fa3410485be400247180c54ec6ea64ffab5e44c11edb58320f" exitCode=0
Mar 20 08:46:22.256656 master-0 kubenswrapper[7476]: I0320 08:46:22.256350 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-crrdk" event={"ID":"210dd7f0-d1c0-407a-b89b-f11ef605e5df","Type":"ContainerDied","Data":"80eb123c688aa3fa3410485be400247180c54ec6ea64ffab5e44c11edb58320f"}
Mar 20 08:46:22.256881 master-0 kubenswrapper[7476]: I0320 08:46:22.256836 7476 scope.go:117] "RemoveContainer" containerID="536065a4d8759d271003b36465db4bd4965a5a320e8baa9df238dec6c8adc25f"
Mar 20 08:46:22.257762 master-0 kubenswrapper[7476]: I0320 08:46:22.257690 7476 scope.go:117] "RemoveContainer" containerID="80eb123c688aa3fa3410485be400247180c54ec6ea64ffab5e44c11edb58320f"
Mar 20 08:46:22.258686 master-0 kubenswrapper[7476]: E0320 08:46:22.258169 7476 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-cluster-manager pod=ovnkube-control-plane-57f769d897-crrdk_openshift-ovn-kubernetes(210dd7f0-d1c0-407a-b89b-f11ef605e5df)\"" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-crrdk" podUID="210dd7f0-d1c0-407a-b89b-f11ef605e5df"
Mar 20 08:46:22.417536 master-0 kubenswrapper[7476]: E0320 08:46:22.416918 7476 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 20 08:46:23.017858 master-0 kubenswrapper[7476]: I0320 08:46:23.017748 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:46:23.017858 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:46:23.017858 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:46:23.017858 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:46:23.017858 master-0 kubenswrapper[7476]: I0320 08:46:23.017838 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:46:23.491222 master-0 kubenswrapper[7476]: I0320 08:46:23.491106 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-rnnfz"
Mar 20 08:46:23.779423 master-0 kubenswrapper[7476]: I0320 08:46:23.779244 7476 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 20 08:46:23.779423 master-0 kubenswrapper[7476]: I0320 08:46:23.779366 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 20 08:46:24.017539 master-0 kubenswrapper[7476]: I0320 08:46:24.017431 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:46:24.017539 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:46:24.017539 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:46:24.017539 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:46:24.017944 master-0 kubenswrapper[7476]: I0320 08:46:24.017594 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:46:25.016853 master-0 kubenswrapper[7476]: I0320 08:46:25.016735 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:46:25.016853 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:46:25.016853 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:46:25.016853 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:46:25.016853 master-0 kubenswrapper[7476]: I0320 08:46:25.016843 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:46:25.236899 master-0 kubenswrapper[7476]: I0320 08:46:25.236803 7476 scope.go:117] "RemoveContainer" containerID="4557628b4e1a86ee2671291620562da3ce234a1e5a65125b7811c20080db0e77"
Mar 20 08:46:26.017802 master-0 kubenswrapper[7476]: I0320 08:46:26.017602 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:46:26.017802 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:46:26.017802 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:46:26.017802 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:46:26.017802 master-0 kubenswrapper[7476]: I0320 08:46:26.017724 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:46:26.286790 master-0 kubenswrapper[7476]: I0320 08:46:26.286627 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-b25f2_f202273a-b111-46ce-b404-7e481d2c7ff9/cluster-baremetal-operator/1.log"
Mar 20 08:46:26.287675 master-0 kubenswrapper[7476]: I0320 08:46:26.287631 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-b25f2_f202273a-b111-46ce-b404-7e481d2c7ff9/cluster-baremetal-operator/0.log"
Mar 20 08:46:26.287799 master-0 kubenswrapper[7476]: I0320 08:46:26.287675 7476 generic.go:334] "Generic (PLEG): container finished" podID="f202273a-b111-46ce-b404-7e481d2c7ff9" containerID="4dddcd2acd552341d271a22bee7c13f8c1ffb72ebb5cc9441b8049d3b6ece1f7" exitCode=1
Mar 20 08:46:26.287799 master-0 kubenswrapper[7476]: I0320 08:46:26.287730 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-b25f2" event={"ID":"f202273a-b111-46ce-b404-7e481d2c7ff9","Type":"ContainerDied","Data":"4dddcd2acd552341d271a22bee7c13f8c1ffb72ebb5cc9441b8049d3b6ece1f7"}
Mar 20 08:46:26.287799 master-0 kubenswrapper[7476]: I0320 08:46:26.287764 7476 scope.go:117] "RemoveContainer" containerID="8c083804959a88c9c849b428e0b936db72af00ecf148631a285d481d8c54097f"
Mar 20 08:46:26.290103 master-0 kubenswrapper[7476]: I0320 08:46:26.290058 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-dknxr_22f85e98-eb36-46b2-ab5d-7c21e060cba5/ingress-operator/3.log"
Mar 20 08:46:26.291479 master-0 kubenswrapper[7476]: I0320 08:46:26.290498 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-dknxr" event={"ID":"22f85e98-eb36-46b2-ab5d-7c21e060cba5","Type":"ContainerStarted","Data":"3aae9bbf36ed2a17618b0c45a78846786d8690aa98bce9f3e1b7b0243ecc47f2"}
Mar 20 08:46:26.291479 master-0 kubenswrapper[7476]: I0320 08:46:26.290934 7476 scope.go:117] "RemoveContainer" containerID="4dddcd2acd552341d271a22bee7c13f8c1ffb72ebb5cc9441b8049d3b6ece1f7"
Mar 20 08:46:26.291479 master-0 kubenswrapper[7476]: E0320 08:46:26.291385 7476 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-baremetal-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cluster-baremetal-operator pod=cluster-baremetal-operator-6f69995874-b25f2_openshift-machine-api(f202273a-b111-46ce-b404-7e481d2c7ff9)\"" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-b25f2" podUID="f202273a-b111-46ce-b404-7e481d2c7ff9"
Mar 20 08:46:26.293777 master-0 kubenswrapper[7476]: I0320 08:46:26.293741 7476 generic.go:334] "Generic (PLEG): container finished" podID="c200f016-3922-4e90-9061-92fd8c3fad2b" containerID="b3eca12faf8457705f48e50666d3636087af70a19b92da4128f308d07a0de23b" exitCode=0
Mar 20 08:46:26.293777 master-0 kubenswrapper[7476]: I0320 08:46:26.293782 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b46449cf-9fccc" event={"ID":"c200f016-3922-4e90-9061-92fd8c3fad2b","Type":"ContainerDied","Data":"b3eca12faf8457705f48e50666d3636087af70a19b92da4128f308d07a0de23b"}
Mar 20 08:46:26.294240 master-0 kubenswrapper[7476]: I0320 08:46:26.294209 7476 scope.go:117] "RemoveContainer" containerID="b3eca12faf8457705f48e50666d3636087af70a19b92da4128f308d07a0de23b"
Mar 20 08:46:26.294474 master-0 kubenswrapper[7476]: E0320 08:46:26.294440 7476 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=controller-manager pod=controller-manager-65b46449cf-9fccc_openshift-controller-manager(c200f016-3922-4e90-9061-92fd8c3fad2b)\"" pod="openshift-controller-manager/controller-manager-65b46449cf-9fccc" podUID="c200f016-3922-4e90-9061-92fd8c3fad2b"
Mar 20 08:46:26.314329 master-0 kubenswrapper[7476]: I0320 08:46:26.314303 7476 scope.go:117] "RemoveContainer" containerID="eacf5e052b386f63888a6a9a4f2ed8b8355f388306364efeef7926bdd5d16f5e"
Mar 20 08:46:26.779625 master-0 kubenswrapper[7476]: I0320 08:46:26.779549 7476 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 20 08:46:26.779908 master-0 kubenswrapper[7476]: I0320 08:46:26.779629 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="8c753d068f364b16e3aeb8396b7d9f33" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 20 08:46:27.017416 master-0 kubenswrapper[7476]: I0320 08:46:27.017342 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:46:27.017416 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:46:27.017416 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:46:27.017416 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:46:27.018718 master-0 kubenswrapper[7476]: I0320 08:46:27.017429 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:46:27.307477 master-0 kubenswrapper[7476]: I0320 08:46:27.307408 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-b25f2_f202273a-b111-46ce-b404-7e481d2c7ff9/cluster-baremetal-operator/1.log"
Mar 20 08:46:28.015673 master-0 kubenswrapper[7476]: I0320 08:46:28.015596 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:46:28.015673 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:46:28.015673 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:46:28.015673 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:46:28.015673 master-0 kubenswrapper[7476]: I0320 08:46:28.015676 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:46:29.017340 master-0 kubenswrapper[7476]: I0320 08:46:29.017247 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:46:29.017340 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:46:29.017340 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:46:29.017340 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:46:29.018087 master-0 kubenswrapper[7476]: I0320 08:46:29.017336 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:46:30.016978 master-0 kubenswrapper[7476]: I0320 08:46:30.016786 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:46:30.016978 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:46:30.016978 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:46:30.016978 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:46:30.016978 master-0 kubenswrapper[7476]: I0320 08:46:30.016899 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:46:31.018420 master-0 kubenswrapper[7476]: I0320 08:46:31.018254 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:46:31.018420 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:46:31.018420 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:46:31.018420 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:46:31.018420 master-0 kubenswrapper[7476]: I0320 08:46:31.018403 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:46:31.569501 master-0 kubenswrapper[7476]: I0320 08:46:31.569413 7476 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-controller-manager/controller-manager-65b46449cf-9fccc"
Mar 20 08:46:31.569501 master-0 kubenswrapper[7476]: I0320 08:46:31.569491 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-65b46449cf-9fccc"
Mar 20 08:46:31.570123 master-0 kubenswrapper[7476]: I0320 08:46:31.570076 7476 scope.go:117] "RemoveContainer" containerID="b3eca12faf8457705f48e50666d3636087af70a19b92da4128f308d07a0de23b"
Mar 20 08:46:31.570529 master-0 kubenswrapper[7476]: E0320 08:46:31.570479 7476 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=controller-manager pod=controller-manager-65b46449cf-9fccc_openshift-controller-manager(c200f016-3922-4e90-9061-92fd8c3fad2b)\"" pod="openshift-controller-manager/controller-manager-65b46449cf-9fccc" podUID="c200f016-3922-4e90-9061-92fd8c3fad2b"
Mar 20 08:46:32.017805 master-0 kubenswrapper[7476]: I0320 08:46:32.017685 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:46:32.017805 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:46:32.017805 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:46:32.017805 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:46:32.018344 master-0 kubenswrapper[7476]: I0320 08:46:32.017823 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:46:32.349549 master-0 kubenswrapper[7476]: I0320 08:46:32.349465 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6f97756bc8-tkwh6_a86af6a2-55a9-4c4e-8caf-1f51fedb23f5/control-plane-machine-set-operator/0.log"
Mar 20 08:46:32.350382 master-0 kubenswrapper[7476]: I0320 08:46:32.349596 7476 generic.go:334] "Generic (PLEG): container finished" podID="a86af6a2-55a9-4c4e-8caf-1f51fedb23f5" containerID="3033684921b500c0cdc5a887bfabd7fc5e3c9f8cea2dfed120b0981d20756634" exitCode=1
Mar 20 08:46:32.350382 master-0 kubenswrapper[7476]: I0320 08:46:32.349642 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-tkwh6" event={"ID":"a86af6a2-55a9-4c4e-8caf-1f51fedb23f5","Type":"ContainerDied","Data":"3033684921b500c0cdc5a887bfabd7fc5e3c9f8cea2dfed120b0981d20756634"}
Mar 20 08:46:32.350536 master-0 kubenswrapper[7476]: I0320 08:46:32.350387 7476 scope.go:117] "RemoveContainer" containerID="3033684921b500c0cdc5a887bfabd7fc5e3c9f8cea2dfed120b0981d20756634"
Mar 20 08:46:32.417629 master-0 kubenswrapper[7476]: E0320 08:46:32.417535 7476 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 20 08:46:33.016954 master-0 kubenswrapper[7476]: I0320 08:46:33.016847 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:46:33.016954 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:46:33.016954 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:46:33.016954 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:46:33.016954 master-0 kubenswrapper[7476]: I0320 08:46:33.016937 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:46:33.361094 master-0 kubenswrapper[7476]: I0320 08:46:33.361050 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6f97756bc8-tkwh6_a86af6a2-55a9-4c4e-8caf-1f51fedb23f5/control-plane-machine-set-operator/0.log"
Mar 20 08:46:33.362043 master-0 kubenswrapper[7476]: I0320 08:46:33.362002 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-tkwh6" event={"ID":"a86af6a2-55a9-4c4e-8caf-1f51fedb23f5","Type":"ContainerStarted","Data":"72528580eee79b6c6db5f632d94bdc5cda1b2d88e86bcc96303c96a1539c51c9"}
Mar 20 08:46:34.017516 master-0 kubenswrapper[7476]: I0320 08:46:34.017429 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:46:34.017516 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:46:34.017516 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:46:34.017516 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:46:34.017996 master-0 kubenswrapper[7476]: I0320 08:46:34.017531 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:46:34.237730 master-0 kubenswrapper[7476]: I0320 08:46:34.237646 7476 scope.go:117] "RemoveContainer" containerID="80eb123c688aa3fa3410485be400247180c54ec6ea64ffab5e44c11edb58320f"
Mar 20 08:46:35.181101 master-0 kubenswrapper[7476]: E0320 08:46:35.181031 7476 controller.go:145] "Failed to ensure lease exists, will retry" err="the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io master-0)" interval="7s"
Mar 20 08:46:35.771816 master-0 kubenswrapper[7476]: I0320 08:46:35.771732 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:46:35.771816 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:46:35.771816 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:46:35.771816 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:46:35.771816 master-0 kubenswrapper[7476]: I0320 08:46:35.771780 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:46:35.779229 master-0 kubenswrapper[7476]: I0320 08:46:35.779186 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-crrdk" event={"ID":"210dd7f0-d1c0-407a-b89b-f11ef605e5df","Type":"ContainerStarted","Data":"87bb88b58dcaa56043bab79cbae67bed022b306a8dc237363f63444aab0218d1"}
Mar 20 08:46:36.016716 master-0 kubenswrapper[7476]: I0320 08:46:36.016627 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:46:36.016716 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:46:36.016716 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:46:36.016716 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:46:36.017170 master-0 kubenswrapper[7476]: I0320 08:46:36.016752 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:46:36.780606 master-0 kubenswrapper[7476]: I0320 08:46:36.780461 7476 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 20 08:46:36.781432 master-0 kubenswrapper[7476]: I0320 08:46:36.780628 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="8c753d068f364b16e3aeb8396b7d9f33" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 20 08:46:37.016434 master-0 kubenswrapper[7476]: I0320 08:46:37.016356 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:46:37.016434 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:46:37.016434 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:46:37.016434 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:46:37.016760 master-0 kubenswrapper[7476]: I0320 08:46:37.016440 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:46:38.016748 master-0 kubenswrapper[7476]: I0320 08:46:38.016656 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:46:38.016748 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:46:38.016748 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:46:38.016748 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:46:38.016748 master-0 kubenswrapper[7476]: I0320 08:46:38.016712 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:46:38.350298 master-0 kubenswrapper[7476]: E0320 08:46:38.350128 7476 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{kube-controller-manager-master-0.189e7fe4d57f84e4 openshift-kube-controller-manager 9570 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-master-0,UID:8c753d068f364b16e3aeb8396b7d9f33,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 08:38:44 +0000 UTC,LastTimestamp:2026-03-20 08:44:02.242406867 +0000 UTC
m=+523.211175433,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 20 08:46:39.016535 master-0 kubenswrapper[7476]: I0320 08:46:39.016436 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:46:39.016535 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:46:39.016535 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:46:39.016535 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:46:39.017118 master-0 kubenswrapper[7476]: I0320 08:46:39.016532 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:46:40.018047 master-0 kubenswrapper[7476]: I0320 08:46:40.017848 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:46:40.018047 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:46:40.018047 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:46:40.018047 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:46:40.018047 master-0 kubenswrapper[7476]: I0320 08:46:40.017950 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:46:41.017232 master-0 kubenswrapper[7476]: 
I0320 08:46:41.017144 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:46:41.017232 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:46:41.017232 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:46:41.017232 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:46:41.017724 master-0 kubenswrapper[7476]: I0320 08:46:41.017231 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:46:41.237160 master-0 kubenswrapper[7476]: I0320 08:46:41.237067 7476 scope.go:117] "RemoveContainer" containerID="4dddcd2acd552341d271a22bee7c13f8c1ffb72ebb5cc9441b8049d3b6ece1f7" Mar 20 08:46:41.831851 master-0 kubenswrapper[7476]: I0320 08:46:41.831783 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-b25f2_f202273a-b111-46ce-b404-7e481d2c7ff9/cluster-baremetal-operator/1.log" Mar 20 08:46:41.833015 master-0 kubenswrapper[7476]: I0320 08:46:41.832954 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-b25f2" event={"ID":"f202273a-b111-46ce-b404-7e481d2c7ff9","Type":"ContainerStarted","Data":"052c5ab7353e85c711ba5bfca92fff712af9b1bed63f53526dee82d528399bb3"} Mar 20 08:46:42.017097 master-0 kubenswrapper[7476]: I0320 08:46:42.017029 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 
08:46:42.017097 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:46:42.017097 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:46:42.017097 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:46:42.017750 master-0 kubenswrapper[7476]: I0320 08:46:42.017702 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:46:42.237704 master-0 kubenswrapper[7476]: I0320 08:46:42.237610 7476 scope.go:117] "RemoveContainer" containerID="b3eca12faf8457705f48e50666d3636087af70a19b92da4128f308d07a0de23b" Mar 20 08:46:42.418654 master-0 kubenswrapper[7476]: E0320 08:46:42.418575 7476 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 20 08:46:42.418654 master-0 kubenswrapper[7476]: E0320 08:46:42.418635 7476 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 20 08:46:42.845259 master-0 kubenswrapper[7476]: I0320 08:46:42.845178 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-gng67_a2a3df6e-e327-4e97-b8f0-f2d6cdd1e5f9/snapshot-controller/2.log" Mar 20 08:46:42.846296 master-0 kubenswrapper[7476]: I0320 08:46:42.846220 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-gng67_a2a3df6e-e327-4e97-b8f0-f2d6cdd1e5f9/snapshot-controller/1.log" Mar 20 08:46:42.846406 master-0 kubenswrapper[7476]: I0320 08:46:42.846365 7476 generic.go:334] "Generic (PLEG): container finished" 
podID="a2a3df6e-e327-4e97-b8f0-f2d6cdd1e5f9" containerID="da500517c96da6ce0f7b1a3855e1fdc43426690142a8613d46966e24e78f7dd6" exitCode=1 Mar 20 08:46:42.846552 master-0 kubenswrapper[7476]: I0320 08:46:42.846491 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-gng67" event={"ID":"a2a3df6e-e327-4e97-b8f0-f2d6cdd1e5f9","Type":"ContainerDied","Data":"da500517c96da6ce0f7b1a3855e1fdc43426690142a8613d46966e24e78f7dd6"} Mar 20 08:46:42.846657 master-0 kubenswrapper[7476]: I0320 08:46:42.846594 7476 scope.go:117] "RemoveContainer" containerID="5997f3136ed0533c039dd0e30c51e5f693f57ff7a1981a4e954ad4ffb2ba2c02" Mar 20 08:46:42.849435 master-0 kubenswrapper[7476]: I0320 08:46:42.849380 7476 scope.go:117] "RemoveContainer" containerID="da500517c96da6ce0f7b1a3855e1fdc43426690142a8613d46966e24e78f7dd6" Mar 20 08:46:42.850134 master-0 kubenswrapper[7476]: I0320 08:46:42.850073 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b46449cf-9fccc" event={"ID":"c200f016-3922-4e90-9061-92fd8c3fad2b","Type":"ContainerStarted","Data":"7ee96cd8cd8c312aa8b33e1833e8a4d908946d2e36d893d8314f3560d327a84e"} Mar 20 08:46:42.850247 master-0 kubenswrapper[7476]: E0320 08:46:42.850070 7476 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-64854d9cff-gng67_openshift-cluster-storage-operator(a2a3df6e-e327-4e97-b8f0-f2d6cdd1e5f9)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-gng67" podUID="a2a3df6e-e327-4e97-b8f0-f2d6cdd1e5f9" Mar 20 08:46:42.850685 master-0 kubenswrapper[7476]: I0320 08:46:42.850650 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-65b46449cf-9fccc" Mar 20 08:46:42.859431 
master-0 kubenswrapper[7476]: I0320 08:46:42.859360 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-65b46449cf-9fccc" Mar 20 08:46:43.018198 master-0 kubenswrapper[7476]: I0320 08:46:43.018078 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:46:43.018198 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:46:43.018198 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:46:43.018198 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:46:43.018198 master-0 kubenswrapper[7476]: I0320 08:46:43.018167 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:46:43.860397 master-0 kubenswrapper[7476]: I0320 08:46:43.860334 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-gng67_a2a3df6e-e327-4e97-b8f0-f2d6cdd1e5f9/snapshot-controller/2.log" Mar 20 08:46:44.017167 master-0 kubenswrapper[7476]: I0320 08:46:44.017108 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:46:44.017167 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:46:44.017167 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:46:44.017167 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:46:44.017167 master-0 kubenswrapper[7476]: I0320 
08:46:44.017170 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:46:45.016287 master-0 kubenswrapper[7476]: I0320 08:46:45.016204 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:46:45.016287 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:46:45.016287 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:46:45.016287 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:46:45.016873 master-0 kubenswrapper[7476]: I0320 08:46:45.016367 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:46:46.017566 master-0 kubenswrapper[7476]: I0320 08:46:46.017453 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:46:46.017566 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:46:46.017566 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:46:46.017566 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:46:46.018534 master-0 kubenswrapper[7476]: I0320 08:46:46.017591 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500" Mar 20 08:46:46.779432 master-0 kubenswrapper[7476]: I0320 08:46:46.779242 7476 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 20 08:46:46.779432 master-0 kubenswrapper[7476]: I0320 08:46:46.779411 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="8c753d068f364b16e3aeb8396b7d9f33" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 20 08:46:46.779906 master-0 kubenswrapper[7476]: I0320 08:46:46.779496 7476 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 20 08:46:46.780736 master-0 kubenswrapper[7476]: I0320 08:46:46.780652 7476 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"15aaf83549a716dacd641da11d4b4c1513954bf4a644b2d6d983003f8312fe3f"} pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Mar 20 08:46:46.780926 master-0 kubenswrapper[7476]: I0320 08:46:46.780862 7476 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="8c753d068f364b16e3aeb8396b7d9f33" containerName="cluster-policy-controller" 
containerID="cri-o://15aaf83549a716dacd641da11d4b4c1513954bf4a644b2d6d983003f8312fe3f" gracePeriod=30 Mar 20 08:46:47.017327 master-0 kubenswrapper[7476]: I0320 08:46:47.017056 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:46:47.017327 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:46:47.017327 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:46:47.017327 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:46:47.017327 master-0 kubenswrapper[7476]: I0320 08:46:47.017114 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:46:47.017327 master-0 kubenswrapper[7476]: I0320 08:46:47.017163 7476 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" Mar 20 08:46:47.019005 master-0 kubenswrapper[7476]: I0320 08:46:47.017880 7476 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"11cabf5eb98c82a93ca52ddd09d57840189a4d526c1e14f354cfdcf56ad250fe"} pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" containerMessage="Container router failed startup probe, will be restarted" Mar 20 08:46:47.019005 master-0 kubenswrapper[7476]: I0320 08:46:47.017931 7476 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" containerID="cri-o://11cabf5eb98c82a93ca52ddd09d57840189a4d526c1e14f354cfdcf56ad250fe" gracePeriod=3600 Mar 20 
08:46:47.894928 master-0 kubenswrapper[7476]: I0320 08:46:47.894828 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_8c753d068f364b16e3aeb8396b7d9f33/cluster-policy-controller/1.log" Mar 20 08:46:47.897827 master-0 kubenswrapper[7476]: I0320 08:46:47.897766 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_8c753d068f364b16e3aeb8396b7d9f33/kube-controller-manager/0.log" Mar 20 08:46:47.897992 master-0 kubenswrapper[7476]: I0320 08:46:47.897835 7476 generic.go:334] "Generic (PLEG): container finished" podID="8c753d068f364b16e3aeb8396b7d9f33" containerID="15aaf83549a716dacd641da11d4b4c1513954bf4a644b2d6d983003f8312fe3f" exitCode=255 Mar 20 08:46:47.897992 master-0 kubenswrapper[7476]: I0320 08:46:47.897877 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"8c753d068f364b16e3aeb8396b7d9f33","Type":"ContainerDied","Data":"15aaf83549a716dacd641da11d4b4c1513954bf4a644b2d6d983003f8312fe3f"} Mar 20 08:46:47.897992 master-0 kubenswrapper[7476]: I0320 08:46:47.897920 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"8c753d068f364b16e3aeb8396b7d9f33","Type":"ContainerStarted","Data":"a3a22217124e33f63e707c3ba23be13753cff8613bac1adf6bffb3189d19c06c"} Mar 20 08:46:47.897992 master-0 kubenswrapper[7476]: I0320 08:46:47.897948 7476 scope.go:117] "RemoveContainer" containerID="d5209b2ed23676968405f84f9fdd80496d17987f9448303167bee1c204c5000c" Mar 20 08:46:48.920989 master-0 kubenswrapper[7476]: I0320 08:46:48.920896 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_8c753d068f364b16e3aeb8396b7d9f33/cluster-policy-controller/1.log" Mar 20 08:46:48.922705 master-0 
kubenswrapper[7476]: I0320 08:46:48.922654 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_8c753d068f364b16e3aeb8396b7d9f33/kube-controller-manager/0.log" Mar 20 08:46:49.184838 master-0 kubenswrapper[7476]: E0320 08:46:49.184682 7476 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Mar 20 08:46:49.248751 master-0 kubenswrapper[7476]: I0320 08:46:49.248649 7476 status_manager.go:851] "Failed to get status for pod" podUID="8c753d068f364b16e3aeb8396b7d9f33" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-controller-manager-master-0)" Mar 20 08:46:49.930167 master-0 kubenswrapper[7476]: I0320 08:46:49.929987 7476 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="c17bc281-675f-4a69-ba8a-a2476e95c8c8" Mar 20 08:46:49.930167 master-0 kubenswrapper[7476]: I0320 08:46:49.930033 7476 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="c17bc281-675f-4a69-ba8a-a2476e95c8c8" Mar 20 08:46:52.182296 master-0 kubenswrapper[7476]: E0320 08:46:52.182103 7476 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 20 08:46:53.779735 master-0 kubenswrapper[7476]: I0320 08:46:53.779648 7476 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 20 08:46:53.779735 master-0 kubenswrapper[7476]: I0320 08:46:53.779712 7476 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 20 08:46:56.780484 master-0 kubenswrapper[7476]: I0320 08:46:56.780422 7476 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 20 08:46:56.781067 master-0 kubenswrapper[7476]: I0320 08:46:56.780504 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="8c753d068f364b16e3aeb8396b7d9f33" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 20 08:46:57.236924 master-0 kubenswrapper[7476]: I0320 08:46:57.236825 7476 scope.go:117] "RemoveContainer" containerID="da500517c96da6ce0f7b1a3855e1fdc43426690142a8613d46966e24e78f7dd6" Mar 20 08:46:57.237347 master-0 kubenswrapper[7476]: E0320 08:46:57.237244 7476 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-64854d9cff-gng67_openshift-cluster-storage-operator(a2a3df6e-e327-4e97-b8f0-f2d6cdd1e5f9)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-gng67" podUID="a2a3df6e-e327-4e97-b8f0-f2d6cdd1e5f9" Mar 20 08:47:02.678697 master-0 kubenswrapper[7476]: E0320 08:47:02.678597 7476 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T08:46:52Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T08:46:52Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T08:46:52Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T08:46:52Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 20 08:47:06.780423 master-0 kubenswrapper[7476]: I0320 08:47:06.780317 7476 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 20 08:47:06.781313 master-0 kubenswrapper[7476]: I0320 08:47:06.780422 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="8c753d068f364b16e3aeb8396b7d9f33" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded 
while awaiting headers)" Mar 20 08:47:09.184216 master-0 kubenswrapper[7476]: E0320 08:47:09.184030 7476 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 20 08:47:11.237112 master-0 kubenswrapper[7476]: I0320 08:47:11.237046 7476 scope.go:117] "RemoveContainer" containerID="da500517c96da6ce0f7b1a3855e1fdc43426690142a8613d46966e24e78f7dd6" Mar 20 08:47:12.123363 master-0 kubenswrapper[7476]: I0320 08:47:12.123190 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-gng67_a2a3df6e-e327-4e97-b8f0-f2d6cdd1e5f9/snapshot-controller/2.log" Mar 20 08:47:12.123363 master-0 kubenswrapper[7476]: I0320 08:47:12.123327 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-gng67" event={"ID":"a2a3df6e-e327-4e97-b8f0-f2d6cdd1e5f9","Type":"ContainerStarted","Data":"43bec40b593829fc4ae8b2676c3d74b6d0bc176c4e642877e74797d8bc72bb1e"} Mar 20 08:47:12.353225 master-0 kubenswrapper[7476]: E0320 08:47:12.353052 7476 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189e802f8edfd12e kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:c83737980b9ee109184b1d78e942cf36,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:BackOff,Message:Back-off restarting failed container kube-scheduler in pod 
bootstrap-kube-scheduler-master-0_kube-system(c83737980b9ee109184b1d78e942cf36),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 08:44:04.979405102 +0000 UTC m=+525.948173638,LastTimestamp:2026-03-20 08:44:04.979405102 +0000 UTC m=+525.948173638,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 20 08:47:12.679774 master-0 kubenswrapper[7476]: E0320 08:47:12.679635 7476 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 20 08:47:16.780320 master-0 kubenswrapper[7476]: I0320 08:47:16.780145 7476 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 20 08:47:16.781253 master-0 kubenswrapper[7476]: I0320 08:47:16.780299 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="8c753d068f364b16e3aeb8396b7d9f33" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 20 08:47:16.781253 master-0 kubenswrapper[7476]: I0320 08:47:16.780416 7476 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 20 08:47:16.781493 master-0 kubenswrapper[7476]: I0320 08:47:16.781468 7476 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"a3a22217124e33f63e707c3ba23be13753cff8613bac1adf6bffb3189d19c06c"} pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted"
Mar 20 08:47:16.781677 master-0 kubenswrapper[7476]: I0320 08:47:16.781627 7476 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="8c753d068f364b16e3aeb8396b7d9f33" containerName="cluster-policy-controller" containerID="cri-o://a3a22217124e33f63e707c3ba23be13753cff8613bac1adf6bffb3189d19c06c" gracePeriod=30
Mar 20 08:47:17.165705 master-0 kubenswrapper[7476]: I0320 08:47:17.165000 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_8c753d068f364b16e3aeb8396b7d9f33/cluster-policy-controller/2.log"
Mar 20 08:47:17.165832 master-0 kubenswrapper[7476]: I0320 08:47:17.165701 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_8c753d068f364b16e3aeb8396b7d9f33/cluster-policy-controller/1.log"
Mar 20 08:47:17.167673 master-0 kubenswrapper[7476]: I0320 08:47:17.167626 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_8c753d068f364b16e3aeb8396b7d9f33/kube-controller-manager/0.log"
Mar 20 08:47:17.167784 master-0 kubenswrapper[7476]: I0320 08:47:17.167691 7476 generic.go:334] "Generic (PLEG): container finished" podID="8c753d068f364b16e3aeb8396b7d9f33" containerID="a3a22217124e33f63e707c3ba23be13753cff8613bac1adf6bffb3189d19c06c" exitCode=255
Mar 20 08:47:17.167784 master-0 kubenswrapper[7476]: I0320 08:47:17.167734 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"8c753d068f364b16e3aeb8396b7d9f33","Type":"ContainerDied","Data":"a3a22217124e33f63e707c3ba23be13753cff8613bac1adf6bffb3189d19c06c"}
Mar 20 08:47:17.167784 master-0 kubenswrapper[7476]: I0320 08:47:17.167779 7476 scope.go:117] "RemoveContainer" containerID="15aaf83549a716dacd641da11d4b4c1513954bf4a644b2d6d983003f8312fe3f"
Mar 20 08:47:18.179047 master-0 kubenswrapper[7476]: I0320 08:47:18.178973 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_8c753d068f364b16e3aeb8396b7d9f33/cluster-policy-controller/2.log"
Mar 20 08:47:18.181497 master-0 kubenswrapper[7476]: I0320 08:47:18.181454 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_8c753d068f364b16e3aeb8396b7d9f33/kube-controller-manager/0.log"
Mar 20 08:47:18.181564 master-0 kubenswrapper[7476]: I0320 08:47:18.181530 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"8c753d068f364b16e3aeb8396b7d9f33","Type":"ContainerStarted","Data":"ffa42243f2a40a0ca4a5329f1f45102e1878a8f1a80cf13cc28b8ea97fa30941"}
Mar 20 08:47:22.680544 master-0 kubenswrapper[7476]: E0320 08:47:22.680228 7476 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 20 08:47:23.778786 master-0 kubenswrapper[7476]: I0320 08:47:23.778675 7476 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 20 08:47:23.778786 master-0 kubenswrapper[7476]: I0320 08:47:23.778752 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 20 08:47:23.933818 master-0 kubenswrapper[7476]: E0320 08:47:23.933731 7476 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0"
Mar 20 08:47:25.248915 master-0 kubenswrapper[7476]: I0320 08:47:25.248864 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"833277a9c6e114a77d1b2ffcc162732dd849bab5619d63dd4b6f773ac9bb547e"}
Mar 20 08:47:25.251947 master-0 kubenswrapper[7476]: I0320 08:47:25.249438 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"33ee802a48139e3bfd946165cfee9b10245c4e17272752d17f0aadad7163bcb8"}
Mar 20 08:47:26.186410 master-0 kubenswrapper[7476]: E0320 08:47:26.186338 7476 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Mar 20 08:47:26.258706 master-0 kubenswrapper[7476]: I0320 08:47:26.258596 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"a2952a1f800d26a7eabecb79481362c19e0a6b58bcf79619acbcd04fc4857ada"}
Mar 20 08:47:26.258706 master-0 kubenswrapper[7476]: I0320 08:47:26.258715 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"e5ad9bac3cbb02a4a670b1d82260116acc5d8d83eb8a7b2d3edcb355e30555a7"}
Mar 20 08:47:26.779227 master-0 kubenswrapper[7476]: I0320 08:47:26.779059 7476 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 20 08:47:26.779227 master-0 kubenswrapper[7476]: I0320 08:47:26.779151 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="8c753d068f364b16e3aeb8396b7d9f33" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 20 08:47:27.272411 master-0 kubenswrapper[7476]: I0320 08:47:27.272301 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"222b6476c5a428c92fd2ca0d4be351ebb99b0254111d7d351670afebd811ebce"}
Mar 20 08:47:27.273492 master-0 kubenswrapper[7476]: I0320 08:47:27.272777 7476 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="c17bc281-675f-4a69-ba8a-a2476e95c8c8"
Mar 20 08:47:27.273492 master-0 kubenswrapper[7476]: I0320 08:47:27.272831 7476 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="c17bc281-675f-4a69-ba8a-a2476e95c8c8"
Mar 20 08:47:29.260562 master-0 kubenswrapper[7476]: I0320 08:47:29.260487 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0"
Mar 20 08:47:32.681329 master-0 kubenswrapper[7476]: E0320 08:47:32.681233 7476 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 20 08:47:33.330412 master-0 kubenswrapper[7476]: I0320 08:47:33.330238 7476 generic.go:334] "Generic (PLEG): container finished" podID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerID="11cabf5eb98c82a93ca52ddd09d57840189a4d526c1e14f354cfdcf56ad250fe" exitCode=0
Mar 20 08:47:33.330412 master-0 kubenswrapper[7476]: I0320 08:47:33.330366 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" event={"ID":"e89571b2-098c-495b-9b53-c4ebd95296ab","Type":"ContainerDied","Data":"11cabf5eb98c82a93ca52ddd09d57840189a4d526c1e14f354cfdcf56ad250fe"}
Mar 20 08:47:33.330679 master-0 kubenswrapper[7476]: I0320 08:47:33.330473 7476 scope.go:117] "RemoveContainer" containerID="f6fca13f29777c3e581624d7e050cfe207017354f2d3e38a35c450e9f709ea25"
Mar 20 08:47:34.260651 master-0 kubenswrapper[7476]: I0320 08:47:34.260581 7476 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0"
Mar 20 08:47:34.300829 master-0 kubenswrapper[7476]: I0320 08:47:34.300727 7476 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0"
Mar 20 08:47:34.343292 master-0 kubenswrapper[7476]: I0320 08:47:34.343190 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" event={"ID":"e89571b2-098c-495b-9b53-c4ebd95296ab","Type":"ContainerStarted","Data":"d70605680e08d7f319125bde3eeb41c693b146e24b422d7776788ac3b348829c"}
Mar 20 08:47:35.014700 master-0 kubenswrapper[7476]: I0320 08:47:35.014603 7476 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp"
Mar 20 08:47:35.018832 master-0 kubenswrapper[7476]: I0320 08:47:35.018747 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:47:35.018832 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:47:35.018832 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:47:35.018832 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:47:35.019401 master-0 kubenswrapper[7476]: I0320 08:47:35.018830 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:47:36.017224 master-0 kubenswrapper[7476]: I0320 08:47:36.017168 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:47:36.017224 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:47:36.017224 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:47:36.017224 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:47:36.017908 master-0 kubenswrapper[7476]: I0320 08:47:36.017227 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:47:36.779791 master-0 kubenswrapper[7476]: I0320 08:47:36.779700 7476 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 20 08:47:36.780020 master-0 kubenswrapper[7476]: I0320 08:47:36.779800 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="8c753d068f364b16e3aeb8396b7d9f33" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 20 08:47:37.016784 master-0 kubenswrapper[7476]: I0320 08:47:37.016682 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:47:37.016784 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:47:37.016784 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:47:37.016784 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:47:37.016784 master-0 kubenswrapper[7476]: I0320 08:47:37.016778 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:47:38.015911 master-0 kubenswrapper[7476]: I0320 08:47:38.015839 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:47:38.015911 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:47:38.015911 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:47:38.015911 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:47:38.016610 master-0 kubenswrapper[7476]: I0320 08:47:38.015929 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:47:39.016202 master-0 kubenswrapper[7476]: I0320 08:47:39.016129 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:47:39.016202 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:47:39.016202 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:47:39.016202 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:47:39.016202 master-0 kubenswrapper[7476]: I0320 08:47:39.016203 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:47:39.284550 master-0 kubenswrapper[7476]: I0320 08:47:39.284378 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0"
Mar 20 08:47:40.016816 master-0 kubenswrapper[7476]: I0320 08:47:40.016670 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:47:40.016816 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:47:40.016816 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:47:40.016816 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:47:40.016816 master-0 kubenswrapper[7476]: I0320 08:47:40.016786 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:47:41.018616 master-0 kubenswrapper[7476]: I0320 08:47:41.018495 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:47:41.018616 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:47:41.018616 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:47:41.018616 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:47:41.019856 master-0 kubenswrapper[7476]: I0320 08:47:41.018607 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:47:41.574496 master-0 kubenswrapper[7476]: I0320 08:47:41.574424 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-b25f2_f202273a-b111-46ce-b404-7e481d2c7ff9/cluster-baremetal-operator/2.log"
Mar 20 08:47:41.575357 master-0 kubenswrapper[7476]: I0320 08:47:41.575312 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-b25f2_f202273a-b111-46ce-b404-7e481d2c7ff9/cluster-baremetal-operator/1.log"
Mar 20 08:47:41.575889 master-0 kubenswrapper[7476]: I0320 08:47:41.575832 7476 generic.go:334] "Generic (PLEG): container finished" podID="f202273a-b111-46ce-b404-7e481d2c7ff9" containerID="052c5ab7353e85c711ba5bfca92fff712af9b1bed63f53526dee82d528399bb3" exitCode=1
Mar 20 08:47:41.575993 master-0 kubenswrapper[7476]: I0320 08:47:41.575947 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-b25f2" event={"ID":"f202273a-b111-46ce-b404-7e481d2c7ff9","Type":"ContainerDied","Data":"052c5ab7353e85c711ba5bfca92fff712af9b1bed63f53526dee82d528399bb3"}
Mar 20 08:47:41.576042 master-0 kubenswrapper[7476]: I0320 08:47:41.576020 7476 scope.go:117] "RemoveContainer" containerID="4dddcd2acd552341d271a22bee7c13f8c1ffb72ebb5cc9441b8049d3b6ece1f7"
Mar 20 08:47:41.577092 master-0 kubenswrapper[7476]: I0320 08:47:41.577032 7476 scope.go:117] "RemoveContainer" containerID="052c5ab7353e85c711ba5bfca92fff712af9b1bed63f53526dee82d528399bb3"
Mar 20 08:47:41.577694 master-0 kubenswrapper[7476]: E0320 08:47:41.577629 7476 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-baremetal-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=cluster-baremetal-operator pod=cluster-baremetal-operator-6f69995874-b25f2_openshift-machine-api(f202273a-b111-46ce-b404-7e481d2c7ff9)\"" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-b25f2" podUID="f202273a-b111-46ce-b404-7e481d2c7ff9"
Mar 20 08:47:41.579539 master-0 kubenswrapper[7476]: I0320 08:47:41.579497 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-gng67_a2a3df6e-e327-4e97-b8f0-f2d6cdd1e5f9/snapshot-controller/3.log"
Mar 20 08:47:41.580561 master-0 kubenswrapper[7476]: I0320 08:47:41.580510 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-gng67_a2a3df6e-e327-4e97-b8f0-f2d6cdd1e5f9/snapshot-controller/2.log"
Mar 20 08:47:41.580640 master-0 kubenswrapper[7476]: I0320 08:47:41.580605 7476 generic.go:334] "Generic (PLEG): container finished" podID="a2a3df6e-e327-4e97-b8f0-f2d6cdd1e5f9" containerID="43bec40b593829fc4ae8b2676c3d74b6d0bc176c4e642877e74797d8bc72bb1e" exitCode=1
Mar 20 08:47:41.580685 master-0 kubenswrapper[7476]: I0320 08:47:41.580663 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-gng67" event={"ID":"a2a3df6e-e327-4e97-b8f0-f2d6cdd1e5f9","Type":"ContainerDied","Data":"43bec40b593829fc4ae8b2676c3d74b6d0bc176c4e642877e74797d8bc72bb1e"}
Mar 20 08:47:41.581383 master-0 kubenswrapper[7476]: I0320 08:47:41.581314 7476 scope.go:117] "RemoveContainer" containerID="43bec40b593829fc4ae8b2676c3d74b6d0bc176c4e642877e74797d8bc72bb1e"
Mar 20 08:47:41.581766 master-0 kubenswrapper[7476]: E0320 08:47:41.581718 7476 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-64854d9cff-gng67_openshift-cluster-storage-operator(a2a3df6e-e327-4e97-b8f0-f2d6cdd1e5f9)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-gng67" podUID="a2a3df6e-e327-4e97-b8f0-f2d6cdd1e5f9"
Mar 20 08:47:41.611129 master-0 kubenswrapper[7476]: I0320 08:47:41.611029 7476 scope.go:117] "RemoveContainer" containerID="da500517c96da6ce0f7b1a3855e1fdc43426690142a8613d46966e24e78f7dd6"
Mar 20 08:47:42.014110 master-0 kubenswrapper[7476]: I0320 08:47:42.013999 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp"
Mar 20 08:47:42.017063 master-0 kubenswrapper[7476]: I0320 08:47:42.017013 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:47:42.017063 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:47:42.017063 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:47:42.017063 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:47:42.018102 master-0 kubenswrapper[7476]: I0320 08:47:42.017087 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:47:42.592906 master-0 kubenswrapper[7476]: I0320 08:47:42.592843 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-b25f2_f202273a-b111-46ce-b404-7e481d2c7ff9/cluster-baremetal-operator/2.log"
Mar 20 08:47:42.595706 master-0 kubenswrapper[7476]: I0320 08:47:42.595672 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-gng67_a2a3df6e-e327-4e97-b8f0-f2d6cdd1e5f9/snapshot-controller/3.log"
Mar 20 08:47:42.681784 master-0 kubenswrapper[7476]: E0320 08:47:42.681632 7476 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 20 08:47:42.681784 master-0 kubenswrapper[7476]: E0320 08:47:42.681722 7476 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Mar 20 08:47:43.017622 master-0 kubenswrapper[7476]: I0320 08:47:43.017489 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:47:43.017622 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:47:43.017622 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:47:43.017622 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:47:43.017622 master-0 kubenswrapper[7476]: I0320 08:47:43.017554 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:47:43.188738 master-0 kubenswrapper[7476]: E0320 08:47:43.188503 7476 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Mar 20 08:47:44.016766 master-0 kubenswrapper[7476]: I0320 08:47:44.016582 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:47:44.016766 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:47:44.016766 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:47:44.016766 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:47:44.016766 master-0 kubenswrapper[7476]: I0320 08:47:44.016712 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:47:45.017020 master-0 kubenswrapper[7476]: I0320 08:47:45.016961 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:47:45.017020 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:47:45.017020 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:47:45.017020 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:47:45.017772 master-0 kubenswrapper[7476]: I0320 08:47:45.017038 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:47:46.016573 master-0 kubenswrapper[7476]: I0320 08:47:46.016496 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:47:46.016573 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:47:46.016573 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:47:46.016573 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:47:46.017045 master-0 kubenswrapper[7476]: I0320 08:47:46.016566 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:47:46.356881 master-0 kubenswrapper[7476]: E0320 08:47:46.356665 7476 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189e7fc972e402bf kube-system 9040 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:c83737980b9ee109184b1d78e942cf36,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b23c544d3894e5b31f66a18c554f03b0d29f92c2000c46b57b1c96da7ec25db9\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 08:36:46 +0000 UTC,LastTimestamp:2026-03-20 08:44:16.238369861 +0000 UTC m=+537.207138397,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 20 08:47:46.779973 master-0 kubenswrapper[7476]: I0320 08:47:46.779699 7476 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 20 08:47:46.779973 master-0 kubenswrapper[7476]: I0320 08:47:46.779840 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="8c753d068f364b16e3aeb8396b7d9f33" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 20 08:47:46.779973 master-0 kubenswrapper[7476]: I0320 08:47:46.779939 7476 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 20 08:47:46.780944 master-0 kubenswrapper[7476]: I0320 08:47:46.780892 7476 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"ffa42243f2a40a0ca4a5329f1f45102e1878a8f1a80cf13cc28b8ea97fa30941"} pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted"
Mar 20 08:47:46.781086 master-0 kubenswrapper[7476]: I0320 08:47:46.780999 7476 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="8c753d068f364b16e3aeb8396b7d9f33" containerName="cluster-policy-controller" containerID="cri-o://ffa42243f2a40a0ca4a5329f1f45102e1878a8f1a80cf13cc28b8ea97fa30941" gracePeriod=30
Mar 20 08:47:46.899346 master-0 kubenswrapper[7476]: E0320 08:47:46.899229 7476 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(8c753d068f364b16e3aeb8396b7d9f33)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="8c753d068f364b16e3aeb8396b7d9f33"
Mar 20 08:47:47.016698 master-0 kubenswrapper[7476]: I0320 08:47:47.016633 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:47:47.016698 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:47:47.016698 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:47:47.016698 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:47:47.017614 master-0 kubenswrapper[7476]: I0320 08:47:47.016726 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:47:47.639224 master-0 kubenswrapper[7476]: I0320 08:47:47.639143 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_8c753d068f364b16e3aeb8396b7d9f33/cluster-policy-controller/3.log"
Mar 20 08:47:47.639955 master-0 kubenswrapper[7476]: I0320 08:47:47.639898 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_8c753d068f364b16e3aeb8396b7d9f33/cluster-policy-controller/2.log"
Mar 20 08:47:47.641809 master-0 kubenswrapper[7476]: I0320 08:47:47.641740 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_8c753d068f364b16e3aeb8396b7d9f33/kube-controller-manager/0.log"
Mar 20 08:47:47.641967 master-0 kubenswrapper[7476]: I0320 08:47:47.641833 7476 generic.go:334] "Generic (PLEG): container finished" podID="8c753d068f364b16e3aeb8396b7d9f33" containerID="ffa42243f2a40a0ca4a5329f1f45102e1878a8f1a80cf13cc28b8ea97fa30941" exitCode=255
Mar 20 08:47:47.641967 master-0 kubenswrapper[7476]: I0320 08:47:47.641887 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"8c753d068f364b16e3aeb8396b7d9f33","Type":"ContainerDied","Data":"ffa42243f2a40a0ca4a5329f1f45102e1878a8f1a80cf13cc28b8ea97fa30941"}
Mar 20 08:47:47.642116 master-0 kubenswrapper[7476]: I0320 08:47:47.641966 7476 scope.go:117] "RemoveContainer" containerID="a3a22217124e33f63e707c3ba23be13753cff8613bac1adf6bffb3189d19c06c"
Mar 20 08:47:47.642593 master-0 kubenswrapper[7476]: I0320 08:47:47.642551 7476 scope.go:117] "RemoveContainer" containerID="ffa42243f2a40a0ca4a5329f1f45102e1878a8f1a80cf13cc28b8ea97fa30941"
Mar 20 08:47:47.642855 master-0 kubenswrapper[7476]: E0320 08:47:47.642791 7476 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(8c753d068f364b16e3aeb8396b7d9f33)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="8c753d068f364b16e3aeb8396b7d9f33"
Mar 20 08:47:48.017724 master-0 kubenswrapper[7476]: I0320 08:47:48.017538 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:47:48.017724 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:47:48.017724 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:47:48.017724 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:47:48.017724 master-0 kubenswrapper[7476]: I0320 08:47:48.017648 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:47:48.652541 master-0 kubenswrapper[7476]: I0320 08:47:48.652435 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_8c753d068f364b16e3aeb8396b7d9f33/cluster-policy-controller/3.log"
Mar 20 08:47:48.656395 master-0 kubenswrapper[7476]: I0320 08:47:48.655623 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_8c753d068f364b16e3aeb8396b7d9f33/kube-controller-manager/0.log"
Mar 20 08:47:49.017334 master-0 kubenswrapper[7476]: I0320 08:47:49.017092 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:47:49.017334 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:47:49.017334 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:47:49.017334 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:47:49.017334 master-0 kubenswrapper[7476]: I0320 08:47:49.017214 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:47:49.250461 master-0 kubenswrapper[7476]: I0320 08:47:49.250357 7476 status_manager.go:851] "Failed to get status for pod" podUID="c83737980b9ee109184b1d78e942cf36" pod="kube-system/bootstrap-kube-scheduler-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods bootstrap-kube-scheduler-master-0)"
Mar 20 08:47:50.017657 master-0 kubenswrapper[7476]: I0320 08:47:50.017500 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:47:50.017657 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:47:50.017657 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:47:50.017657 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:47:50.017657 master-0 kubenswrapper[7476]: I0320 08:47:50.017573 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:47:51.016923 master-0 kubenswrapper[7476]: I0320 08:47:51.016852 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:47:51.016923 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:47:51.016923 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:47:51.016923 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:47:51.016923 master-0 kubenswrapper[7476]: I0320 08:47:51.016950 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:47:52.017427 master-0 kubenswrapper[7476]: I0320 08:47:52.017339 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:47:52.017427 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:47:52.017427 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:47:52.017427 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:47:52.018673 master-0 kubenswrapper[7476]: I0320 08:47:52.017437 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:47:52.237109 master-0 kubenswrapper[7476]: I0320 08:47:52.237026 7476 scope.go:117] "RemoveContainer"
containerID="43bec40b593829fc4ae8b2676c3d74b6d0bc176c4e642877e74797d8bc72bb1e" Mar 20 08:47:52.237447 master-0 kubenswrapper[7476]: E0320 08:47:52.237404 7476 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-64854d9cff-gng67_openshift-cluster-storage-operator(a2a3df6e-e327-4e97-b8f0-f2d6cdd1e5f9)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-gng67" podUID="a2a3df6e-e327-4e97-b8f0-f2d6cdd1e5f9" Mar 20 08:47:53.017621 master-0 kubenswrapper[7476]: I0320 08:47:53.017543 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:47:53.017621 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:47:53.017621 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:47:53.017621 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:47:53.018241 master-0 kubenswrapper[7476]: I0320 08:47:53.017679 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:47:53.778593 master-0 kubenswrapper[7476]: I0320 08:47:53.778486 7476 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 20 08:47:53.780008 master-0 kubenswrapper[7476]: I0320 08:47:53.779950 7476 scope.go:117] "RemoveContainer" containerID="ffa42243f2a40a0ca4a5329f1f45102e1878a8f1a80cf13cc28b8ea97fa30941" Mar 20 08:47:53.780490 master-0 kubenswrapper[7476]: E0320 08:47:53.780433 7476 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(8c753d068f364b16e3aeb8396b7d9f33)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="8c753d068f364b16e3aeb8396b7d9f33" Mar 20 08:47:54.017895 master-0 kubenswrapper[7476]: I0320 08:47:54.017774 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:47:54.017895 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:47:54.017895 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:47:54.017895 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:47:54.017895 master-0 kubenswrapper[7476]: I0320 08:47:54.017874 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:47:55.018332 master-0 kubenswrapper[7476]: I0320 08:47:55.018223 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:47:55.018332 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:47:55.018332 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:47:55.018332 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:47:55.019738 master-0 kubenswrapper[7476]: I0320 08:47:55.019686 7476 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:47:56.016773 master-0 kubenswrapper[7476]: I0320 08:47:56.016716 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:47:56.016773 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:47:56.016773 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:47:56.016773 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:47:56.017104 master-0 kubenswrapper[7476]: I0320 08:47:56.016788 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:47:56.237173 master-0 kubenswrapper[7476]: I0320 08:47:56.237077 7476 scope.go:117] "RemoveContainer" containerID="052c5ab7353e85c711ba5bfca92fff712af9b1bed63f53526dee82d528399bb3" Mar 20 08:47:56.238015 master-0 kubenswrapper[7476]: E0320 08:47:56.237538 7476 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-baremetal-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=cluster-baremetal-operator pod=cluster-baremetal-operator-6f69995874-b25f2_openshift-machine-api(f202273a-b111-46ce-b404-7e481d2c7ff9)\"" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-b25f2" podUID="f202273a-b111-46ce-b404-7e481d2c7ff9" Mar 20 08:47:57.017708 master-0 kubenswrapper[7476]: I0320 08:47:57.017652 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:47:57.017708 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:47:57.017708 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:47:57.017708 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:47:57.018007 master-0 kubenswrapper[7476]: I0320 08:47:57.017724 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:47:58.016324 master-0 kubenswrapper[7476]: I0320 08:47:58.016231 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:47:58.016324 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:47:58.016324 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:47:58.016324 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:47:58.017342 master-0 kubenswrapper[7476]: I0320 08:47:58.016351 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:47:59.016390 master-0 kubenswrapper[7476]: I0320 08:47:59.016287 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:47:59.016390 master-0 kubenswrapper[7476]: 
[-]has-synced failed: reason withheld Mar 20 08:47:59.016390 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:47:59.016390 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:47:59.016390 master-0 kubenswrapper[7476]: I0320 08:47:59.016372 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:48:00.018035 master-0 kubenswrapper[7476]: I0320 08:48:00.017885 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:48:00.018035 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:48:00.018035 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:48:00.018035 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:48:00.018035 master-0 kubenswrapper[7476]: I0320 08:48:00.017965 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:48:00.191095 master-0 kubenswrapper[7476]: E0320 08:48:00.191028 7476 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 20 08:48:01.017819 master-0 kubenswrapper[7476]: I0320 08:48:01.017736 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:48:01.017819 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:48:01.017819 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:48:01.017819 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:48:01.018872 master-0 kubenswrapper[7476]: I0320 08:48:01.017827 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:48:01.276640 master-0 kubenswrapper[7476]: E0320 08:48:01.276455 7476 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Mar 20 08:48:01.750467 master-0 kubenswrapper[7476]: I0320 08:48:01.750389 7476 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="c17bc281-675f-4a69-ba8a-a2476e95c8c8" Mar 20 08:48:01.750467 master-0 kubenswrapper[7476]: I0320 08:48:01.750436 7476 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="c17bc281-675f-4a69-ba8a-a2476e95c8c8" Mar 20 08:48:02.017895 master-0 kubenswrapper[7476]: I0320 08:48:02.017724 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:48:02.017895 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:48:02.017895 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:48:02.017895 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:48:02.017895 master-0 kubenswrapper[7476]: I0320 08:48:02.017804 7476 prober.go:107] 
"Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:48:02.750246 master-0 kubenswrapper[7476]: E0320 08:48:02.750169 7476 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T08:47:52Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T08:47:52Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T08:47:52Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T08:47:52Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": context deadline exceeded" Mar 20 08:48:03.016785 master-0 kubenswrapper[7476]: I0320 08:48:03.016583 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:48:03.016785 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:48:03.016785 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:48:03.016785 
master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:48:03.016785 master-0 kubenswrapper[7476]: I0320 08:48:03.016672 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:48:04.016722 master-0 kubenswrapper[7476]: I0320 08:48:04.016632 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:48:04.016722 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:48:04.016722 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:48:04.016722 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:48:04.017997 master-0 kubenswrapper[7476]: I0320 08:48:04.016746 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:48:04.237160 master-0 kubenswrapper[7476]: I0320 08:48:04.237082 7476 scope.go:117] "RemoveContainer" containerID="43bec40b593829fc4ae8b2676c3d74b6d0bc176c4e642877e74797d8bc72bb1e" Mar 20 08:48:04.237723 master-0 kubenswrapper[7476]: E0320 08:48:04.237663 7476 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-64854d9cff-gng67_openshift-cluster-storage-operator(a2a3df6e-e327-4e97-b8f0-f2d6cdd1e5f9)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-gng67" podUID="a2a3df6e-e327-4e97-b8f0-f2d6cdd1e5f9" Mar 20 
08:48:05.017581 master-0 kubenswrapper[7476]: I0320 08:48:05.017518 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:48:05.017581 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:48:05.017581 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:48:05.017581 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:48:05.018860 master-0 kubenswrapper[7476]: I0320 08:48:05.018797 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:48:06.017233 master-0 kubenswrapper[7476]: I0320 08:48:06.017161 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:48:06.017233 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:48:06.017233 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:48:06.017233 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:48:06.017233 master-0 kubenswrapper[7476]: I0320 08:48:06.017229 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:48:07.017061 master-0 kubenswrapper[7476]: I0320 08:48:07.016986 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:48:07.017061 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:48:07.017061 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:48:07.017061 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:48:07.017629 master-0 kubenswrapper[7476]: I0320 08:48:07.017072 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:48:07.237372 master-0 kubenswrapper[7476]: I0320 08:48:07.237246 7476 scope.go:117] "RemoveContainer" containerID="ffa42243f2a40a0ca4a5329f1f45102e1878a8f1a80cf13cc28b8ea97fa30941" Mar 20 08:48:07.238142 master-0 kubenswrapper[7476]: E0320 08:48:07.237804 7476 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(8c753d068f364b16e3aeb8396b7d9f33)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="8c753d068f364b16e3aeb8396b7d9f33" Mar 20 08:48:08.016493 master-0 kubenswrapper[7476]: I0320 08:48:08.016420 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:48:08.016493 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:48:08.016493 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:48:08.016493 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:48:08.016493 master-0 kubenswrapper[7476]: I0320 
08:48:08.016486 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:48:09.017212 master-0 kubenswrapper[7476]: I0320 08:48:09.017121 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:48:09.017212 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:48:09.017212 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:48:09.017212 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:48:09.018313 master-0 kubenswrapper[7476]: I0320 08:48:09.017235 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:48:09.237096 master-0 kubenswrapper[7476]: I0320 08:48:09.237005 7476 scope.go:117] "RemoveContainer" containerID="052c5ab7353e85c711ba5bfca92fff712af9b1bed63f53526dee82d528399bb3" Mar 20 08:48:09.820459 master-0 kubenswrapper[7476]: I0320 08:48:09.820388 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-b25f2_f202273a-b111-46ce-b404-7e481d2c7ff9/cluster-baremetal-operator/2.log" Mar 20 08:48:09.820964 master-0 kubenswrapper[7476]: I0320 08:48:09.820842 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-b25f2" event={"ID":"f202273a-b111-46ce-b404-7e481d2c7ff9","Type":"ContainerStarted","Data":"6abdfc219807d34bb658ce6361b01fcfed8f3da0de196ca2575990ea57791b92"} Mar 20 08:48:10.018020 
master-0 kubenswrapper[7476]: I0320 08:48:10.017842 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:48:10.018020 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:48:10.018020 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:48:10.018020 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:48:10.018020 master-0 kubenswrapper[7476]: I0320 08:48:10.017938 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:48:11.017095 master-0 kubenswrapper[7476]: I0320 08:48:11.016992 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:48:11.017095 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:48:11.017095 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:48:11.017095 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:48:11.017095 master-0 kubenswrapper[7476]: I0320 08:48:11.017046 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:48:12.017017 master-0 kubenswrapper[7476]: I0320 08:48:12.016927 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe 
failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:48:12.017017 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:48:12.017017 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:48:12.017017 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:48:12.018458 master-0 kubenswrapper[7476]: I0320 08:48:12.017016 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:48:12.750922 master-0 kubenswrapper[7476]: E0320 08:48:12.750803 7476 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 20 08:48:13.017818 master-0 kubenswrapper[7476]: I0320 08:48:13.017458 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:48:13.017818 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:48:13.017818 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:48:13.017818 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:48:13.017818 master-0 kubenswrapper[7476]: I0320 08:48:13.017545 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:48:14.016929 master-0 kubenswrapper[7476]: I0320 08:48:14.016854 7476 patch_prober.go:28] interesting 
pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:48:14.016929 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:48:14.016929 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:48:14.016929 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:48:14.017530 master-0 kubenswrapper[7476]: I0320 08:48:14.017483 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:48:15.016847 master-0 kubenswrapper[7476]: I0320 08:48:15.016731 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:48:15.016847 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:48:15.016847 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:48:15.016847 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:48:15.016847 master-0 kubenswrapper[7476]: I0320 08:48:15.016808 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:48:16.016766 master-0 kubenswrapper[7476]: I0320 08:48:16.016682 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 
08:48:16.016766 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:48:16.016766 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:48:16.016766 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:48:16.017712 master-0 kubenswrapper[7476]: I0320 08:48:16.016772 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:48:16.237426 master-0 kubenswrapper[7476]: I0320 08:48:16.237351 7476 scope.go:117] "RemoveContainer" containerID="43bec40b593829fc4ae8b2676c3d74b6d0bc176c4e642877e74797d8bc72bb1e" Mar 20 08:48:16.237712 master-0 kubenswrapper[7476]: E0320 08:48:16.237673 7476 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-64854d9cff-gng67_openshift-cluster-storage-operator(a2a3df6e-e327-4e97-b8f0-f2d6cdd1e5f9)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-gng67" podUID="a2a3df6e-e327-4e97-b8f0-f2d6cdd1e5f9" Mar 20 08:48:17.017655 master-0 kubenswrapper[7476]: I0320 08:48:17.017545 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:48:17.017655 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:48:17.017655 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:48:17.017655 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:48:17.017655 master-0 kubenswrapper[7476]: I0320 08:48:17.017636 7476 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:48:17.193006 master-0 kubenswrapper[7476]: E0320 08:48:17.192895 7476 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 20 08:48:18.017959 master-0 kubenswrapper[7476]: I0320 08:48:18.017813 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:48:18.017959 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:48:18.017959 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:48:18.017959 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:48:18.017959 master-0 kubenswrapper[7476]: I0320 08:48:18.017924 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:48:19.017109 master-0 kubenswrapper[7476]: I0320 08:48:19.017014 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:48:19.017109 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:48:19.017109 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:48:19.017109 master-0 kubenswrapper[7476]: 
healthz check failed Mar 20 08:48:19.017646 master-0 kubenswrapper[7476]: I0320 08:48:19.017115 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:48:20.017314 master-0 kubenswrapper[7476]: I0320 08:48:20.017143 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:48:20.017314 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:48:20.017314 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:48:20.017314 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:48:20.017314 master-0 kubenswrapper[7476]: I0320 08:48:20.017239 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:48:20.237015 master-0 kubenswrapper[7476]: I0320 08:48:20.236913 7476 scope.go:117] "RemoveContainer" containerID="ffa42243f2a40a0ca4a5329f1f45102e1878a8f1a80cf13cc28b8ea97fa30941" Mar 20 08:48:20.237407 master-0 kubenswrapper[7476]: E0320 08:48:20.237247 7476 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(8c753d068f364b16e3aeb8396b7d9f33)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="8c753d068f364b16e3aeb8396b7d9f33" Mar 20 08:48:20.361321 master-0 kubenswrapper[7476]: E0320 
08:48:20.360472 7476 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189e7fc988779e93 kube-system 9049 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:c83737980b9ee109184b1d78e942cf36,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 08:36:46 +0000 UTC,LastTimestamp:2026-03-20 08:44:16.61475036 +0000 UTC m=+537.583518896,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 20 08:48:21.016447 master-0 kubenswrapper[7476]: I0320 08:48:21.016346 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:48:21.016447 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:48:21.016447 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:48:21.016447 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:48:21.016447 master-0 kubenswrapper[7476]: I0320 08:48:21.016423 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:48:21.910426 master-0 kubenswrapper[7476]: I0320 08:48:21.908848 7476 generic.go:334] "Generic (PLEG): container finished" podID="71ca96e8-5108-455c-bb3c-17977d38e912" 
containerID="f61b725a79fff556468b0126e41778d167b8a31ec8526a9c664ab434b3c33c45" exitCode=0 Mar 20 08:48:21.910426 master-0 kubenswrapper[7476]: I0320 08:48:21.908919 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-wbfrm" event={"ID":"71ca96e8-5108-455c-bb3c-17977d38e912","Type":"ContainerDied","Data":"f61b725a79fff556468b0126e41778d167b8a31ec8526a9c664ab434b3c33c45"} Mar 20 08:48:21.910426 master-0 kubenswrapper[7476]: I0320 08:48:21.908979 7476 scope.go:117] "RemoveContainer" containerID="546f50582d27b9704d91a180b620a54d25d194d6d958c834e126f15276d2a186" Mar 20 08:48:21.910426 master-0 kubenswrapper[7476]: I0320 08:48:21.909615 7476 scope.go:117] "RemoveContainer" containerID="f61b725a79fff556468b0126e41778d167b8a31ec8526a9c664ab434b3c33c45" Mar 20 08:48:21.922581 master-0 kubenswrapper[7476]: I0320 08:48:21.921712 7476 generic.go:334] "Generic (PLEG): container finished" podID="acbaba45-12d9-40b9-818c-4b091d7929b1" containerID="6a9d899a8eb10974cc6c4342f48d72d6fc952b94defbc645eeeae9b0a3d84f6a" exitCode=0 Mar 20 08:48:21.922732 master-0 kubenswrapper[7476]: I0320 08:48:21.922536 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-b5lg6" event={"ID":"acbaba45-12d9-40b9-818c-4b091d7929b1","Type":"ContainerDied","Data":"6a9d899a8eb10974cc6c4342f48d72d6fc952b94defbc645eeeae9b0a3d84f6a"} Mar 20 08:48:21.923220 master-0 kubenswrapper[7476]: I0320 08:48:21.923203 7476 scope.go:117] "RemoveContainer" containerID="6a9d899a8eb10974cc6c4342f48d72d6fc952b94defbc645eeeae9b0a3d84f6a" Mar 20 08:48:22.017954 master-0 kubenswrapper[7476]: I0320 08:48:22.016231 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason 
withheld Mar 20 08:48:22.017954 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:48:22.017954 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:48:22.017954 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:48:22.017954 master-0 kubenswrapper[7476]: I0320 08:48:22.016299 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:48:22.183176 master-0 kubenswrapper[7476]: I0320 08:48:22.183124 7476 patch_prober.go:28] interesting pod/authentication-operator-5885bfd7f4-tdpfq container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.11:8443/healthz\": dial tcp 10.128.0.11:8443: connect: connection refused" start-of-body= Mar 20 08:48:22.183305 master-0 kubenswrapper[7476]: I0320 08:48:22.183184 7476 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-tdpfq" podUID="8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.11:8443/healthz\": dial tcp 10.128.0.11:8443: connect: connection refused" Mar 20 08:48:22.411395 master-0 kubenswrapper[7476]: I0320 08:48:22.411310 7476 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-25jrp container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.22:8443/healthz\": dial tcp 10.128.0.22:8443: connect: connection refused" start-of-body= Mar 20 08:48:22.411791 master-0 kubenswrapper[7476]: I0320 08:48:22.411414 7476 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-25jrp" 
podUID="3065e4b4-4493-41ce-b9d2-89315475f74f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.22:8443/healthz\": dial tcp 10.128.0.22:8443: connect: connection refused" Mar 20 08:48:22.751236 master-0 kubenswrapper[7476]: E0320 08:48:22.751090 7476 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 20 08:48:22.932222 master-0 kubenswrapper[7476]: I0320 08:48:22.932151 7476 generic.go:334] "Generic (PLEG): container finished" podID="1746482a-d1a3-4eac-8bc9-643b6af75163" containerID="a1b4eceb0f2328786d0d5d45adc257b068090b4a532ca9b2a6eb0db19b8abba4" exitCode=0 Mar 20 08:48:22.932800 master-0 kubenswrapper[7476]: I0320 08:48:22.932282 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-79bc6b8d76-trbxh" event={"ID":"1746482a-d1a3-4eac-8bc9-643b6af75163","Type":"ContainerDied","Data":"a1b4eceb0f2328786d0d5d45adc257b068090b4a532ca9b2a6eb0db19b8abba4"} Mar 20 08:48:22.932800 master-0 kubenswrapper[7476]: I0320 08:48:22.932778 7476 scope.go:117] "RemoveContainer" containerID="a1b4eceb0f2328786d0d5d45adc257b068090b4a532ca9b2a6eb0db19b8abba4" Mar 20 08:48:22.934634 master-0 kubenswrapper[7476]: I0320 08:48:22.934203 7476 generic.go:334] "Generic (PLEG): container finished" podID="f6a6e991-c861-48f5-bfde-78762a037343" containerID="ae2ac69f50f92b147c6e0e54b3efd3a8fd07b958b39ed5539b1432cc17005897" exitCode=0 Mar 20 08:48:22.934634 master-0 kubenswrapper[7476]: I0320 08:48:22.934247 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-9fl6v" event={"ID":"f6a6e991-c861-48f5-bfde-78762a037343","Type":"ContainerDied","Data":"ae2ac69f50f92b147c6e0e54b3efd3a8fd07b958b39ed5539b1432cc17005897"} Mar 20 
08:48:22.935498 master-0 kubenswrapper[7476]: I0320 08:48:22.935428 7476 scope.go:117] "RemoveContainer" containerID="ae2ac69f50f92b147c6e0e54b3efd3a8fd07b958b39ed5539b1432cc17005897" Mar 20 08:48:22.936411 master-0 kubenswrapper[7476]: I0320 08:48:22.936386 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-6fbb6cf6f9-lr7tb_80ddf0a4-e853-4de0-b540-81144dfdd31d/machine-api-operator/0.log" Mar 20 08:48:22.937032 master-0 kubenswrapper[7476]: I0320 08:48:22.936988 7476 generic.go:334] "Generic (PLEG): container finished" podID="80ddf0a4-e853-4de0-b540-81144dfdd31d" containerID="462a8070d6a4a84bd5f75252bfbebd1aeca669c870c803cc819af47a7fc47625" exitCode=255 Mar 20 08:48:22.937124 master-0 kubenswrapper[7476]: I0320 08:48:22.937086 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-lr7tb" event={"ID":"80ddf0a4-e853-4de0-b540-81144dfdd31d","Type":"ContainerDied","Data":"462a8070d6a4a84bd5f75252bfbebd1aeca669c870c803cc819af47a7fc47625"} Mar 20 08:48:22.937679 master-0 kubenswrapper[7476]: I0320 08:48:22.937638 7476 scope.go:117] "RemoveContainer" containerID="462a8070d6a4a84bd5f75252bfbebd1aeca669c870c803cc819af47a7fc47625" Mar 20 08:48:22.939329 master-0 kubenswrapper[7476]: I0320 08:48:22.939303 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-866dc4744-626qm_2d125bc5-08ce-434a-bde7-0ba8fc0169ea/cluster-autoscaler-operator/0.log" Mar 20 08:48:22.939927 master-0 kubenswrapper[7476]: I0320 08:48:22.939876 7476 generic.go:334] "Generic (PLEG): container finished" podID="2d125bc5-08ce-434a-bde7-0ba8fc0169ea" containerID="7c71ba6860012685e763d6be0a28f9f4eedf51541e431293b43883fadda65c94" exitCode=255 Mar 20 08:48:22.940004 master-0 kubenswrapper[7476]: I0320 08:48:22.939973 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-626qm" event={"ID":"2d125bc5-08ce-434a-bde7-0ba8fc0169ea","Type":"ContainerDied","Data":"7c71ba6860012685e763d6be0a28f9f4eedf51541e431293b43883fadda65c94"} Mar 20 08:48:22.940614 master-0 kubenswrapper[7476]: I0320 08:48:22.940570 7476 scope.go:117] "RemoveContainer" containerID="7c71ba6860012685e763d6be0a28f9f4eedf51541e431293b43883fadda65c94" Mar 20 08:48:22.943632 master-0 kubenswrapper[7476]: I0320 08:48:22.943590 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-wbfrm" event={"ID":"71ca96e8-5108-455c-bb3c-17977d38e912","Type":"ContainerStarted","Data":"a4506cf0f6e726afbe8cf8c9e90673480cf1d2ed376fa06f37ff1cc988603b59"} Mar 20 08:48:22.945235 master-0 kubenswrapper[7476]: I0320 08:48:22.945202 7476 generic.go:334] "Generic (PLEG): container finished" podID="e9c0293a-5340-4ebe-bc8f-43e78ba9f280" containerID="a7ec1ed13e0a355d823b781b053862cbbd8f7a00b211a40b600daee7dc545186" exitCode=0 Mar 20 08:48:22.945358 master-0 kubenswrapper[7476]: I0320 08:48:22.945279 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-848gc" event={"ID":"e9c0293a-5340-4ebe-bc8f-43e78ba9f280","Type":"ContainerDied","Data":"a7ec1ed13e0a355d823b781b053862cbbd8f7a00b211a40b600daee7dc545186"} Mar 20 08:48:22.945651 master-0 kubenswrapper[7476]: I0320 08:48:22.945629 7476 scope.go:117] "RemoveContainer" containerID="a7ec1ed13e0a355d823b781b053862cbbd8f7a00b211a40b600daee7dc545186" Mar 20 08:48:22.948187 master-0 kubenswrapper[7476]: I0320 08:48:22.948150 7476 generic.go:334] "Generic (PLEG): container finished" podID="3065e4b4-4493-41ce-b9d2-89315475f74f" containerID="bd1d0759a3b11f191f5c7889c156ee6e269182c73bbc7176808f512fb2f1ec9d" exitCode=0 Mar 20 08:48:22.948378 master-0 kubenswrapper[7476]: I0320 08:48:22.948199 7476 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-25jrp" event={"ID":"3065e4b4-4493-41ce-b9d2-89315475f74f","Type":"ContainerDied","Data":"bd1d0759a3b11f191f5c7889c156ee6e269182c73bbc7176808f512fb2f1ec9d"} Mar 20 08:48:22.948516 master-0 kubenswrapper[7476]: I0320 08:48:22.948501 7476 scope.go:117] "RemoveContainer" containerID="11101104c4ad4a824dc013fe0f577cb2a24b3336015a3fb27c1b6da8054e07d4" Mar 20 08:48:22.949443 master-0 kubenswrapper[7476]: I0320 08:48:22.949405 7476 scope.go:117] "RemoveContainer" containerID="bd1d0759a3b11f191f5c7889c156ee6e269182c73bbc7176808f512fb2f1ec9d" Mar 20 08:48:22.950953 master-0 kubenswrapper[7476]: I0320 08:48:22.950862 7476 generic.go:334] "Generic (PLEG): container finished" podID="8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072" containerID="cbb3f75129c9d64cd795c59facd72277d5aa4e6c03360f86cd3b579cb2e915c3" exitCode=0 Mar 20 08:48:22.951108 master-0 kubenswrapper[7476]: I0320 08:48:22.950981 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-tdpfq" event={"ID":"8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072","Type":"ContainerDied","Data":"cbb3f75129c9d64cd795c59facd72277d5aa4e6c03360f86cd3b579cb2e915c3"} Mar 20 08:48:22.951648 master-0 kubenswrapper[7476]: I0320 08:48:22.951603 7476 scope.go:117] "RemoveContainer" containerID="cbb3f75129c9d64cd795c59facd72277d5aa4e6c03360f86cd3b579cb2e915c3" Mar 20 08:48:22.957629 master-0 kubenswrapper[7476]: I0320 08:48:22.957588 7476 generic.go:334] "Generic (PLEG): container finished" podID="fec3170d-3f3e-42f5-b20a-da53721c0dac" containerID="36c39e8f2f6bf69b1f66f4972d7671c5d3fca0023fda940a2b8538766e8e200d" exitCode=0 Mar 20 08:48:22.957845 master-0 kubenswrapper[7476]: I0320 08:48:22.957696 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-7x9vq" 
event={"ID":"fec3170d-3f3e-42f5-b20a-da53721c0dac","Type":"ContainerDied","Data":"36c39e8f2f6bf69b1f66f4972d7671c5d3fca0023fda940a2b8538766e8e200d"} Mar 20 08:48:22.958582 master-0 kubenswrapper[7476]: I0320 08:48:22.958535 7476 scope.go:117] "RemoveContainer" containerID="36c39e8f2f6bf69b1f66f4972d7671c5d3fca0023fda940a2b8538766e8e200d" Mar 20 08:48:22.961377 master-0 kubenswrapper[7476]: I0320 08:48:22.961331 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-b5lg6" event={"ID":"acbaba45-12d9-40b9-818c-4b091d7929b1","Type":"ContainerStarted","Data":"bdf77adf82af986123c5cdbd1878d0f52d362bf1971f8f8cc55c1368284c4f5f"} Mar 20 08:48:22.964680 master-0 kubenswrapper[7476]: I0320 08:48:22.964627 7476 generic.go:334] "Generic (PLEG): container finished" podID="2ef691ec-d1f0-4c97-97e4-4aa7a6c0a86c" containerID="eb83d7b52ee34a208a7d7d8320582445204a3a3c9a564d3c4ad584270b43c58c" exitCode=0 Mar 20 08:48:22.964767 master-0 kubenswrapper[7476]: I0320 08:48:22.964725 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-ntdqc" event={"ID":"2ef691ec-d1f0-4c97-97e4-4aa7a6c0a86c","Type":"ContainerDied","Data":"eb83d7b52ee34a208a7d7d8320582445204a3a3c9a564d3c4ad584270b43c58c"} Mar 20 08:48:22.965306 master-0 kubenswrapper[7476]: I0320 08:48:22.965235 7476 scope.go:117] "RemoveContainer" containerID="eb83d7b52ee34a208a7d7d8320582445204a3a3c9a564d3c4ad584270b43c58c" Mar 20 08:48:22.967689 master-0 kubenswrapper[7476]: I0320 08:48:22.967661 7476 generic.go:334] "Generic (PLEG): container finished" podID="2faf85a2-29bb-4275-a12b-0ef1663a4f0d" containerID="9b538e53e002b24081578246c7d675b101b228304a8e87c5077457c1455c343d" exitCode=0 Mar 20 08:48:22.967778 master-0 kubenswrapper[7476]: I0320 08:48:22.967701 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-xwkzx" event={"ID":"2faf85a2-29bb-4275-a12b-0ef1663a4f0d","Type":"ContainerDied","Data":"9b538e53e002b24081578246c7d675b101b228304a8e87c5077457c1455c343d"} Mar 20 08:48:22.968301 master-0 kubenswrapper[7476]: I0320 08:48:22.968224 7476 scope.go:117] "RemoveContainer" containerID="9b538e53e002b24081578246c7d675b101b228304a8e87c5077457c1455c343d" Mar 20 08:48:23.016543 master-0 kubenswrapper[7476]: I0320 08:48:23.016035 7476 scope.go:117] "RemoveContainer" containerID="d29e51560cf0f82adb12fe4ccbcc9c856b09e06e9a9c7dd6333b272f62625fb3" Mar 20 08:48:23.016867 master-0 kubenswrapper[7476]: I0320 08:48:23.016830 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:48:23.016867 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:48:23.016867 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:48:23.016867 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:48:23.017014 master-0 kubenswrapper[7476]: I0320 08:48:23.016883 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:48:23.128713 master-0 kubenswrapper[7476]: I0320 08:48:23.128611 7476 scope.go:117] "RemoveContainer" containerID="606e62ca34e3d9e1001d8f531baa40a69abd238341d65870685ec9240a1791b0" Mar 20 08:48:23.200808 master-0 kubenswrapper[7476]: I0320 08:48:23.200778 7476 scope.go:117] "RemoveContainer" containerID="3fbcbabe96d1d538208df7fe6740297e7b936fd21409b810c6def759b3cb8301" Mar 20 08:48:23.238486 master-0 kubenswrapper[7476]: I0320 08:48:23.237876 7476 scope.go:117] "RemoveContainer" 
containerID="ae08cd7d4b99291a81168cf2f99395c5e971d107dc0502f7bea648e012bdeade" Mar 20 08:48:23.979766 master-0 kubenswrapper[7476]: I0320 08:48:23.979669 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-tdpfq" event={"ID":"8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072","Type":"ContainerStarted","Data":"385c2843df6d2571f0639723dbc7f7f479a4f4fa1607104835b240f51f444467"} Mar 20 08:48:23.982331 master-0 kubenswrapper[7476]: I0320 08:48:23.982234 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-ntdqc" event={"ID":"2ef691ec-d1f0-4c97-97e4-4aa7a6c0a86c","Type":"ContainerStarted","Data":"aa25d7e3f5d62bcd63da255d522829c6196c34440f15366acc71e4890e98fd5c"} Mar 20 08:48:23.984382 master-0 kubenswrapper[7476]: I0320 08:48:23.984305 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-xwkzx" event={"ID":"2faf85a2-29bb-4275-a12b-0ef1663a4f0d","Type":"ContainerStarted","Data":"dfdc4b94584bfe91fceba0b4003dbc4b0093c6ad0366472d7fafe8f570e3cfb9"} Mar 20 08:48:23.986239 master-0 kubenswrapper[7476]: I0320 08:48:23.986163 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-79bc6b8d76-trbxh" event={"ID":"1746482a-d1a3-4eac-8bc9-643b6af75163","Type":"ContainerStarted","Data":"76181bda8b0461451532ad4e02386833ee9734fb65df65a856a237e3dc22fff5"} Mar 20 08:48:23.989592 master-0 kubenswrapper[7476]: I0320 08:48:23.989517 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-6fbb6cf6f9-lr7tb_80ddf0a4-e853-4de0-b540-81144dfdd31d/machine-api-operator/0.log" Mar 20 08:48:23.990244 master-0 kubenswrapper[7476]: I0320 08:48:23.990167 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-lr7tb" 
event={"ID":"80ddf0a4-e853-4de0-b540-81144dfdd31d","Type":"ContainerStarted","Data":"043be14a032d81fccfcece7242ed5d72370383c47bc3fa313fb28191f79246e0"} Mar 20 08:48:23.992836 master-0 kubenswrapper[7476]: I0320 08:48:23.992743 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-866dc4744-626qm_2d125bc5-08ce-434a-bde7-0ba8fc0169ea/cluster-autoscaler-operator/0.log" Mar 20 08:48:23.993325 master-0 kubenswrapper[7476]: I0320 08:48:23.993285 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-626qm" event={"ID":"2d125bc5-08ce-434a-bde7-0ba8fc0169ea","Type":"ContainerStarted","Data":"38ff2aa460824904a5715b2a8594c19ce1e116c5bdd552d7c90a8ae16b6aad9d"} Mar 20 08:48:23.996478 master-0 kubenswrapper[7476]: I0320 08:48:23.996388 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-848gc" event={"ID":"e9c0293a-5340-4ebe-bc8f-43e78ba9f280","Type":"ContainerStarted","Data":"173da4e7be06cca34fcb84231efee897dd1fd16593112fe5c00528ddc53e0f96"} Mar 20 08:48:23.999580 master-0 kubenswrapper[7476]: I0320 08:48:23.999518 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-25jrp" event={"ID":"3065e4b4-4493-41ce-b9d2-89315475f74f","Type":"ContainerStarted","Data":"1b69d08e43e09461f0726c1193441ee601de85f0b5b8a1e604d076708c64775f"} Mar 20 08:48:24.000020 master-0 kubenswrapper[7476]: I0320 08:48:23.999973 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-25jrp" Mar 20 08:48:24.002736 master-0 kubenswrapper[7476]: I0320 08:48:24.002691 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-9fl6v" 
event={"ID":"f6a6e991-c861-48f5-bfde-78762a037343","Type":"ContainerStarted","Data":"158152aec5255c0c0f30836ac85f1459094c2aa62d522d1d07878c2186af6949"} Mar 20 08:48:24.005500 master-0 kubenswrapper[7476]: I0320 08:48:24.005429 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-7x9vq" event={"ID":"fec3170d-3f3e-42f5-b20a-da53721c0dac","Type":"ContainerStarted","Data":"d27bdc77827c5790d21f30a5be51defed98d4177328db7f60819cdef6d3d4084"} Mar 20 08:48:24.017014 master-0 kubenswrapper[7476]: I0320 08:48:24.016959 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:48:24.017014 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:48:24.017014 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:48:24.017014 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:48:24.017238 master-0 kubenswrapper[7476]: I0320 08:48:24.017040 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:48:25.017739 master-0 kubenswrapper[7476]: I0320 08:48:25.017666 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:48:25.017739 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:48:25.017739 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:48:25.017739 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:48:25.018704 master-0 
kubenswrapper[7476]: I0320 08:48:25.017774 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:48:26.017358 master-0 kubenswrapper[7476]: I0320 08:48:26.017177 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:48:26.017358 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:48:26.017358 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:48:26.017358 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:48:26.017358 master-0 kubenswrapper[7476]: I0320 08:48:26.017294 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:48:26.022570 master-0 kubenswrapper[7476]: I0320 08:48:26.022504 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-dknxr_22f85e98-eb36-46b2-ab5d-7c21e060cba5/ingress-operator/4.log" Mar 20 08:48:26.023953 master-0 kubenswrapper[7476]: I0320 08:48:26.023848 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-dknxr_22f85e98-eb36-46b2-ab5d-7c21e060cba5/ingress-operator/3.log" Mar 20 08:48:26.024686 master-0 kubenswrapper[7476]: I0320 08:48:26.024609 7476 generic.go:334] "Generic (PLEG): container finished" podID="22f85e98-eb36-46b2-ab5d-7c21e060cba5" containerID="3aae9bbf36ed2a17618b0c45a78846786d8690aa98bce9f3e1b7b0243ecc47f2" exitCode=1 Mar 20 08:48:26.024813 
master-0 kubenswrapper[7476]: I0320 08:48:26.024666 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-dknxr" event={"ID":"22f85e98-eb36-46b2-ab5d-7c21e060cba5","Type":"ContainerDied","Data":"3aae9bbf36ed2a17618b0c45a78846786d8690aa98bce9f3e1b7b0243ecc47f2"}
Mar 20 08:48:26.024813 master-0 kubenswrapper[7476]: I0320 08:48:26.024766 7476 scope.go:117] "RemoveContainer" containerID="4557628b4e1a86ee2671291620562da3ce234a1e5a65125b7811c20080db0e77"
Mar 20 08:48:26.025546 master-0 kubenswrapper[7476]: I0320 08:48:26.025501 7476 scope.go:117] "RemoveContainer" containerID="3aae9bbf36ed2a17618b0c45a78846786d8690aa98bce9f3e1b7b0243ecc47f2"
Mar 20 08:48:26.025926 master-0 kubenswrapper[7476]: E0320 08:48:26.025874 7476 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-dknxr_openshift-ingress-operator(22f85e98-eb36-46b2-ab5d-7c21e060cba5)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-dknxr" podUID="22f85e98-eb36-46b2-ab5d-7c21e060cba5"
Mar 20 08:48:27.016426 master-0 kubenswrapper[7476]: I0320 08:48:27.016335 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:48:27.016426 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:48:27.016426 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:48:27.016426 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:48:27.016729 master-0 kubenswrapper[7476]: I0320 08:48:27.016429 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:48:27.034032 master-0 kubenswrapper[7476]: I0320 08:48:27.033932 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-dknxr_22f85e98-eb36-46b2-ab5d-7c21e060cba5/ingress-operator/4.log"
Mar 20 08:48:27.236969 master-0 kubenswrapper[7476]: I0320 08:48:27.236894 7476 scope.go:117] "RemoveContainer" containerID="43bec40b593829fc4ae8b2676c3d74b6d0bc176c4e642877e74797d8bc72bb1e"
Mar 20 08:48:27.462113 master-0 kubenswrapper[7476]: I0320 08:48:27.462052 7476 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-25jrp container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.22:8443/healthz\": dial tcp 10.128.0.22:8443: connect: connection refused" start-of-body=
Mar 20 08:48:27.462219 master-0 kubenswrapper[7476]: I0320 08:48:27.462127 7476 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-25jrp" podUID="3065e4b4-4493-41ce-b9d2-89315475f74f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.22:8443/healthz\": dial tcp 10.128.0.22:8443: connect: connection refused"
Mar 20 08:48:28.018807 master-0 kubenswrapper[7476]: I0320 08:48:28.018692 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:48:28.018807 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:48:28.018807 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:48:28.018807 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:48:28.018807 master-0 kubenswrapper[7476]: I0320 08:48:28.018792 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:48:28.048144 master-0 kubenswrapper[7476]: I0320 08:48:28.048029 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-gng67_a2a3df6e-e327-4e97-b8f0-f2d6cdd1e5f9/snapshot-controller/3.log"
Mar 20 08:48:28.048144 master-0 kubenswrapper[7476]: I0320 08:48:28.048130 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-gng67" event={"ID":"a2a3df6e-e327-4e97-b8f0-f2d6cdd1e5f9","Type":"ContainerStarted","Data":"012a5b768c0ee0c2fea0a0efdf9099347e45d4700bf345a081f7cefcb6ff719b"}
Mar 20 08:48:28.412166 master-0 kubenswrapper[7476]: I0320 08:48:28.412086 7476 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-25jrp container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.22:8443/healthz\": dial tcp 10.128.0.22:8443: connect: connection refused" start-of-body=
Mar 20 08:48:28.412413 master-0 kubenswrapper[7476]: I0320 08:48:28.412172 7476 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-25jrp" podUID="3065e4b4-4493-41ce-b9d2-89315475f74f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.22:8443/healthz\": dial tcp 10.128.0.22:8443: connect: connection refused"
Mar 20 08:48:29.016202 master-0 kubenswrapper[7476]: I0320 08:48:29.016105 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:48:29.016202 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:48:29.016202 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:48:29.016202 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:48:29.016202 master-0 kubenswrapper[7476]: I0320 08:48:29.016171 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:48:30.018013 master-0 kubenswrapper[7476]: I0320 08:48:30.017871 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:48:30.018013 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:48:30.018013 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:48:30.018013 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:48:30.018013 master-0 kubenswrapper[7476]: I0320 08:48:30.017977 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:48:30.461853 master-0 kubenswrapper[7476]: I0320 08:48:30.461788 7476 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-25jrp container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.22:8443/healthz\": dial tcp 10.128.0.22:8443: connect: connection refused" start-of-body=
Mar 20 08:48:30.462074 master-0 kubenswrapper[7476]: I0320 08:48:30.461875 7476 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-25jrp" podUID="3065e4b4-4493-41ce-b9d2-89315475f74f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.22:8443/healthz\": dial tcp 10.128.0.22:8443: connect: connection refused"
Mar 20 08:48:31.016954 master-0 kubenswrapper[7476]: I0320 08:48:31.016844 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:48:31.016954 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:48:31.016954 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:48:31.016954 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:48:31.016954 master-0 kubenswrapper[7476]: I0320 08:48:31.016941 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:48:31.237166 master-0 kubenswrapper[7476]: I0320 08:48:31.237115 7476 scope.go:117] "RemoveContainer" containerID="ffa42243f2a40a0ca4a5329f1f45102e1878a8f1a80cf13cc28b8ea97fa30941"
Mar 20 08:48:31.413317 master-0 kubenswrapper[7476]: I0320 08:48:31.412146 7476 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-25jrp container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.22:8443/healthz\": dial tcp 10.128.0.22:8443: connect: connection refused" start-of-body=
Mar 20 08:48:31.413317 master-0 kubenswrapper[7476]: I0320 08:48:31.412205 7476 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-25jrp" podUID="3065e4b4-4493-41ce-b9d2-89315475f74f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.22:8443/healthz\": dial tcp 10.128.0.22:8443: connect: connection refused"
Mar 20 08:48:31.413317 master-0 kubenswrapper[7476]: I0320 08:48:31.412285 7476 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-25jrp"
Mar 20 08:48:31.413317 master-0 kubenswrapper[7476]: I0320 08:48:31.412903 7476 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"1b69d08e43e09461f0726c1193441ee601de85f0b5b8a1e604d076708c64775f"} pod="openshift-config-operator/openshift-config-operator-95bf4f4d-25jrp" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted"
Mar 20 08:48:31.413317 master-0 kubenswrapper[7476]: I0320 08:48:31.412932 7476 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-25jrp" podUID="3065e4b4-4493-41ce-b9d2-89315475f74f" containerName="openshift-config-operator" containerID="cri-o://1b69d08e43e09461f0726c1193441ee601de85f0b5b8a1e604d076708c64775f" gracePeriod=30
Mar 20 08:48:31.413719 master-0 kubenswrapper[7476]: I0320 08:48:31.413497 7476 patch_prober.go:28] interesting pod/openshift-config-operator-95bf4f4d-25jrp container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.22:8443/healthz\": dial tcp 10.128.0.22:8443: connect: connection refused" start-of-body=
Mar 20 08:48:31.413719 master-0 kubenswrapper[7476]: I0320 08:48:31.413563 7476 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-25jrp" podUID="3065e4b4-4493-41ce-b9d2-89315475f74f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.22:8443/healthz\": dial tcp 10.128.0.22:8443: connect: connection refused"
Mar 20 08:48:31.828721 master-0 kubenswrapper[7476]: E0320 08:48:31.828641 7476 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-config-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=openshift-config-operator pod=openshift-config-operator-95bf4f4d-25jrp_openshift-config-operator(3065e4b4-4493-41ce-b9d2-89315475f74f)\"" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-25jrp" podUID="3065e4b4-4493-41ce-b9d2-89315475f74f"
Mar 20 08:48:32.016695 master-0 kubenswrapper[7476]: I0320 08:48:32.016546 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:48:32.016695 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:48:32.016695 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:48:32.016695 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:48:32.016695 master-0 kubenswrapper[7476]: I0320 08:48:32.016654 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:48:32.080603 master-0 kubenswrapper[7476]: I0320 08:48:32.080512 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-95bf4f4d-25jrp_3065e4b4-4493-41ce-b9d2-89315475f74f/openshift-config-operator/2.log"
Mar 20 08:48:32.081723 master-0 kubenswrapper[7476]: I0320 08:48:32.081651 7476 generic.go:334] "Generic (PLEG): container finished" podID="3065e4b4-4493-41ce-b9d2-89315475f74f" containerID="1b69d08e43e09461f0726c1193441ee601de85f0b5b8a1e604d076708c64775f" exitCode=255
Mar 20 08:48:32.081878 master-0 kubenswrapper[7476]: I0320 08:48:32.081724 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-25jrp" event={"ID":"3065e4b4-4493-41ce-b9d2-89315475f74f","Type":"ContainerDied","Data":"1b69d08e43e09461f0726c1193441ee601de85f0b5b8a1e604d076708c64775f"}
Mar 20 08:48:32.081878 master-0 kubenswrapper[7476]: I0320 08:48:32.081778 7476 scope.go:117] "RemoveContainer" containerID="bd1d0759a3b11f191f5c7889c156ee6e269182c73bbc7176808f512fb2f1ec9d"
Mar 20 08:48:32.082571 master-0 kubenswrapper[7476]: I0320 08:48:32.082509 7476 scope.go:117] "RemoveContainer" containerID="1b69d08e43e09461f0726c1193441ee601de85f0b5b8a1e604d076708c64775f"
Mar 20 08:48:32.083038 master-0 kubenswrapper[7476]: E0320 08:48:32.082971 7476 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-config-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=openshift-config-operator pod=openshift-config-operator-95bf4f4d-25jrp_openshift-config-operator(3065e4b4-4493-41ce-b9d2-89315475f74f)\"" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-25jrp" podUID="3065e4b4-4493-41ce-b9d2-89315475f74f"
Mar 20 08:48:32.084650 master-0 kubenswrapper[7476]: I0320 08:48:32.084597 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_8c753d068f364b16e3aeb8396b7d9f33/cluster-policy-controller/3.log"
Mar 20 08:48:32.087768 master-0 kubenswrapper[7476]: I0320 08:48:32.087678 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_8c753d068f364b16e3aeb8396b7d9f33/kube-controller-manager/0.log"
Mar 20 08:48:32.087768 master-0 kubenswrapper[7476]: I0320 08:48:32.087749 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"8c753d068f364b16e3aeb8396b7d9f33","Type":"ContainerStarted","Data":"2c6110300fa54d6ed375548abc413fdc3f31d0ebf2371316f5aaf0aa6152114c"}
Mar 20 08:48:32.752451 master-0 kubenswrapper[7476]: E0320 08:48:32.752371 7476 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 20 08:48:33.017020 master-0 kubenswrapper[7476]: I0320 08:48:33.016837 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:48:33.017020 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:48:33.017020 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:48:33.017020 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:48:33.017020 master-0 kubenswrapper[7476]: I0320 08:48:33.016933 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:48:33.097626 master-0 kubenswrapper[7476]: I0320 08:48:33.097563 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-95bf4f4d-25jrp_3065e4b4-4493-41ce-b9d2-89315475f74f/openshift-config-operator/2.log"
Mar 20 08:48:33.779308 master-0 kubenswrapper[7476]: I0320 08:48:33.779151 7476 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 20 08:48:33.779308 master-0 kubenswrapper[7476]: I0320 08:48:33.779258 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 20 08:48:34.017242 master-0 kubenswrapper[7476]: I0320 08:48:34.017127 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:48:34.017242 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:48:34.017242 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:48:34.017242 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:48:34.017242 master-0 kubenswrapper[7476]: I0320 08:48:34.017211 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:48:35.017510 master-0 kubenswrapper[7476]: I0320 08:48:35.017446 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:48:35.017510 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:48:35.017510 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:48:35.017510 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:48:35.018149 master-0 kubenswrapper[7476]: I0320 08:48:35.017547 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:48:35.396807 master-0 kubenswrapper[7476]: I0320 08:48:35.396711 7476 patch_prober.go:28] interesting pod/etcd-operator-8544cbcf9c-7x9vq container/etcd-operator namespace/openshift-etcd-operator: Liveness probe status=failure output="Get \"https://10.128.0.24:8443/healthz\": dial tcp 10.128.0.24:8443: connect: connection refused" start-of-body=
Mar 20 08:48:35.396807 master-0 kubenswrapper[7476]: I0320 08:48:35.396798 7476 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-7x9vq" podUID="fec3170d-3f3e-42f5-b20a-da53721c0dac" containerName="etcd-operator" probeResult="failure" output="Get \"https://10.128.0.24:8443/healthz\": dial tcp 10.128.0.24:8443: connect: connection refused"
Mar 20 08:48:35.753518 master-0 kubenswrapper[7476]: E0320 08:48:35.753400 7476 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0"
Mar 20 08:48:36.017702 master-0 kubenswrapper[7476]: I0320 08:48:36.017544 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:48:36.017702 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:48:36.017702 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:48:36.017702 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:48:36.017702 master-0 kubenswrapper[7476]: I0320 08:48:36.017643 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:48:36.779886 master-0 kubenswrapper[7476]: I0320 08:48:36.779795 7476 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 20 08:48:36.780210 master-0 kubenswrapper[7476]: I0320 08:48:36.779903 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="8c753d068f364b16e3aeb8396b7d9f33" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 20 08:48:37.017182 master-0 kubenswrapper[7476]: I0320 08:48:37.017120 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:48:37.017182 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:48:37.017182 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:48:37.017182 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:48:37.017641 master-0 kubenswrapper[7476]: I0320 08:48:37.017203 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:48:38.017467 master-0 kubenswrapper[7476]: I0320 08:48:38.017390 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:48:38.017467 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:48:38.017467 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:48:38.017467 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:48:38.017467 master-0 kubenswrapper[7476]: I0320 08:48:38.017459 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:48:39.016348 master-0 kubenswrapper[7476]: I0320 08:48:39.016291 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:48:39.016348 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:48:39.016348 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:48:39.016348 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:48:39.016348 master-0 kubenswrapper[7476]: I0320 08:48:39.016373 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:48:40.017716 master-0 kubenswrapper[7476]: I0320 08:48:40.017544 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:48:40.017716 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:48:40.017716 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:48:40.017716 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:48:40.017716 master-0 kubenswrapper[7476]: I0320 08:48:40.017652 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:48:41.018135 master-0 kubenswrapper[7476]: I0320 08:48:41.018079 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:48:41.018135 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:48:41.018135 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:48:41.018135 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:48:41.018943 master-0 kubenswrapper[7476]: I0320 08:48:41.018153 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:48:41.236949 master-0 kubenswrapper[7476]: I0320 08:48:41.236877 7476 scope.go:117] "RemoveContainer" containerID="3aae9bbf36ed2a17618b0c45a78846786d8690aa98bce9f3e1b7b0243ecc47f2"
Mar 20 08:48:41.237367 master-0 kubenswrapper[7476]: E0320 08:48:41.237312 7476 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-dknxr_openshift-ingress-operator(22f85e98-eb36-46b2-ab5d-7c21e060cba5)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-dknxr" podUID="22f85e98-eb36-46b2-ab5d-7c21e060cba5"
Mar 20 08:48:42.016233 master-0 kubenswrapper[7476]: I0320 08:48:42.016190 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:48:42.016233 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:48:42.016233 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:48:42.016233 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:48:42.016623 master-0 kubenswrapper[7476]: I0320 08:48:42.016252 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:48:42.752856 master-0 kubenswrapper[7476]: E0320 08:48:42.752808 7476 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 20 08:48:42.752856 master-0 kubenswrapper[7476]: E0320 08:48:42.752846 7476 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Mar 20 08:48:43.017265 master-0 kubenswrapper[7476]: I0320 08:48:43.017155 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:48:43.017265 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:48:43.017265 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:48:43.017265 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:48:43.017265 master-0 kubenswrapper[7476]: I0320 08:48:43.017226 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:48:43.784829 master-0 kubenswrapper[7476]: I0320 08:48:43.784776 7476 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 20 08:48:43.789117 master-0 kubenswrapper[7476]: I0320 08:48:43.789068 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 20 08:48:44.016651 master-0 kubenswrapper[7476]: I0320 08:48:44.016593 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:48:44.016651 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:48:44.016651 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:48:44.016651 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:48:44.016892 master-0 kubenswrapper[7476]: I0320 08:48:44.016659 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:48:45.034303 master-0 kubenswrapper[7476]: I0320 08:48:45.024493 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:48:45.034303 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:48:45.034303 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:48:45.034303 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:48:45.034303 master-0 kubenswrapper[7476]: I0320 08:48:45.024774 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:48:46.015794 master-0 kubenswrapper[7476]: I0320 08:48:46.015732 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:48:46.015794 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:48:46.015794 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:48:46.015794 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:48:46.016101 master-0 kubenswrapper[7476]: I0320 08:48:46.015802 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:48:46.237570 master-0 kubenswrapper[7476]: I0320 08:48:46.237475 7476 scope.go:117] "RemoveContainer" containerID="1b69d08e43e09461f0726c1193441ee601de85f0b5b8a1e604d076708c64775f"
Mar 20 08:48:47.017462 master-0 kubenswrapper[7476]: I0320 08:48:47.017310 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:48:47.017462 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:48:47.017462 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:48:47.017462 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:48:47.017462 master-0 kubenswrapper[7476]: I0320 08:48:47.017401 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:48:48.017222 master-0 kubenswrapper[7476]: I0320 08:48:48.017062 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:48:48.017222 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:48:48.017222 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:48:48.017222 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:48:48.017222 master-0 kubenswrapper[7476]: I0320 08:48:48.017164 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:48:48.222187 master-0 kubenswrapper[7476]: I0320 08:48:48.222138 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-95bf4f4d-25jrp_3065e4b4-4493-41ce-b9d2-89315475f74f/openshift-config-operator/2.log"
Mar 20 08:48:48.223263 master-0 kubenswrapper[7476]: I0320 08:48:48.223200 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-25jrp" event={"ID":"3065e4b4-4493-41ce-b9d2-89315475f74f","Type":"ContainerStarted","Data":"59ba7bd4aa39cee5c1de95c7109004a23d309869d6116da7e8f294aa326ed6b0"}
Mar 20 08:48:49.017481 master-0 kubenswrapper[7476]: I0320 08:48:49.017412 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:48:49.017481 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:48:49.017481 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:48:49.017481 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:48:49.018082 master-0 kubenswrapper[7476]: I0320 08:48:49.017493 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:48:49.228551 master-0 kubenswrapper[7476]: I0320 08:48:49.228508 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-25jrp"
Mar 20 08:48:50.017289 master-0 kubenswrapper[7476]: I0320 08:48:50.017147 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:48:50.017289 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:48:50.017289 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:48:50.017289 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:48:50.017289 master-0 kubenswrapper[7476]: I0320 08:48:50.017231 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:48:50.240760 master-0 kubenswrapper[7476]: I0320 08:48:50.240690 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-25jrp"
Mar 20 08:48:51.016165 master-0 kubenswrapper[7476]: I0320 08:48:51.016119 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:48:51.016165 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:48:51.016165 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:48:51.016165 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:48:51.016165 master-0 kubenswrapper[7476]: I0320 08:48:51.016172 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:48:51.042451 master-0 kubenswrapper[7476]: I0320 08:48:51.042342 7476 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"]
Mar 20 08:48:51.179079 master-0 kubenswrapper[7476]: I0320 08:48:51.179028 7476 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"]
Mar 20 08:48:51.254068 master-0 kubenswrapper[7476]: I0320 08:48:51.254009 7476 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76ccbbad-62cd-4fdd-8a22-3299f9ef3b42" path="/var/lib/kubelet/pods/76ccbbad-62cd-4fdd-8a22-3299f9ef3b42/volumes"
Mar 20 08:48:51.342068 master-0 kubenswrapper[7476]: I0320 08:48:51.341748 7476 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b46449cf-9fccc"]
Mar 20 08:48:51.342068 master-0 kubenswrapper[7476]: I0320 08:48:51.341994 7476 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-65b46449cf-9fccc" podUID="c200f016-3922-4e90-9061-92fd8c3fad2b" containerName="controller-manager" containerID="cri-o://7ee96cd8cd8c312aa8b33e1833e8a4d908946d2e36d893d8314f3560d327a84e" gracePeriod=30
Mar 20 08:48:51.569858 master-0 kubenswrapper[7476]: I0320 08:48:51.569808 7476 patch_prober.go:28] interesting pod/controller-manager-65b46449cf-9fccc container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.50:8443/healthz\": dial tcp 10.128.0.50:8443: connect: connection refused" start-of-body=
Mar 20 08:48:51.570139 master-0 kubenswrapper[7476]: I0320 08:48:51.570109 7476 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b46449cf-9fccc" podUID="c200f016-3922-4e90-9061-92fd8c3fad2b" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.50:8443/healthz\": dial tcp 10.128.0.50:8443: connect: connection refused"
Mar 20 08:48:51.623955 master-0 kubenswrapper[7476]: I0320 08:48:51.623892 7476 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8488874649-cdk48"]
Mar 20 08:48:51.624310 master-0 kubenswrapper[7476]: I0320 08:48:51.624228 7476 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-8488874649-cdk48" podUID="f67db558-998e-48e3-9b55-b96029ec000c" containerName="route-controller-manager" containerID="cri-o://7e142c46726d66a9f4af952931f5f0ca34fe7b5fddc119c7c4f10f57df64fee8" gracePeriod=30
Mar 20 08:48:51.995087 master-0 kubenswrapper[7476]: I0320 08:48:51.995034 7476 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-controller-manager/controller-manager-65b46449cf-9fccc" Mar 20 08:48:52.016363 master-0 kubenswrapper[7476]: I0320 08:48:52.016296 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:48:52.016363 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:48:52.016363 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:48:52.016363 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:48:52.016629 master-0 kubenswrapper[7476]: I0320 08:48:52.016383 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:48:52.130082 master-0 kubenswrapper[7476]: I0320 08:48:52.129931 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c200f016-3922-4e90-9061-92fd8c3fad2b-serving-cert\") pod \"c200f016-3922-4e90-9061-92fd8c3fad2b\" (UID: \"c200f016-3922-4e90-9061-92fd8c3fad2b\") " Mar 20 08:48:52.130082 master-0 kubenswrapper[7476]: I0320 08:48:52.130028 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c200f016-3922-4e90-9061-92fd8c3fad2b-proxy-ca-bundles\") pod \"c200f016-3922-4e90-9061-92fd8c3fad2b\" (UID: \"c200f016-3922-4e90-9061-92fd8c3fad2b\") " Mar 20 08:48:52.130082 master-0 kubenswrapper[7476]: I0320 08:48:52.130074 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c200f016-3922-4e90-9061-92fd8c3fad2b-client-ca\") pod 
\"c200f016-3922-4e90-9061-92fd8c3fad2b\" (UID: \"c200f016-3922-4e90-9061-92fd8c3fad2b\") " Mar 20 08:48:52.131058 master-0 kubenswrapper[7476]: I0320 08:48:52.130162 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c200f016-3922-4e90-9061-92fd8c3fad2b-config\") pod \"c200f016-3922-4e90-9061-92fd8c3fad2b\" (UID: \"c200f016-3922-4e90-9061-92fd8c3fad2b\") " Mar 20 08:48:52.131058 master-0 kubenswrapper[7476]: I0320 08:48:52.130216 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cnkjm\" (UniqueName: \"kubernetes.io/projected/c200f016-3922-4e90-9061-92fd8c3fad2b-kube-api-access-cnkjm\") pod \"c200f016-3922-4e90-9061-92fd8c3fad2b\" (UID: \"c200f016-3922-4e90-9061-92fd8c3fad2b\") " Mar 20 08:48:52.131058 master-0 kubenswrapper[7476]: I0320 08:48:52.130928 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c200f016-3922-4e90-9061-92fd8c3fad2b-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "c200f016-3922-4e90-9061-92fd8c3fad2b" (UID: "c200f016-3922-4e90-9061-92fd8c3fad2b"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:48:52.132030 master-0 kubenswrapper[7476]: I0320 08:48:52.131974 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c200f016-3922-4e90-9061-92fd8c3fad2b-client-ca" (OuterVolumeSpecName: "client-ca") pod "c200f016-3922-4e90-9061-92fd8c3fad2b" (UID: "c200f016-3922-4e90-9061-92fd8c3fad2b"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:48:52.132155 master-0 kubenswrapper[7476]: I0320 08:48:52.132066 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c200f016-3922-4e90-9061-92fd8c3fad2b-config" (OuterVolumeSpecName: "config") pod "c200f016-3922-4e90-9061-92fd8c3fad2b" (UID: "c200f016-3922-4e90-9061-92fd8c3fad2b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:48:52.134881 master-0 kubenswrapper[7476]: I0320 08:48:52.134255 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c200f016-3922-4e90-9061-92fd8c3fad2b-kube-api-access-cnkjm" (OuterVolumeSpecName: "kube-api-access-cnkjm") pod "c200f016-3922-4e90-9061-92fd8c3fad2b" (UID: "c200f016-3922-4e90-9061-92fd8c3fad2b"). InnerVolumeSpecName "kube-api-access-cnkjm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:48:52.134881 master-0 kubenswrapper[7476]: I0320 08:48:52.134525 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c200f016-3922-4e90-9061-92fd8c3fad2b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c200f016-3922-4e90-9061-92fd8c3fad2b" (UID: "c200f016-3922-4e90-9061-92fd8c3fad2b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:48:52.231709 master-0 kubenswrapper[7476]: I0320 08:48:52.231652 7476 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c200f016-3922-4e90-9061-92fd8c3fad2b-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 20 08:48:52.231709 master-0 kubenswrapper[7476]: I0320 08:48:52.231689 7476 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c200f016-3922-4e90-9061-92fd8c3fad2b-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Mar 20 08:48:52.231709 master-0 kubenswrapper[7476]: I0320 08:48:52.231700 7476 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c200f016-3922-4e90-9061-92fd8c3fad2b-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 20 08:48:52.231709 master-0 kubenswrapper[7476]: I0320 08:48:52.231708 7476 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c200f016-3922-4e90-9061-92fd8c3fad2b-config\") on node \"master-0\" DevicePath \"\"" Mar 20 08:48:52.231709 master-0 kubenswrapper[7476]: I0320 08:48:52.231717 7476 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cnkjm\" (UniqueName: \"kubernetes.io/projected/c200f016-3922-4e90-9061-92fd8c3fad2b-kube-api-access-cnkjm\") on node \"master-0\" DevicePath \"\"" Mar 20 08:48:52.236867 master-0 kubenswrapper[7476]: I0320 08:48:52.236817 7476 scope.go:117] "RemoveContainer" containerID="3aae9bbf36ed2a17618b0c45a78846786d8690aa98bce9f3e1b7b0243ecc47f2" Mar 20 08:48:52.237086 master-0 kubenswrapper[7476]: E0320 08:48:52.237054 7476 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator 
pod=ingress-operator-66b84d69b-dknxr_openshift-ingress-operator(22f85e98-eb36-46b2-ab5d-7c21e060cba5)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-dknxr" podUID="22f85e98-eb36-46b2-ab5d-7c21e060cba5" Mar 20 08:48:52.259069 master-0 kubenswrapper[7476]: I0320 08:48:52.259015 7476 generic.go:334] "Generic (PLEG): container finished" podID="f67db558-998e-48e3-9b55-b96029ec000c" containerID="7e142c46726d66a9f4af952931f5f0ca34fe7b5fddc119c7c4f10f57df64fee8" exitCode=0 Mar 20 08:48:52.259294 master-0 kubenswrapper[7476]: I0320 08:48:52.259098 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8488874649-cdk48" event={"ID":"f67db558-998e-48e3-9b55-b96029ec000c","Type":"ContainerDied","Data":"7e142c46726d66a9f4af952931f5f0ca34fe7b5fddc119c7c4f10f57df64fee8"} Mar 20 08:48:52.261264 master-0 kubenswrapper[7476]: I0320 08:48:52.261230 7476 generic.go:334] "Generic (PLEG): container finished" podID="c200f016-3922-4e90-9061-92fd8c3fad2b" containerID="7ee96cd8cd8c312aa8b33e1833e8a4d908946d2e36d893d8314f3560d327a84e" exitCode=0 Mar 20 08:48:52.261345 master-0 kubenswrapper[7476]: I0320 08:48:52.261286 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b46449cf-9fccc" event={"ID":"c200f016-3922-4e90-9061-92fd8c3fad2b","Type":"ContainerDied","Data":"7ee96cd8cd8c312aa8b33e1833e8a4d908946d2e36d893d8314f3560d327a84e"} Mar 20 08:48:52.261345 master-0 kubenswrapper[7476]: I0320 08:48:52.261309 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b46449cf-9fccc" event={"ID":"c200f016-3922-4e90-9061-92fd8c3fad2b","Type":"ContainerDied","Data":"e4fba2632a8ff841c8486ac8a6e820628bb0ebb1d21ae56e7fae136ec118d2c7"} Mar 20 08:48:52.261345 master-0 kubenswrapper[7476]: I0320 08:48:52.261332 7476 scope.go:117] "RemoveContainer" 
containerID="7ee96cd8cd8c312aa8b33e1833e8a4d908946d2e36d893d8314f3560d327a84e" Mar 20 08:48:52.261486 master-0 kubenswrapper[7476]: I0320 08:48:52.261463 7476 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b46449cf-9fccc" Mar 20 08:48:52.405789 master-0 kubenswrapper[7476]: I0320 08:48:52.405462 7476 scope.go:117] "RemoveContainer" containerID="b3eca12faf8457705f48e50666d3636087af70a19b92da4128f308d07a0de23b" Mar 20 08:48:52.449219 master-0 kubenswrapper[7476]: I0320 08:48:52.449180 7476 scope.go:117] "RemoveContainer" containerID="7ee96cd8cd8c312aa8b33e1833e8a4d908946d2e36d893d8314f3560d327a84e" Mar 20 08:48:52.454075 master-0 kubenswrapper[7476]: E0320 08:48:52.450354 7476 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ee96cd8cd8c312aa8b33e1833e8a4d908946d2e36d893d8314f3560d327a84e\": container with ID starting with 7ee96cd8cd8c312aa8b33e1833e8a4d908946d2e36d893d8314f3560d327a84e not found: ID does not exist" containerID="7ee96cd8cd8c312aa8b33e1833e8a4d908946d2e36d893d8314f3560d327a84e" Mar 20 08:48:52.454075 master-0 kubenswrapper[7476]: I0320 08:48:52.450397 7476 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ee96cd8cd8c312aa8b33e1833e8a4d908946d2e36d893d8314f3560d327a84e"} err="failed to get container status \"7ee96cd8cd8c312aa8b33e1833e8a4d908946d2e36d893d8314f3560d327a84e\": rpc error: code = NotFound desc = could not find container \"7ee96cd8cd8c312aa8b33e1833e8a4d908946d2e36d893d8314f3560d327a84e\": container with ID starting with 7ee96cd8cd8c312aa8b33e1833e8a4d908946d2e36d893d8314f3560d327a84e not found: ID does not exist" Mar 20 08:48:52.454075 master-0 kubenswrapper[7476]: I0320 08:48:52.450420 7476 scope.go:117] "RemoveContainer" containerID="b3eca12faf8457705f48e50666d3636087af70a19b92da4128f308d07a0de23b" Mar 20 08:48:52.454075 master-0 
kubenswrapper[7476]: E0320 08:48:52.450637 7476 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3eca12faf8457705f48e50666d3636087af70a19b92da4128f308d07a0de23b\": container with ID starting with b3eca12faf8457705f48e50666d3636087af70a19b92da4128f308d07a0de23b not found: ID does not exist" containerID="b3eca12faf8457705f48e50666d3636087af70a19b92da4128f308d07a0de23b" Mar 20 08:48:52.454075 master-0 kubenswrapper[7476]: I0320 08:48:52.450653 7476 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3eca12faf8457705f48e50666d3636087af70a19b92da4128f308d07a0de23b"} err="failed to get container status \"b3eca12faf8457705f48e50666d3636087af70a19b92da4128f308d07a0de23b\": rpc error: code = NotFound desc = could not find container \"b3eca12faf8457705f48e50666d3636087af70a19b92da4128f308d07a0de23b\": container with ID starting with b3eca12faf8457705f48e50666d3636087af70a19b92da4128f308d07a0de23b not found: ID does not exist" Mar 20 08:48:52.535778 master-0 kubenswrapper[7476]: I0320 08:48:52.535702 7476 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b46449cf-9fccc"] Mar 20 08:48:52.789383 master-0 kubenswrapper[7476]: I0320 08:48:52.788529 7476 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-65b46449cf-9fccc"] Mar 20 08:48:52.878533 master-0 kubenswrapper[7476]: I0320 08:48:52.878492 7476 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8488874649-cdk48" Mar 20 08:48:52.945673 master-0 kubenswrapper[7476]: I0320 08:48:52.945590 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f67db558-998e-48e3-9b55-b96029ec000c-client-ca\") pod \"f67db558-998e-48e3-9b55-b96029ec000c\" (UID: \"f67db558-998e-48e3-9b55-b96029ec000c\") " Mar 20 08:48:52.945673 master-0 kubenswrapper[7476]: I0320 08:48:52.945678 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4lpr\" (UniqueName: \"kubernetes.io/projected/f67db558-998e-48e3-9b55-b96029ec000c-kube-api-access-j4lpr\") pod \"f67db558-998e-48e3-9b55-b96029ec000c\" (UID: \"f67db558-998e-48e3-9b55-b96029ec000c\") " Mar 20 08:48:52.946130 master-0 kubenswrapper[7476]: I0320 08:48:52.945752 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f67db558-998e-48e3-9b55-b96029ec000c-config\") pod \"f67db558-998e-48e3-9b55-b96029ec000c\" (UID: \"f67db558-998e-48e3-9b55-b96029ec000c\") " Mar 20 08:48:52.946130 master-0 kubenswrapper[7476]: I0320 08:48:52.945831 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f67db558-998e-48e3-9b55-b96029ec000c-serving-cert\") pod \"f67db558-998e-48e3-9b55-b96029ec000c\" (UID: \"f67db558-998e-48e3-9b55-b96029ec000c\") " Mar 20 08:48:52.946406 master-0 kubenswrapper[7476]: I0320 08:48:52.946323 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f67db558-998e-48e3-9b55-b96029ec000c-client-ca" (OuterVolumeSpecName: "client-ca") pod "f67db558-998e-48e3-9b55-b96029ec000c" (UID: "f67db558-998e-48e3-9b55-b96029ec000c"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:48:52.946661 master-0 kubenswrapper[7476]: I0320 08:48:52.946594 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f67db558-998e-48e3-9b55-b96029ec000c-config" (OuterVolumeSpecName: "config") pod "f67db558-998e-48e3-9b55-b96029ec000c" (UID: "f67db558-998e-48e3-9b55-b96029ec000c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:48:52.967427 master-0 kubenswrapper[7476]: I0320 08:48:52.948283 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f67db558-998e-48e3-9b55-b96029ec000c-kube-api-access-j4lpr" (OuterVolumeSpecName: "kube-api-access-j4lpr") pod "f67db558-998e-48e3-9b55-b96029ec000c" (UID: "f67db558-998e-48e3-9b55-b96029ec000c"). InnerVolumeSpecName "kube-api-access-j4lpr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:48:52.967427 master-0 kubenswrapper[7476]: I0320 08:48:52.950348 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f67db558-998e-48e3-9b55-b96029ec000c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f67db558-998e-48e3-9b55-b96029ec000c" (UID: "f67db558-998e-48e3-9b55-b96029ec000c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:48:53.017674 master-0 kubenswrapper[7476]: I0320 08:48:53.017598 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:48:53.017674 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:48:53.017674 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:48:53.017674 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:48:53.017972 master-0 kubenswrapper[7476]: I0320 08:48:53.017689 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:48:53.047418 master-0 kubenswrapper[7476]: I0320 08:48:53.047295 7476 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f67db558-998e-48e3-9b55-b96029ec000c-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 20 08:48:53.047418 master-0 kubenswrapper[7476]: I0320 08:48:53.047362 7476 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j4lpr\" (UniqueName: \"kubernetes.io/projected/f67db558-998e-48e3-9b55-b96029ec000c-kube-api-access-j4lpr\") on node \"master-0\" DevicePath \"\"" Mar 20 08:48:53.047418 master-0 kubenswrapper[7476]: I0320 08:48:53.047383 7476 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f67db558-998e-48e3-9b55-b96029ec000c-config\") on node \"master-0\" DevicePath \"\"" Mar 20 08:48:53.047418 master-0 kubenswrapper[7476]: I0320 08:48:53.047401 7476 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/f67db558-998e-48e3-9b55-b96029ec000c-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 20 08:48:53.245942 master-0 kubenswrapper[7476]: I0320 08:48:53.245880 7476 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c200f016-3922-4e90-9061-92fd8c3fad2b" path="/var/lib/kubelet/pods/c200f016-3922-4e90-9061-92fd8c3fad2b/volumes" Mar 20 08:48:53.268219 master-0 kubenswrapper[7476]: I0320 08:48:53.268174 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8488874649-cdk48" event={"ID":"f67db558-998e-48e3-9b55-b96029ec000c","Type":"ContainerDied","Data":"9618eb1b1d712759bbe73dc554246ea95720ea5ad03e699ed75c6e4e3e82a275"} Mar 20 08:48:53.268548 master-0 kubenswrapper[7476]: I0320 08:48:53.268207 7476 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8488874649-cdk48" Mar 20 08:48:53.268548 master-0 kubenswrapper[7476]: I0320 08:48:53.268235 7476 scope.go:117] "RemoveContainer" containerID="7e142c46726d66a9f4af952931f5f0ca34fe7b5fddc119c7c4f10f57df64fee8" Mar 20 08:48:53.736525 master-0 kubenswrapper[7476]: I0320 08:48:53.736469 7476 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8488874649-cdk48"] Mar 20 08:48:53.810814 master-0 kubenswrapper[7476]: I0320 08:48:53.810763 7476 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8488874649-cdk48"] Mar 20 08:48:54.017139 master-0 kubenswrapper[7476]: I0320 08:48:54.017021 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:48:54.017139 master-0 kubenswrapper[7476]: [-]has-synced failed: 
reason withheld Mar 20 08:48:54.017139 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:48:54.017139 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:48:54.017139 master-0 kubenswrapper[7476]: I0320 08:48:54.017116 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:48:55.017877 master-0 kubenswrapper[7476]: I0320 08:48:55.017808 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:48:55.017877 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:48:55.017877 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:48:55.017877 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:48:55.018739 master-0 kubenswrapper[7476]: I0320 08:48:55.017895 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:48:55.245956 master-0 kubenswrapper[7476]: I0320 08:48:55.245873 7476 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f67db558-998e-48e3-9b55-b96029ec000c" path="/var/lib/kubelet/pods/f67db558-998e-48e3-9b55-b96029ec000c/volumes" Mar 20 08:48:56.017181 master-0 kubenswrapper[7476]: I0320 08:48:56.017078 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:48:56.017181 master-0 
kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:48:56.017181 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:48:56.017181 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:48:56.017181 master-0 kubenswrapper[7476]: I0320 08:48:56.017159 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:48:57.017592 master-0 kubenswrapper[7476]: I0320 08:48:57.017478 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:48:57.017592 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:48:57.017592 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:48:57.017592 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:48:57.018563 master-0 kubenswrapper[7476]: I0320 08:48:57.017592 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:48:58.017568 master-0 kubenswrapper[7476]: I0320 08:48:58.017474 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:48:58.017568 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:48:58.017568 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:48:58.017568 master-0 kubenswrapper[7476]: healthz check failed Mar 20 
08:48:58.018553 master-0 kubenswrapper[7476]: I0320 08:48:58.017576 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:48:59.017717 master-0 kubenswrapper[7476]: I0320 08:48:59.017602 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:48:59.017717 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:48:59.017717 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:48:59.017717 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:48:59.017717 master-0 kubenswrapper[7476]: I0320 08:48:59.017702 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:48:59.341967 master-0 kubenswrapper[7476]: I0320 08:48:59.341818 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_8c753d068f364b16e3aeb8396b7d9f33/cluster-policy-controller/3.log" Mar 20 08:48:59.344873 master-0 kubenswrapper[7476]: I0320 08:48:59.344798 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_8c753d068f364b16e3aeb8396b7d9f33/kube-controller-manager-cert-syncer/0.log" Mar 20 08:48:59.346040 master-0 kubenswrapper[7476]: I0320 08:48:59.345982 7476 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_8c753d068f364b16e3aeb8396b7d9f33/kube-controller-manager/0.log" Mar 20 08:48:59.346119 master-0 kubenswrapper[7476]: I0320 08:48:59.346082 7476 generic.go:334] "Generic (PLEG): container finished" podID="8c753d068f364b16e3aeb8396b7d9f33" containerID="4d5da7565de19239fcc0c39fa988fe40900703ed15ce4e08aa33c43306bfa249" exitCode=1 Mar 20 08:48:59.346169 master-0 kubenswrapper[7476]: I0320 08:48:59.346132 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"8c753d068f364b16e3aeb8396b7d9f33","Type":"ContainerDied","Data":"4d5da7565de19239fcc0c39fa988fe40900703ed15ce4e08aa33c43306bfa249"} Mar 20 08:48:59.346944 master-0 kubenswrapper[7476]: I0320 08:48:59.346901 7476 scope.go:117] "RemoveContainer" containerID="4d5da7565de19239fcc0c39fa988fe40900703ed15ce4e08aa33c43306bfa249" Mar 20 08:49:00.016605 master-0 kubenswrapper[7476]: I0320 08:49:00.016488 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:49:00.016605 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:49:00.016605 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:49:00.016605 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:49:00.016605 master-0 kubenswrapper[7476]: I0320 08:49:00.016566 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:49:00.355152 master-0 kubenswrapper[7476]: I0320 08:49:00.355101 7476 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_8c753d068f364b16e3aeb8396b7d9f33/cluster-policy-controller/3.log"
Mar 20 08:49:00.356770 master-0 kubenswrapper[7476]: I0320 08:49:00.356727 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_8c753d068f364b16e3aeb8396b7d9f33/kube-controller-manager-cert-syncer/0.log"
Mar 20 08:49:00.357353 master-0 kubenswrapper[7476]: I0320 08:49:00.357320 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_8c753d068f364b16e3aeb8396b7d9f33/kube-controller-manager/0.log"
Mar 20 08:49:00.357459 master-0 kubenswrapper[7476]: I0320 08:49:00.357386 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"8c753d068f364b16e3aeb8396b7d9f33","Type":"ContainerStarted","Data":"f78e40298df65c394b57075117fe7f98a0fc45e73d30d6562f972d2e2a4f82d6"}
Mar 20 08:49:01.016369 master-0 kubenswrapper[7476]: I0320 08:49:01.016312 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:49:01.016369 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:49:01.016369 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:49:01.016369 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:49:01.016720 master-0 kubenswrapper[7476]: I0320 08:49:01.016390 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:49:02.016424 master-0 kubenswrapper[7476]: I0320 08:49:02.016355 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:49:02.016424 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:49:02.016424 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:49:02.016424 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:49:02.017569 master-0 kubenswrapper[7476]: I0320 08:49:02.017519 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:49:03.016103 master-0 kubenswrapper[7476]: I0320 08:49:03.016051 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:49:03.016103 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:49:03.016103 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:49:03.016103 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:49:03.016465 master-0 kubenswrapper[7476]: I0320 08:49:03.016111 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:49:03.140111 master-0 kubenswrapper[7476]: I0320 08:49:03.140048 7476 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d86cb9b59-smbxv"]
Mar 20 08:49:03.140453 master-0 kubenswrapper[7476]: E0320 08:49:03.140413 7476 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92600726-933f-41eb-a329-1fcc68dc95c1" containerName="installer"
Mar 20 08:49:03.140453 master-0 kubenswrapper[7476]: I0320 08:49:03.140434 7476 state_mem.go:107] "Deleted CPUSet assignment" podUID="92600726-933f-41eb-a329-1fcc68dc95c1" containerName="installer"
Mar 20 08:49:03.140453 master-0 kubenswrapper[7476]: E0320 08:49:03.140454 7476 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fae0c983-2cb4-4749-97ff-a718a9fb6563" containerName="installer"
Mar 20 08:49:03.140603 master-0 kubenswrapper[7476]: I0320 08:49:03.140462 7476 state_mem.go:107] "Deleted CPUSet assignment" podUID="fae0c983-2cb4-4749-97ff-a718a9fb6563" containerName="installer"
Mar 20 08:49:03.140603 master-0 kubenswrapper[7476]: E0320 08:49:03.140480 7476 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f67db558-998e-48e3-9b55-b96029ec000c" containerName="route-controller-manager"
Mar 20 08:49:03.140603 master-0 kubenswrapper[7476]: I0320 08:49:03.140486 7476 state_mem.go:107] "Deleted CPUSet assignment" podUID="f67db558-998e-48e3-9b55-b96029ec000c" containerName="route-controller-manager"
Mar 20 08:49:03.140603 master-0 kubenswrapper[7476]: E0320 08:49:03.140494 7476 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26923e70-56a5-4020-8b55-510879ec6fd4" containerName="installer"
Mar 20 08:49:03.140603 master-0 kubenswrapper[7476]: I0320 08:49:03.140500 7476 state_mem.go:107] "Deleted CPUSet assignment" podUID="26923e70-56a5-4020-8b55-510879ec6fd4" containerName="installer"
Mar 20 08:49:03.140603 master-0 kubenswrapper[7476]: E0320 08:49:03.140512 7476 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cdd5ac8-4c2e-4680-b697-0e5d94136fe4" containerName="installer"
Mar 20 08:49:03.140603 master-0 kubenswrapper[7476]: I0320 08:49:03.140517 7476 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cdd5ac8-4c2e-4680-b697-0e5d94136fe4" containerName="installer"
Mar 20 08:49:03.140603 master-0 kubenswrapper[7476]: E0320 08:49:03.140536 7476 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c200f016-3922-4e90-9061-92fd8c3fad2b" containerName="controller-manager"
Mar 20 08:49:03.140603 master-0 kubenswrapper[7476]: I0320 08:49:03.140542 7476 state_mem.go:107] "Deleted CPUSet assignment" podUID="c200f016-3922-4e90-9061-92fd8c3fad2b" containerName="controller-manager"
Mar 20 08:49:03.140603 master-0 kubenswrapper[7476]: E0320 08:49:03.140550 7476 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c200f016-3922-4e90-9061-92fd8c3fad2b" containerName="controller-manager"
Mar 20 08:49:03.140603 master-0 kubenswrapper[7476]: I0320 08:49:03.140556 7476 state_mem.go:107] "Deleted CPUSet assignment" podUID="c200f016-3922-4e90-9061-92fd8c3fad2b" containerName="controller-manager"
Mar 20 08:49:03.140603 master-0 kubenswrapper[7476]: E0320 08:49:03.140569 7476 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76ccbbad-62cd-4fdd-8a22-3299f9ef3b42" containerName="installer"
Mar 20 08:49:03.140603 master-0 kubenswrapper[7476]: I0320 08:49:03.140575 7476 state_mem.go:107] "Deleted CPUSet assignment" podUID="76ccbbad-62cd-4fdd-8a22-3299f9ef3b42" containerName="installer"
Mar 20 08:49:03.141084 master-0 kubenswrapper[7476]: I0320 08:49:03.140675 7476 memory_manager.go:354] "RemoveStaleState removing state" podUID="f67db558-998e-48e3-9b55-b96029ec000c" containerName="route-controller-manager"
Mar 20 08:49:03.141084 master-0 kubenswrapper[7476]: I0320 08:49:03.140691 7476 memory_manager.go:354] "RemoveStaleState removing state" podUID="fae0c983-2cb4-4749-97ff-a718a9fb6563" containerName="installer"
Mar 20 08:49:03.141084 master-0 kubenswrapper[7476]: I0320 08:49:03.140697 7476 memory_manager.go:354] "RemoveStaleState removing state" podUID="26923e70-56a5-4020-8b55-510879ec6fd4" containerName="installer"
Mar 20 08:49:03.141084 master-0 kubenswrapper[7476]: I0320 08:49:03.140706 7476 memory_manager.go:354] "RemoveStaleState removing state" podUID="76ccbbad-62cd-4fdd-8a22-3299f9ef3b42" containerName="installer"
Mar 20 08:49:03.141084 master-0 kubenswrapper[7476]: I0320 08:49:03.140718 7476 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cdd5ac8-4c2e-4680-b697-0e5d94136fe4" containerName="installer"
Mar 20 08:49:03.141084 master-0 kubenswrapper[7476]: I0320 08:49:03.140724 7476 memory_manager.go:354] "RemoveStaleState removing state" podUID="c200f016-3922-4e90-9061-92fd8c3fad2b" containerName="controller-manager"
Mar 20 08:49:03.141084 master-0 kubenswrapper[7476]: I0320 08:49:03.140732 7476 memory_manager.go:354] "RemoveStaleState removing state" podUID="92600726-933f-41eb-a329-1fcc68dc95c1" containerName="installer"
Mar 20 08:49:03.141084 master-0 kubenswrapper[7476]: I0320 08:49:03.140740 7476 memory_manager.go:354] "RemoveStaleState removing state" podUID="c200f016-3922-4e90-9061-92fd8c3fad2b" containerName="controller-manager"
Mar 20 08:49:03.141419 master-0 kubenswrapper[7476]: I0320 08:49:03.141165 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d86cb9b59-smbxv"
Mar 20 08:49:03.142974 master-0 kubenswrapper[7476]: I0320 08:49:03.142922 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-6tblf"
Mar 20 08:49:03.143762 master-0 kubenswrapper[7476]: I0320 08:49:03.143733 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Mar 20 08:49:03.143993 master-0 kubenswrapper[7476]: I0320 08:49:03.143969 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Mar 20 08:49:03.144346 master-0 kubenswrapper[7476]: I0320 08:49:03.144256 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Mar 20 08:49:03.144780 master-0 kubenswrapper[7476]: I0320 08:49:03.144499 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Mar 20 08:49:03.144780 master-0 kubenswrapper[7476]: I0320 08:49:03.144365 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Mar 20 08:49:03.149999 master-0 kubenswrapper[7476]: I0320 08:49:03.147886 7476 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-bc85986b9-8p79x"]
Mar 20 08:49:03.149999 master-0 kubenswrapper[7476]: E0320 08:49:03.148241 7476 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c200f016-3922-4e90-9061-92fd8c3fad2b" containerName="controller-manager"
Mar 20 08:49:03.149999 master-0 kubenswrapper[7476]: I0320 08:49:03.148282 7476 state_mem.go:107] "Deleted CPUSet assignment" podUID="c200f016-3922-4e90-9061-92fd8c3fad2b" containerName="controller-manager"
Mar 20 08:49:03.149999 master-0 kubenswrapper[7476]: I0320 08:49:03.148445 7476 memory_manager.go:354] "RemoveStaleState removing state" podUID="c200f016-3922-4e90-9061-92fd8c3fad2b" containerName="controller-manager"
Mar 20 08:49:03.149999 master-0 kubenswrapper[7476]: I0320 08:49:03.148979 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-bc85986b9-8p79x"
Mar 20 08:49:03.151776 master-0 kubenswrapper[7476]: I0320 08:49:03.151639 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-c9tw2"
Mar 20 08:49:03.151776 master-0 kubenswrapper[7476]: I0320 08:49:03.151689 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d86cb9b59-smbxv"]
Mar 20 08:49:03.152181 master-0 kubenswrapper[7476]: I0320 08:49:03.152152 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Mar 20 08:49:03.156313 master-0 kubenswrapper[7476]: I0320 08:49:03.154177 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Mar 20 08:49:03.165788 master-0 kubenswrapper[7476]: I0320 08:49:03.165731 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Mar 20 08:49:03.166007 master-0 kubenswrapper[7476]: I0320 08:49:03.165932 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Mar 20 08:49:03.166190 master-0 kubenswrapper[7476]: I0320 08:49:03.166157 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Mar 20 08:49:03.172204 master-0 kubenswrapper[7476]: I0320 08:49:03.170604 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Mar 20 08:49:03.173938 master-0 kubenswrapper[7476]: I0320 08:49:03.173906 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-bc85986b9-8p79x"]
Mar 20 08:49:03.294913 master-0 kubenswrapper[7476]: I0320 08:49:03.294740 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/41ac891d-b41d-43c4-be46-35f39671477a-proxy-ca-bundles\") pod \"controller-manager-bc85986b9-8p79x\" (UID: \"41ac891d-b41d-43c4-be46-35f39671477a\") " pod="openshift-controller-manager/controller-manager-bc85986b9-8p79x"
Mar 20 08:49:03.294913 master-0 kubenswrapper[7476]: I0320 08:49:03.294822 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/240ba61a-e439-4f94-b9b3-7903b9b1bc05-config\") pod \"route-controller-manager-7d86cb9b59-smbxv\" (UID: \"240ba61a-e439-4f94-b9b3-7903b9b1bc05\") " pod="openshift-route-controller-manager/route-controller-manager-7d86cb9b59-smbxv"
Mar 20 08:49:03.294913 master-0 kubenswrapper[7476]: I0320 08:49:03.294890 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41ac891d-b41d-43c4-be46-35f39671477a-config\") pod \"controller-manager-bc85986b9-8p79x\" (UID: \"41ac891d-b41d-43c4-be46-35f39671477a\") " pod="openshift-controller-manager/controller-manager-bc85986b9-8p79x"
Mar 20 08:49:03.294913 master-0 kubenswrapper[7476]: I0320 08:49:03.294915 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/240ba61a-e439-4f94-b9b3-7903b9b1bc05-serving-cert\") pod \"route-controller-manager-7d86cb9b59-smbxv\" (UID: \"240ba61a-e439-4f94-b9b3-7903b9b1bc05\") " pod="openshift-route-controller-manager/route-controller-manager-7d86cb9b59-smbxv"
Mar 20 08:49:03.295230 master-0 kubenswrapper[7476]: I0320 08:49:03.294950 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41ac891d-b41d-43c4-be46-35f39671477a-serving-cert\") pod \"controller-manager-bc85986b9-8p79x\" (UID: \"41ac891d-b41d-43c4-be46-35f39671477a\") " pod="openshift-controller-manager/controller-manager-bc85986b9-8p79x"
Mar 20 08:49:03.295230 master-0 kubenswrapper[7476]: I0320 08:49:03.294982 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/240ba61a-e439-4f94-b9b3-7903b9b1bc05-client-ca\") pod \"route-controller-manager-7d86cb9b59-smbxv\" (UID: \"240ba61a-e439-4f94-b9b3-7903b9b1bc05\") " pod="openshift-route-controller-manager/route-controller-manager-7d86cb9b59-smbxv"
Mar 20 08:49:03.295230 master-0 kubenswrapper[7476]: I0320 08:49:03.295003 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92pwh\" (UniqueName: \"kubernetes.io/projected/240ba61a-e439-4f94-b9b3-7903b9b1bc05-kube-api-access-92pwh\") pod \"route-controller-manager-7d86cb9b59-smbxv\" (UID: \"240ba61a-e439-4f94-b9b3-7903b9b1bc05\") " pod="openshift-route-controller-manager/route-controller-manager-7d86cb9b59-smbxv"
Mar 20 08:49:03.295230 master-0 kubenswrapper[7476]: I0320 08:49:03.295027 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/41ac891d-b41d-43c4-be46-35f39671477a-client-ca\") pod \"controller-manager-bc85986b9-8p79x\" (UID: \"41ac891d-b41d-43c4-be46-35f39671477a\") " pod="openshift-controller-manager/controller-manager-bc85986b9-8p79x"
Mar 20 08:49:03.295230 master-0 kubenswrapper[7476]: I0320 08:49:03.295051 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpksq\" (UniqueName: \"kubernetes.io/projected/41ac891d-b41d-43c4-be46-35f39671477a-kube-api-access-zpksq\") pod \"controller-manager-bc85986b9-8p79x\" (UID: \"41ac891d-b41d-43c4-be46-35f39671477a\") " pod="openshift-controller-manager/controller-manager-bc85986b9-8p79x"
Mar 20 08:49:03.396443 master-0 kubenswrapper[7476]: I0320 08:49:03.396370 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/41ac891d-b41d-43c4-be46-35f39671477a-proxy-ca-bundles\") pod \"controller-manager-bc85986b9-8p79x\" (UID: \"41ac891d-b41d-43c4-be46-35f39671477a\") " pod="openshift-controller-manager/controller-manager-bc85986b9-8p79x"
Mar 20 08:49:03.396443 master-0 kubenswrapper[7476]: I0320 08:49:03.396439 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/240ba61a-e439-4f94-b9b3-7903b9b1bc05-config\") pod \"route-controller-manager-7d86cb9b59-smbxv\" (UID: \"240ba61a-e439-4f94-b9b3-7903b9b1bc05\") " pod="openshift-route-controller-manager/route-controller-manager-7d86cb9b59-smbxv"
Mar 20 08:49:03.396679 master-0 kubenswrapper[7476]: I0320 08:49:03.396489 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41ac891d-b41d-43c4-be46-35f39671477a-config\") pod \"controller-manager-bc85986b9-8p79x\" (UID: \"41ac891d-b41d-43c4-be46-35f39671477a\") " pod="openshift-controller-manager/controller-manager-bc85986b9-8p79x"
Mar 20 08:49:03.396679 master-0 kubenswrapper[7476]: I0320 08:49:03.396506 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/240ba61a-e439-4f94-b9b3-7903b9b1bc05-serving-cert\") pod \"route-controller-manager-7d86cb9b59-smbxv\" (UID: \"240ba61a-e439-4f94-b9b3-7903b9b1bc05\") " pod="openshift-route-controller-manager/route-controller-manager-7d86cb9b59-smbxv"
Mar 20 08:49:03.396679 master-0 kubenswrapper[7476]: I0320 08:49:03.396538 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41ac891d-b41d-43c4-be46-35f39671477a-serving-cert\") pod \"controller-manager-bc85986b9-8p79x\" (UID: \"41ac891d-b41d-43c4-be46-35f39671477a\") " pod="openshift-controller-manager/controller-manager-bc85986b9-8p79x"
Mar 20 08:49:03.396679 master-0 kubenswrapper[7476]: I0320 08:49:03.396564 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/240ba61a-e439-4f94-b9b3-7903b9b1bc05-client-ca\") pod \"route-controller-manager-7d86cb9b59-smbxv\" (UID: \"240ba61a-e439-4f94-b9b3-7903b9b1bc05\") " pod="openshift-route-controller-manager/route-controller-manager-7d86cb9b59-smbxv"
Mar 20 08:49:03.396679 master-0 kubenswrapper[7476]: I0320 08:49:03.396579 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92pwh\" (UniqueName: \"kubernetes.io/projected/240ba61a-e439-4f94-b9b3-7903b9b1bc05-kube-api-access-92pwh\") pod \"route-controller-manager-7d86cb9b59-smbxv\" (UID: \"240ba61a-e439-4f94-b9b3-7903b9b1bc05\") " pod="openshift-route-controller-manager/route-controller-manager-7d86cb9b59-smbxv"
Mar 20 08:49:03.396679 master-0 kubenswrapper[7476]: I0320 08:49:03.396595 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/41ac891d-b41d-43c4-be46-35f39671477a-client-ca\") pod \"controller-manager-bc85986b9-8p79x\" (UID: \"41ac891d-b41d-43c4-be46-35f39671477a\") " pod="openshift-controller-manager/controller-manager-bc85986b9-8p79x"
Mar 20 08:49:03.396679 master-0 kubenswrapper[7476]: I0320 08:49:03.396613 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpksq\" (UniqueName: \"kubernetes.io/projected/41ac891d-b41d-43c4-be46-35f39671477a-kube-api-access-zpksq\") pod \"controller-manager-bc85986b9-8p79x\" (UID: \"41ac891d-b41d-43c4-be46-35f39671477a\") " pod="openshift-controller-manager/controller-manager-bc85986b9-8p79x"
Mar 20 08:49:03.397869 master-0 kubenswrapper[7476]: I0320 08:49:03.397837 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/240ba61a-e439-4f94-b9b3-7903b9b1bc05-client-ca\") pod \"route-controller-manager-7d86cb9b59-smbxv\" (UID: \"240ba61a-e439-4f94-b9b3-7903b9b1bc05\") " pod="openshift-route-controller-manager/route-controller-manager-7d86cb9b59-smbxv"
Mar 20 08:49:03.398099 master-0 kubenswrapper[7476]: I0320 08:49:03.398075 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/41ac891d-b41d-43c4-be46-35f39671477a-proxy-ca-bundles\") pod \"controller-manager-bc85986b9-8p79x\" (UID: \"41ac891d-b41d-43c4-be46-35f39671477a\") " pod="openshift-controller-manager/controller-manager-bc85986b9-8p79x"
Mar 20 08:49:03.398354 master-0 kubenswrapper[7476]: I0320 08:49:03.398321 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/240ba61a-e439-4f94-b9b3-7903b9b1bc05-config\") pod \"route-controller-manager-7d86cb9b59-smbxv\" (UID: \"240ba61a-e439-4f94-b9b3-7903b9b1bc05\") " pod="openshift-route-controller-manager/route-controller-manager-7d86cb9b59-smbxv"
Mar 20 08:49:03.398746 master-0 kubenswrapper[7476]: I0320 08:49:03.398679 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/41ac891d-b41d-43c4-be46-35f39671477a-client-ca\") pod \"controller-manager-bc85986b9-8p79x\" (UID: \"41ac891d-b41d-43c4-be46-35f39671477a\") " pod="openshift-controller-manager/controller-manager-bc85986b9-8p79x"
Mar 20 08:49:03.399022 master-0 kubenswrapper[7476]: I0320 08:49:03.398999 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41ac891d-b41d-43c4-be46-35f39671477a-config\") pod \"controller-manager-bc85986b9-8p79x\" (UID: \"41ac891d-b41d-43c4-be46-35f39671477a\") " pod="openshift-controller-manager/controller-manager-bc85986b9-8p79x"
Mar 20 08:49:03.400054 master-0 kubenswrapper[7476]: I0320 08:49:03.400035 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41ac891d-b41d-43c4-be46-35f39671477a-serving-cert\") pod \"controller-manager-bc85986b9-8p79x\" (UID: \"41ac891d-b41d-43c4-be46-35f39671477a\") " pod="openshift-controller-manager/controller-manager-bc85986b9-8p79x"
Mar 20 08:49:03.400482 master-0 kubenswrapper[7476]: I0320 08:49:03.400443 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/240ba61a-e439-4f94-b9b3-7903b9b1bc05-serving-cert\") pod \"route-controller-manager-7d86cb9b59-smbxv\" (UID: \"240ba61a-e439-4f94-b9b3-7903b9b1bc05\") " pod="openshift-route-controller-manager/route-controller-manager-7d86cb9b59-smbxv"
Mar 20 08:49:03.413458 master-0 kubenswrapper[7476]: I0320 08:49:03.413414 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpksq\" (UniqueName: \"kubernetes.io/projected/41ac891d-b41d-43c4-be46-35f39671477a-kube-api-access-zpksq\") pod \"controller-manager-bc85986b9-8p79x\" (UID: \"41ac891d-b41d-43c4-be46-35f39671477a\") " pod="openshift-controller-manager/controller-manager-bc85986b9-8p79x"
Mar 20 08:49:03.416075 master-0 kubenswrapper[7476]: I0320 08:49:03.416019 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92pwh\" (UniqueName: \"kubernetes.io/projected/240ba61a-e439-4f94-b9b3-7903b9b1bc05-kube-api-access-92pwh\") pod \"route-controller-manager-7d86cb9b59-smbxv\" (UID: \"240ba61a-e439-4f94-b9b3-7903b9b1bc05\") " pod="openshift-route-controller-manager/route-controller-manager-7d86cb9b59-smbxv"
Mar 20 08:49:03.466425 master-0 kubenswrapper[7476]: I0320 08:49:03.466371 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d86cb9b59-smbxv"
Mar 20 08:49:03.493182 master-0 kubenswrapper[7476]: I0320 08:49:03.493112 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-bc85986b9-8p79x"
Mar 20 08:49:03.950249 master-0 kubenswrapper[7476]: I0320 08:49:03.950180 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d86cb9b59-smbxv"]
Mar 20 08:49:03.951546 master-0 kubenswrapper[7476]: W0320 08:49:03.951494 7476 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod240ba61a_e439_4f94_b9b3_7903b9b1bc05.slice/crio-a9a866857afbf6e04b88e6394f6ac26a86a5cc6b5f41292fe9d43cc355b22810 WatchSource:0}: Error finding container a9a866857afbf6e04b88e6394f6ac26a86a5cc6b5f41292fe9d43cc355b22810: Status 404 returned error can't find the container with id a9a866857afbf6e04b88e6394f6ac26a86a5cc6b5f41292fe9d43cc355b22810
Mar 20 08:49:03.953026 master-0 kubenswrapper[7476]: I0320 08:49:03.952975 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-bc85986b9-8p79x"]
Mar 20 08:49:04.016874 master-0 kubenswrapper[7476]: I0320 08:49:04.016808 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:49:04.016874 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:49:04.016874 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:49:04.016874 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:49:04.022644 master-0 kubenswrapper[7476]: I0320 08:49:04.016895 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:49:04.381525 master-0 kubenswrapper[7476]: I0320 08:49:04.381450 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bc85986b9-8p79x" event={"ID":"41ac891d-b41d-43c4-be46-35f39671477a","Type":"ContainerStarted","Data":"9bbc62f41eb9cddabece6ee46b25a672dc565f68843a8cfb4ee6a9d70bc8ddf1"}
Mar 20 08:49:04.381525 master-0 kubenswrapper[7476]: I0320 08:49:04.381513 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bc85986b9-8p79x" event={"ID":"41ac891d-b41d-43c4-be46-35f39671477a","Type":"ContainerStarted","Data":"b7b1e72d13c6e7c1a14867c5547562b82b9b40ac636f0328d795dcff8a14b2b8"}
Mar 20 08:49:04.381902 master-0 kubenswrapper[7476]: I0320 08:49:04.381684 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-bc85986b9-8p79x"
Mar 20 08:49:04.382744 master-0 kubenswrapper[7476]: I0320 08:49:04.382696 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d86cb9b59-smbxv" event={"ID":"240ba61a-e439-4f94-b9b3-7903b9b1bc05","Type":"ContainerStarted","Data":"03e9b975d965f8ec377b68d24d75b91897161659c0e305cafe8e9368b9999d09"}
Mar 20 08:49:04.382827 master-0 kubenswrapper[7476]: I0320 08:49:04.382749 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d86cb9b59-smbxv" event={"ID":"240ba61a-e439-4f94-b9b3-7903b9b1bc05","Type":"ContainerStarted","Data":"a9a866857afbf6e04b88e6394f6ac26a86a5cc6b5f41292fe9d43cc355b22810"}
Mar 20 08:49:04.383001 master-0 kubenswrapper[7476]: I0320 08:49:04.382968 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7d86cb9b59-smbxv"
Mar 20 08:49:04.388676 master-0 kubenswrapper[7476]: I0320 08:49:04.388628 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-bc85986b9-8p79x"
Mar 20 08:49:04.411684 master-0 kubenswrapper[7476]: I0320 08:49:04.411616 7476 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-bc85986b9-8p79x" podStartSLOduration=1.411598058 podStartE2EDuration="1.411598058s" podCreationTimestamp="2026-03-20 08:49:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:49:04.411081482 +0000 UTC m=+825.379850028" watchObservedRunningTime="2026-03-20 08:49:04.411598058 +0000 UTC m=+825.380366594"
Mar 20 08:49:04.878202 master-0 kubenswrapper[7476]: I0320 08:49:04.878115 7476 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7d86cb9b59-smbxv" podStartSLOduration=1.878095547 podStartE2EDuration="1.878095547s" podCreationTimestamp="2026-03-20 08:49:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:49:04.74304528 +0000 UTC m=+825.711813826" watchObservedRunningTime="2026-03-20 08:49:04.878095547 +0000 UTC m=+825.846864073"
Mar 20 08:49:04.880279 master-0 kubenswrapper[7476]: I0320 08:49:04.880215 7476 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"]
Mar 20 08:49:04.881206 master-0 kubenswrapper[7476]: I0320 08:49:04.881176 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0"
Mar 20 08:49:04.884579 master-0 kubenswrapper[7476]: I0320 08:49:04.884537 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-9xqm8"
Mar 20 08:49:04.884757 master-0 kubenswrapper[7476]: I0320 08:49:04.884747 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Mar 20 08:49:04.942789 master-0 kubenswrapper[7476]: I0320 08:49:04.942711 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"]
Mar 20 08:49:05.006305 master-0 kubenswrapper[7476]: I0320 08:49:05.006022 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7d86cb9b59-smbxv"
Mar 20 08:49:05.016236 master-0 kubenswrapper[7476]: I0320 08:49:05.016166 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:49:05.016236 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:49:05.016236 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:49:05.016236 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:49:05.016505 master-0 kubenswrapper[7476]: I0320 08:49:05.016281 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:49:05.016864 master-0 kubenswrapper[7476]: I0320 08:49:05.016820 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9775cc27-53b9-4d21-a98b-84b39ada32ee-var-lock\") pod \"installer-3-master-0\" (UID: \"9775cc27-53b9-4d21-a98b-84b39ada32ee\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 20 08:49:05.017143 master-0 kubenswrapper[7476]: I0320 08:49:05.017104 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9775cc27-53b9-4d21-a98b-84b39ada32ee-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"9775cc27-53b9-4d21-a98b-84b39ada32ee\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 20 08:49:05.017468 master-0 kubenswrapper[7476]: I0320 08:49:05.017164 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9775cc27-53b9-4d21-a98b-84b39ada32ee-kube-api-access\") pod \"installer-3-master-0\" (UID: \"9775cc27-53b9-4d21-a98b-84b39ada32ee\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 20 08:49:05.128346 master-0 kubenswrapper[7476]: I0320 08:49:05.127272 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9775cc27-53b9-4d21-a98b-84b39ada32ee-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"9775cc27-53b9-4d21-a98b-84b39ada32ee\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 20 08:49:05.128346 master-0 kubenswrapper[7476]: I0320 08:49:05.127316 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9775cc27-53b9-4d21-a98b-84b39ada32ee-kube-api-access\") pod \"installer-3-master-0\" (UID: \"9775cc27-53b9-4d21-a98b-84b39ada32ee\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 20 08:49:05.128346 master-0 kubenswrapper[7476]: I0320 08:49:05.127355 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9775cc27-53b9-4d21-a98b-84b39ada32ee-var-lock\") pod \"installer-3-master-0\" (UID: \"9775cc27-53b9-4d21-a98b-84b39ada32ee\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 20 08:49:05.128346 master-0 kubenswrapper[7476]: I0320 08:49:05.127430 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9775cc27-53b9-4d21-a98b-84b39ada32ee-var-lock\") pod \"installer-3-master-0\" (UID: \"9775cc27-53b9-4d21-a98b-84b39ada32ee\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 20 08:49:05.128346 master-0 kubenswrapper[7476]: I0320 08:49:05.127466 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9775cc27-53b9-4d21-a98b-84b39ada32ee-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"9775cc27-53b9-4d21-a98b-84b39ada32ee\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 20 08:49:05.170283 master-0 kubenswrapper[7476]: I0320 08:49:05.169973 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9775cc27-53b9-4d21-a98b-84b39ada32ee-kube-api-access\") pod \"installer-3-master-0\" (UID: \"9775cc27-53b9-4d21-a98b-84b39ada32ee\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 20 08:49:05.199590 master-0 kubenswrapper[7476]: I0320 08:49:05.199513 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0"
Mar 20 08:49:05.241186 master-0 kubenswrapper[7476]: I0320 08:49:05.241055 7476 scope.go:117] "RemoveContainer" containerID="3aae9bbf36ed2a17618b0c45a78846786d8690aa98bce9f3e1b7b0243ecc47f2"
Mar 20 08:49:05.241386 master-0 kubenswrapper[7476]: E0320 08:49:05.241281 7476 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-dknxr_openshift-ingress-operator(22f85e98-eb36-46b2-ab5d-7c21e060cba5)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-dknxr" podUID="22f85e98-eb36-46b2-ab5d-7c21e060cba5"
Mar 20 08:49:05.601691 master-0 kubenswrapper[7476]: I0320 08:49:05.601628 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"]
Mar 20 08:49:05.612967 master-0 kubenswrapper[7476]: W0320 08:49:05.612912 7476 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod9775cc27_53b9_4d21_a98b_84b39ada32ee.slice/crio-3b8a06244c2e0be584b6e088f930643d0f41b0d380a9aaaeb548ef7b6339ddb3 WatchSource:0}: Error finding container 3b8a06244c2e0be584b6e088f930643d0f41b0d380a9aaaeb548ef7b6339ddb3: Status 404 returned error can't find the container with id 3b8a06244c2e0be584b6e088f930643d0f41b0d380a9aaaeb548ef7b6339ddb3
Mar 20 08:49:06.016875 master-0 kubenswrapper[7476]: I0320 08:49:06.016216 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:49:06.016875 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:49:06.016875 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:49:06.016875 master-0
kubenswrapper[7476]: healthz check failed Mar 20 08:49:06.016875 master-0 kubenswrapper[7476]: I0320 08:49:06.016350 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:49:06.398021 master-0 kubenswrapper[7476]: I0320 08:49:06.397940 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"9775cc27-53b9-4d21-a98b-84b39ada32ee","Type":"ContainerStarted","Data":"8b5711cce3fb17d8c5298b374ea763f137a6631ab7f8f0ff687f48b345639df0"} Mar 20 08:49:06.398021 master-0 kubenswrapper[7476]: I0320 08:49:06.397997 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"9775cc27-53b9-4d21-a98b-84b39ada32ee","Type":"ContainerStarted","Data":"3b8a06244c2e0be584b6e088f930643d0f41b0d380a9aaaeb548ef7b6339ddb3"} Mar 20 08:49:06.419776 master-0 kubenswrapper[7476]: I0320 08:49:06.419694 7476 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-3-master-0" podStartSLOduration=2.419671341 podStartE2EDuration="2.419671341s" podCreationTimestamp="2026-03-20 08:49:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:49:06.414768886 +0000 UTC m=+827.383537452" watchObservedRunningTime="2026-03-20 08:49:06.419671341 +0000 UTC m=+827.388439867" Mar 20 08:49:07.016731 master-0 kubenswrapper[7476]: I0320 08:49:07.016632 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:49:07.016731 master-0 kubenswrapper[7476]: 
[-]has-synced failed: reason withheld Mar 20 08:49:07.016731 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:49:07.016731 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:49:07.017326 master-0 kubenswrapper[7476]: I0320 08:49:07.016764 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:49:07.036773 master-0 kubenswrapper[7476]: I0320 08:49:07.036716 7476 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-4-retry-1-master-0"] Mar 20 08:49:07.038395 master-0 kubenswrapper[7476]: I0320 08:49:07.038361 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-retry-1-master-0" Mar 20 08:49:07.041845 master-0 kubenswrapper[7476]: I0320 08:49:07.041813 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-6mj58" Mar 20 08:49:07.042612 master-0 kubenswrapper[7476]: I0320 08:49:07.042549 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt" Mar 20 08:49:07.058414 master-0 kubenswrapper[7476]: I0320 08:49:07.058352 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-4-retry-1-master-0"] Mar 20 08:49:07.063540 master-0 kubenswrapper[7476]: I0320 08:49:07.063491 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/521086da-d513-4475-8db5-098ab9838df1-kubelet-dir\") pod \"installer-4-retry-1-master-0\" (UID: \"521086da-d513-4475-8db5-098ab9838df1\") " pod="openshift-kube-scheduler/installer-4-retry-1-master-0" Mar 20 08:49:07.063711 master-0 kubenswrapper[7476]: I0320 08:49:07.063620 
7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/521086da-d513-4475-8db5-098ab9838df1-kube-api-access\") pod \"installer-4-retry-1-master-0\" (UID: \"521086da-d513-4475-8db5-098ab9838df1\") " pod="openshift-kube-scheduler/installer-4-retry-1-master-0" Mar 20 08:49:07.063711 master-0 kubenswrapper[7476]: I0320 08:49:07.063665 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/521086da-d513-4475-8db5-098ab9838df1-var-lock\") pod \"installer-4-retry-1-master-0\" (UID: \"521086da-d513-4475-8db5-098ab9838df1\") " pod="openshift-kube-scheduler/installer-4-retry-1-master-0" Mar 20 08:49:07.164773 master-0 kubenswrapper[7476]: I0320 08:49:07.164724 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/521086da-d513-4475-8db5-098ab9838df1-kube-api-access\") pod \"installer-4-retry-1-master-0\" (UID: \"521086da-d513-4475-8db5-098ab9838df1\") " pod="openshift-kube-scheduler/installer-4-retry-1-master-0" Mar 20 08:49:07.165079 master-0 kubenswrapper[7476]: I0320 08:49:07.165058 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/521086da-d513-4475-8db5-098ab9838df1-var-lock\") pod \"installer-4-retry-1-master-0\" (UID: \"521086da-d513-4475-8db5-098ab9838df1\") " pod="openshift-kube-scheduler/installer-4-retry-1-master-0" Mar 20 08:49:07.165328 master-0 kubenswrapper[7476]: I0320 08:49:07.165307 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/521086da-d513-4475-8db5-098ab9838df1-kubelet-dir\") pod \"installer-4-retry-1-master-0\" (UID: \"521086da-d513-4475-8db5-098ab9838df1\") " 
pod="openshift-kube-scheduler/installer-4-retry-1-master-0" Mar 20 08:49:07.165505 master-0 kubenswrapper[7476]: I0320 08:49:07.165130 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/521086da-d513-4475-8db5-098ab9838df1-var-lock\") pod \"installer-4-retry-1-master-0\" (UID: \"521086da-d513-4475-8db5-098ab9838df1\") " pod="openshift-kube-scheduler/installer-4-retry-1-master-0" Mar 20 08:49:07.165643 master-0 kubenswrapper[7476]: I0320 08:49:07.165347 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/521086da-d513-4475-8db5-098ab9838df1-kubelet-dir\") pod \"installer-4-retry-1-master-0\" (UID: \"521086da-d513-4475-8db5-098ab9838df1\") " pod="openshift-kube-scheduler/installer-4-retry-1-master-0" Mar 20 08:49:07.195304 master-0 kubenswrapper[7476]: I0320 08:49:07.195195 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/521086da-d513-4475-8db5-098ab9838df1-kube-api-access\") pod \"installer-4-retry-1-master-0\" (UID: \"521086da-d513-4475-8db5-098ab9838df1\") " pod="openshift-kube-scheduler/installer-4-retry-1-master-0" Mar 20 08:49:07.363894 master-0 kubenswrapper[7476]: I0320 08:49:07.363799 7476 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-4-retry-1-master-0" Mar 20 08:49:07.828541 master-0 kubenswrapper[7476]: I0320 08:49:07.828422 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-4-retry-1-master-0"] Mar 20 08:49:08.016410 master-0 kubenswrapper[7476]: I0320 08:49:08.016359 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:49:08.016410 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:49:08.016410 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:49:08.016410 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:49:08.016660 master-0 kubenswrapper[7476]: I0320 08:49:08.016436 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:49:08.412823 master-0 kubenswrapper[7476]: I0320 08:49:08.412682 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-retry-1-master-0" event={"ID":"521086da-d513-4475-8db5-098ab9838df1","Type":"ContainerStarted","Data":"35c674a122271104b677e9d9fd6224e868e82108125b554a6b281e82916a6b0b"} Mar 20 08:49:08.412823 master-0 kubenswrapper[7476]: I0320 08:49:08.412732 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-retry-1-master-0" event={"ID":"521086da-d513-4475-8db5-098ab9838df1","Type":"ContainerStarted","Data":"dafa7bfa1891cfd7726eb94b085308d784cb5068654283dc7ca015d37e624b07"} Mar 20 08:49:08.429482 master-0 kubenswrapper[7476]: I0320 08:49:08.429407 7476 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-scheduler/installer-4-retry-1-master-0" podStartSLOduration=1.429393422 podStartE2EDuration="1.429393422s" podCreationTimestamp="2026-03-20 08:49:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:49:08.428000803 +0000 UTC m=+829.396769349" watchObservedRunningTime="2026-03-20 08:49:08.429393422 +0000 UTC m=+829.398161948" Mar 20 08:49:08.664232 master-0 kubenswrapper[7476]: I0320 08:49:08.664118 7476 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-3-retry-1-master-0"] Mar 20 08:49:08.664865 master-0 kubenswrapper[7476]: I0320 08:49:08.664842 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Mar 20 08:49:08.667050 master-0 kubenswrapper[7476]: I0320 08:49:08.667012 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-r4xv4" Mar 20 08:49:08.667395 master-0 kubenswrapper[7476]: I0320 08:49:08.667348 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Mar 20 08:49:08.678940 master-0 kubenswrapper[7476]: I0320 08:49:08.678895 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-3-retry-1-master-0"] Mar 20 08:49:08.693371 master-0 kubenswrapper[7476]: I0320 08:49:08.691591 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/75cef5aa-93e6-4b8b-9ab1-06809e85883a-var-lock\") pod \"installer-3-retry-1-master-0\" (UID: \"75cef5aa-93e6-4b8b-9ab1-06809e85883a\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Mar 20 08:49:08.693371 master-0 kubenswrapper[7476]: I0320 08:49:08.691659 7476 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/75cef5aa-93e6-4b8b-9ab1-06809e85883a-kubelet-dir\") pod \"installer-3-retry-1-master-0\" (UID: \"75cef5aa-93e6-4b8b-9ab1-06809e85883a\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Mar 20 08:49:08.693371 master-0 kubenswrapper[7476]: I0320 08:49:08.691704 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/75cef5aa-93e6-4b8b-9ab1-06809e85883a-kube-api-access\") pod \"installer-3-retry-1-master-0\" (UID: \"75cef5aa-93e6-4b8b-9ab1-06809e85883a\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Mar 20 08:49:08.793238 master-0 kubenswrapper[7476]: I0320 08:49:08.793147 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/75cef5aa-93e6-4b8b-9ab1-06809e85883a-var-lock\") pod \"installer-3-retry-1-master-0\" (UID: \"75cef5aa-93e6-4b8b-9ab1-06809e85883a\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Mar 20 08:49:08.793544 master-0 kubenswrapper[7476]: I0320 08:49:08.793283 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/75cef5aa-93e6-4b8b-9ab1-06809e85883a-var-lock\") pod \"installer-3-retry-1-master-0\" (UID: \"75cef5aa-93e6-4b8b-9ab1-06809e85883a\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Mar 20 08:49:08.793544 master-0 kubenswrapper[7476]: I0320 08:49:08.793381 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/75cef5aa-93e6-4b8b-9ab1-06809e85883a-kubelet-dir\") pod \"installer-3-retry-1-master-0\" (UID: \"75cef5aa-93e6-4b8b-9ab1-06809e85883a\") " 
pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Mar 20 08:49:08.793544 master-0 kubenswrapper[7476]: I0320 08:49:08.793485 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/75cef5aa-93e6-4b8b-9ab1-06809e85883a-kube-api-access\") pod \"installer-3-retry-1-master-0\" (UID: \"75cef5aa-93e6-4b8b-9ab1-06809e85883a\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Mar 20 08:49:08.793758 master-0 kubenswrapper[7476]: I0320 08:49:08.793436 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/75cef5aa-93e6-4b8b-9ab1-06809e85883a-kubelet-dir\") pod \"installer-3-retry-1-master-0\" (UID: \"75cef5aa-93e6-4b8b-9ab1-06809e85883a\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Mar 20 08:49:08.810047 master-0 kubenswrapper[7476]: I0320 08:49:08.809974 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/75cef5aa-93e6-4b8b-9ab1-06809e85883a-kube-api-access\") pod \"installer-3-retry-1-master-0\" (UID: \"75cef5aa-93e6-4b8b-9ab1-06809e85883a\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Mar 20 08:49:08.984125 master-0 kubenswrapper[7476]: I0320 08:49:08.983916 7476 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Mar 20 08:49:09.017019 master-0 kubenswrapper[7476]: I0320 08:49:09.016963 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:49:09.017019 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:49:09.017019 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:49:09.017019 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:49:09.017346 master-0 kubenswrapper[7476]: I0320 08:49:09.017051 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:49:09.497700 master-0 kubenswrapper[7476]: I0320 08:49:09.497646 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-3-retry-1-master-0"] Mar 20 08:49:10.016113 master-0 kubenswrapper[7476]: I0320 08:49:10.016038 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:49:10.016113 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:49:10.016113 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:49:10.016113 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:49:10.016113 master-0 kubenswrapper[7476]: I0320 08:49:10.016115 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:49:10.437525 master-0 kubenswrapper[7476]: I0320 08:49:10.437425 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" event={"ID":"75cef5aa-93e6-4b8b-9ab1-06809e85883a","Type":"ContainerStarted","Data":"dc68fd475ff9f6055eceb076d1b60266600d047f4d29a9bd68c9771cc87efbc5"} Mar 20 08:49:10.437525 master-0 kubenswrapper[7476]: I0320 08:49:10.437530 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" event={"ID":"75cef5aa-93e6-4b8b-9ab1-06809e85883a","Type":"ContainerStarted","Data":"1f303ba8c534fdd01d1d1d736d392f617339c8123f70b84cbefb43516aed9bd0"} Mar 20 08:49:10.461407 master-0 kubenswrapper[7476]: I0320 08:49:10.461180 7476 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" podStartSLOduration=2.461151335 podStartE2EDuration="2.461151335s" podCreationTimestamp="2026-03-20 08:49:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:49:10.455416906 +0000 UTC m=+831.424185482" watchObservedRunningTime="2026-03-20 08:49:10.461151335 +0000 UTC m=+831.429919901" Mar 20 08:49:11.018538 master-0 kubenswrapper[7476]: I0320 08:49:11.018450 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:49:11.018538 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:49:11.018538 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:49:11.018538 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:49:11.018538 master-0 
kubenswrapper[7476]: I0320 08:49:11.018513 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:49:12.016225 master-0 kubenswrapper[7476]: I0320 08:49:12.016104 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:49:12.016225 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:49:12.016225 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:49:12.016225 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:49:12.016717 master-0 kubenswrapper[7476]: I0320 08:49:12.016291 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:49:13.017475 master-0 kubenswrapper[7476]: I0320 08:49:13.017395 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:49:13.017475 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:49:13.017475 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:49:13.017475 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:49:13.018655 master-0 kubenswrapper[7476]: I0320 08:49:13.017524 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:49:14.018181 master-0 kubenswrapper[7476]: I0320 08:49:14.018007 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:49:14.018181 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:49:14.018181 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:49:14.018181 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:49:14.018181 master-0 kubenswrapper[7476]: I0320 08:49:14.018097 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:49:15.017848 master-0 kubenswrapper[7476]: I0320 08:49:15.017687 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:49:15.017848 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:49:15.017848 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:49:15.017848 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:49:15.017848 master-0 kubenswrapper[7476]: I0320 08:49:15.017788 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:49:16.017509 master-0 kubenswrapper[7476]: I0320 08:49:16.017454 7476 patch_prober.go:28] interesting 
pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:49:16.017509 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:49:16.017509 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:49:16.017509 master-0 kubenswrapper[7476]: healthz check failed Mar 20 08:49:16.018005 master-0 kubenswrapper[7476]: I0320 08:49:16.017552 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 08:49:16.236715 master-0 kubenswrapper[7476]: I0320 08:49:16.236648 7476 scope.go:117] "RemoveContainer" containerID="3aae9bbf36ed2a17618b0c45a78846786d8690aa98bce9f3e1b7b0243ecc47f2" Mar 20 08:49:16.237618 master-0 kubenswrapper[7476]: E0320 08:49:16.236923 7476 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-dknxr_openshift-ingress-operator(22f85e98-eb36-46b2-ab5d-7c21e060cba5)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-dknxr" podUID="22f85e98-eb36-46b2-ab5d-7c21e060cba5" Mar 20 08:49:17.016946 master-0 kubenswrapper[7476]: I0320 08:49:17.016873 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 08:49:17.016946 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld Mar 20 08:49:17.016946 master-0 kubenswrapper[7476]: [+]process-running ok Mar 20 08:49:17.016946 master-0 
kubenswrapper[7476]: healthz check failed
Mar 20 08:49:17.017326 master-0 kubenswrapper[7476]: I0320 08:49:17.016963 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:49:18.018058 master-0 kubenswrapper[7476]: I0320 08:49:18.017957 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:49:18.018058 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:49:18.018058 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:49:18.018058 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:49:18.019254 master-0 kubenswrapper[7476]: I0320 08:49:18.018076 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:49:19.017078 master-0 kubenswrapper[7476]: I0320 08:49:19.016948 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:49:19.017078 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:49:19.017078 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:49:19.017078 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:49:19.017078 master-0 kubenswrapper[7476]: I0320 08:49:19.017049 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:49:20.016616 master-0 kubenswrapper[7476]: I0320 08:49:20.016373 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:49:20.016616 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:49:20.016616 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:49:20.016616 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:49:20.016616 master-0 kubenswrapper[7476]: I0320 08:49:20.016549 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:49:21.017232 master-0 kubenswrapper[7476]: I0320 08:49:21.017179 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:49:21.017232 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:49:21.017232 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:49:21.017232 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:49:21.017817 master-0 kubenswrapper[7476]: I0320 08:49:21.017250 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:49:22.018243 master-0 kubenswrapper[7476]: I0320 08:49:22.018121 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:49:22.018243 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:49:22.018243 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:49:22.018243 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:49:22.018243 master-0 kubenswrapper[7476]: I0320 08:49:22.018212 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:49:23.016498 master-0 kubenswrapper[7476]: I0320 08:49:23.016436 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:49:23.016498 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:49:23.016498 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:49:23.016498 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:49:23.016798 master-0 kubenswrapper[7476]: I0320 08:49:23.016515 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:49:24.017003 master-0 kubenswrapper[7476]: I0320 08:49:24.016948 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:49:24.017003 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:49:24.017003 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:49:24.017003 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:49:24.017523 master-0 kubenswrapper[7476]: I0320 08:49:24.017013 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:49:25.017788 master-0 kubenswrapper[7476]: I0320 08:49:25.017704 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:49:25.017788 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:49:25.017788 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:49:25.017788 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:49:25.019082 master-0 kubenswrapper[7476]: I0320 08:49:25.017825 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:49:26.017151 master-0 kubenswrapper[7476]: I0320 08:49:26.017070 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:49:26.017151 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:49:26.017151 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:49:26.017151 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:49:26.017624 master-0 kubenswrapper[7476]: I0320 08:49:26.017163 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:49:27.017598 master-0 kubenswrapper[7476]: I0320 08:49:27.017477 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:49:27.017598 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:49:27.017598 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:49:27.017598 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:49:27.018690 master-0 kubenswrapper[7476]: I0320 08:49:27.017594 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:49:27.708878 master-0 kubenswrapper[7476]: I0320 08:49:27.708820 7476 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-xj8x6"]
Mar 20 08:49:27.709682 master-0 kubenswrapper[7476]: I0320 08:49:27.709607 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-xj8x6"
Mar 20 08:49:27.711470 master-0 kubenswrapper[7476]: I0320 08:49:27.711423 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-zds7w"
Mar 20 08:49:27.711753 master-0 kubenswrapper[7476]: I0320 08:49:27.711716 7476 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-sysctl-allowlist"
Mar 20 08:49:27.796177 master-0 kubenswrapper[7476]: I0320 08:49:27.796131 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/45b3c788-eb83-448a-bc60-90b8ace28382-ready\") pod \"cni-sysctl-allowlist-ds-xj8x6\" (UID: \"45b3c788-eb83-448a-bc60-90b8ace28382\") " pod="openshift-multus/cni-sysctl-allowlist-ds-xj8x6"
Mar 20 08:49:27.796463 master-0 kubenswrapper[7476]: I0320 08:49:27.796445 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pcbj\" (UniqueName: \"kubernetes.io/projected/45b3c788-eb83-448a-bc60-90b8ace28382-kube-api-access-7pcbj\") pod \"cni-sysctl-allowlist-ds-xj8x6\" (UID: \"45b3c788-eb83-448a-bc60-90b8ace28382\") " pod="openshift-multus/cni-sysctl-allowlist-ds-xj8x6"
Mar 20 08:49:27.796575 master-0 kubenswrapper[7476]: I0320 08:49:27.796562 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/45b3c788-eb83-448a-bc60-90b8ace28382-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-xj8x6\" (UID: \"45b3c788-eb83-448a-bc60-90b8ace28382\") " pod="openshift-multus/cni-sysctl-allowlist-ds-xj8x6"
Mar 20 08:49:27.796673 master-0 kubenswrapper[7476]: I0320 08:49:27.796660 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/45b3c788-eb83-448a-bc60-90b8ace28382-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-xj8x6\" (UID: \"45b3c788-eb83-448a-bc60-90b8ace28382\") " pod="openshift-multus/cni-sysctl-allowlist-ds-xj8x6"
Mar 20 08:49:27.898398 master-0 kubenswrapper[7476]: I0320 08:49:27.898326 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/45b3c788-eb83-448a-bc60-90b8ace28382-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-xj8x6\" (UID: \"45b3c788-eb83-448a-bc60-90b8ace28382\") " pod="openshift-multus/cni-sysctl-allowlist-ds-xj8x6"
Mar 20 08:49:27.898605 master-0 kubenswrapper[7476]: I0320 08:49:27.898564 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/45b3c788-eb83-448a-bc60-90b8ace28382-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-xj8x6\" (UID: \"45b3c788-eb83-448a-bc60-90b8ace28382\") " pod="openshift-multus/cni-sysctl-allowlist-ds-xj8x6"
Mar 20 08:49:27.898689 master-0 kubenswrapper[7476]: I0320 08:49:27.898659 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/45b3c788-eb83-448a-bc60-90b8ace28382-ready\") pod \"cni-sysctl-allowlist-ds-xj8x6\" (UID: \"45b3c788-eb83-448a-bc60-90b8ace28382\") " pod="openshift-multus/cni-sysctl-allowlist-ds-xj8x6"
Mar 20 08:49:27.898738 master-0 kubenswrapper[7476]: I0320 08:49:27.898723 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pcbj\" (UniqueName: \"kubernetes.io/projected/45b3c788-eb83-448a-bc60-90b8ace28382-kube-api-access-7pcbj\") pod \"cni-sysctl-allowlist-ds-xj8x6\" (UID: \"45b3c788-eb83-448a-bc60-90b8ace28382\") " pod="openshift-multus/cni-sysctl-allowlist-ds-xj8x6"
Mar 20 08:49:27.898849 master-0 kubenswrapper[7476]: I0320 08:49:27.898824 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/45b3c788-eb83-448a-bc60-90b8ace28382-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-xj8x6\" (UID: \"45b3c788-eb83-448a-bc60-90b8ace28382\") " pod="openshift-multus/cni-sysctl-allowlist-ds-xj8x6"
Mar 20 08:49:27.899745 master-0 kubenswrapper[7476]: I0320 08:49:27.899717 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/45b3c788-eb83-448a-bc60-90b8ace28382-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-xj8x6\" (UID: \"45b3c788-eb83-448a-bc60-90b8ace28382\") " pod="openshift-multus/cni-sysctl-allowlist-ds-xj8x6"
Mar 20 08:49:27.900026 master-0 kubenswrapper[7476]: I0320 08:49:27.900000 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/45b3c788-eb83-448a-bc60-90b8ace28382-ready\") pod \"cni-sysctl-allowlist-ds-xj8x6\" (UID: \"45b3c788-eb83-448a-bc60-90b8ace28382\") " pod="openshift-multus/cni-sysctl-allowlist-ds-xj8x6"
Mar 20 08:49:27.924301 master-0 kubenswrapper[7476]: I0320 08:49:27.919999 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pcbj\" (UniqueName: \"kubernetes.io/projected/45b3c788-eb83-448a-bc60-90b8ace28382-kube-api-access-7pcbj\") pod \"cni-sysctl-allowlist-ds-xj8x6\" (UID: \"45b3c788-eb83-448a-bc60-90b8ace28382\") " pod="openshift-multus/cni-sysctl-allowlist-ds-xj8x6"
Mar 20 08:49:28.016223 master-0 kubenswrapper[7476]: I0320 08:49:28.016109 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:49:28.016223 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:49:28.016223 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:49:28.016223 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:49:28.016223 master-0 kubenswrapper[7476]: I0320 08:49:28.016160 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:49:28.031734 master-0 kubenswrapper[7476]: I0320 08:49:28.031700 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-xj8x6"
Mar 20 08:49:28.048859 master-0 kubenswrapper[7476]: W0320 08:49:28.048806 7476 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45b3c788_eb83_448a_bc60_90b8ace28382.slice/crio-86c6cd594ea2c7db973b52489f7bf76530d2045045df7dd60fb29d21f2a61ca6 WatchSource:0}: Error finding container 86c6cd594ea2c7db973b52489f7bf76530d2045045df7dd60fb29d21f2a61ca6: Status 404 returned error can't find the container with id 86c6cd594ea2c7db973b52489f7bf76530d2045045df7dd60fb29d21f2a61ca6
Mar 20 08:49:28.578424 master-0 kubenswrapper[7476]: I0320 08:49:28.578231 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-xj8x6" event={"ID":"45b3c788-eb83-448a-bc60-90b8ace28382","Type":"ContainerStarted","Data":"4bc3fd775c8ec67b9c69f57ed6fb56798f5748c1c5c893e65ef38d518581c15e"}
Mar 20 08:49:28.578424 master-0 kubenswrapper[7476]: I0320 08:49:28.578342 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-xj8x6" event={"ID":"45b3c788-eb83-448a-bc60-90b8ace28382","Type":"ContainerStarted","Data":"86c6cd594ea2c7db973b52489f7bf76530d2045045df7dd60fb29d21f2a61ca6"}
Mar 20 08:49:28.578731 master-0 kubenswrapper[7476]: I0320 08:49:28.578519 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-multus/cni-sysctl-allowlist-ds-xj8x6"
Mar 20 08:49:29.017818 master-0 kubenswrapper[7476]: I0320 08:49:29.017757 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:49:29.017818 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:49:29.017818 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:49:29.017818 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:49:29.018097 master-0 kubenswrapper[7476]: I0320 08:49:29.017836 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:49:29.256564 master-0 kubenswrapper[7476]: I0320 08:49:29.256512 7476 scope.go:117] "RemoveContainer" containerID="3aae9bbf36ed2a17618b0c45a78846786d8690aa98bce9f3e1b7b0243ecc47f2"
Mar 20 08:49:29.257241 master-0 kubenswrapper[7476]: E0320 08:49:29.256799 7476 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator pod=ingress-operator-66b84d69b-dknxr_openshift-ingress-operator(22f85e98-eb36-46b2-ab5d-7c21e060cba5)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-dknxr" podUID="22f85e98-eb36-46b2-ab5d-7c21e060cba5"
Mar 20 08:49:29.602585 master-0 kubenswrapper[7476]: I0320 08:49:29.602547 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-xj8x6"
Mar 20 08:49:29.624274 master-0 kubenswrapper[7476]: I0320 08:49:29.624184 7476 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-xj8x6" podStartSLOduration=2.624166043 podStartE2EDuration="2.624166043s" podCreationTimestamp="2026-03-20 08:49:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:49:28.600232002 +0000 UTC m=+849.569000528" watchObservedRunningTime="2026-03-20 08:49:29.624166043 +0000 UTC m=+850.592934579"
Mar 20 08:49:29.702240 master-0 kubenswrapper[7476]: I0320 08:49:29.702181 7476 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-xj8x6"]
Mar 20 08:49:30.016287 master-0 kubenswrapper[7476]: I0320 08:49:30.016136 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:49:30.016287 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:49:30.016287 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:49:30.016287 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:49:30.016683 master-0 kubenswrapper[7476]: I0320 08:49:30.016644 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:49:31.017100 master-0 kubenswrapper[7476]: I0320 08:49:31.017043 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:49:31.017100 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:49:31.017100 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:49:31.017100 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:49:31.017986 master-0 kubenswrapper[7476]: I0320 08:49:31.017953 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:49:31.601528 master-0 kubenswrapper[7476]: I0320 08:49:31.601460 7476 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-xj8x6" podUID="45b3c788-eb83-448a-bc60-90b8ace28382" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://4bc3fd775c8ec67b9c69f57ed6fb56798f5748c1c5c893e65ef38d518581c15e" gracePeriod=30
Mar 20 08:49:32.018099 master-0 kubenswrapper[7476]: I0320 08:49:32.017473 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:49:32.018099 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:49:32.018099 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:49:32.018099 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:49:32.018099 master-0 kubenswrapper[7476]: I0320 08:49:32.017565 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:49:33.017349 master-0 kubenswrapper[7476]: I0320 08:49:33.017298 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:49:33.017349 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:49:33.017349 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:49:33.017349 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:49:33.017669 master-0 kubenswrapper[7476]: I0320 08:49:33.017394 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:49:34.017680 master-0 kubenswrapper[7476]: I0320 08:49:34.017592 7476 patch_prober.go:28] interesting pod/router-default-7dcf5569b5-kvmtp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 08:49:34.017680 master-0 kubenswrapper[7476]: [-]has-synced failed: reason withheld
Mar 20 08:49:34.017680 master-0 kubenswrapper[7476]: [+]process-running ok
Mar 20 08:49:34.017680 master-0 kubenswrapper[7476]: healthz check failed
Mar 20 08:49:34.018802 master-0 kubenswrapper[7476]: I0320 08:49:34.017764 7476 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 08:49:34.018802 master-0 kubenswrapper[7476]: I0320 08:49:34.017860 7476 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp"
Mar 20 08:49:34.019067 master-0 kubenswrapper[7476]: I0320 08:49:34.018946 7476 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"d70605680e08d7f319125bde3eeb41c693b146e24b422d7776788ac3b348829c"} pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" containerMessage="Container router failed startup probe, will be restarted"
Mar 20 08:49:34.019067 master-0 kubenswrapper[7476]: I0320 08:49:34.019041 7476 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" podUID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerName="router" containerID="cri-o://d70605680e08d7f319125bde3eeb41c693b146e24b422d7776788ac3b348829c" gracePeriod=3600
Mar 20 08:49:37.280195 master-0 kubenswrapper[7476]: I0320 08:49:37.280114 7476 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-58c9f8fc64-kr9hd"]
Mar 20 08:49:37.281932 master-0 kubenswrapper[7476]: I0320 08:49:37.281887 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-58c9f8fc64-kr9hd"
Mar 20 08:49:37.285555 master-0 kubenswrapper[7476]: I0320 08:49:37.285501 7476 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-p2lrx"
Mar 20 08:49:37.296578 master-0 kubenswrapper[7476]: I0320 08:49:37.296525 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-58c9f8fc64-kr9hd"]
Mar 20 08:49:37.407703 master-0 kubenswrapper[7476]: I0320 08:49:37.407612 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zf6h\" (UniqueName: \"kubernetes.io/projected/a88b1c81-02b5-4c85-9660-5f84c900a946-kube-api-access-5zf6h\") pod \"multus-admission-controller-58c9f8fc64-kr9hd\" (UID: \"a88b1c81-02b5-4c85-9660-5f84c900a946\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-kr9hd"
Mar 20 08:49:37.408381 master-0 kubenswrapper[7476]: I0320 08:49:37.408329 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a88b1c81-02b5-4c85-9660-5f84c900a946-webhook-certs\") pod \"multus-admission-controller-58c9f8fc64-kr9hd\" (UID: \"a88b1c81-02b5-4c85-9660-5f84c900a946\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-kr9hd"
Mar 20 08:49:37.510236 master-0 kubenswrapper[7476]: I0320 08:49:37.510149 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a88b1c81-02b5-4c85-9660-5f84c900a946-webhook-certs\") pod \"multus-admission-controller-58c9f8fc64-kr9hd\" (UID: \"a88b1c81-02b5-4c85-9660-5f84c900a946\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-kr9hd"
Mar 20 08:49:37.510528 master-0 kubenswrapper[7476]: I0320 08:49:37.510366 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zf6h\" (UniqueName: \"kubernetes.io/projected/a88b1c81-02b5-4c85-9660-5f84c900a946-kube-api-access-5zf6h\") pod \"multus-admission-controller-58c9f8fc64-kr9hd\" (UID: \"a88b1c81-02b5-4c85-9660-5f84c900a946\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-kr9hd"
Mar 20 08:49:37.515080 master-0 kubenswrapper[7476]: I0320 08:49:37.514997 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a88b1c81-02b5-4c85-9660-5f84c900a946-webhook-certs\") pod \"multus-admission-controller-58c9f8fc64-kr9hd\" (UID: \"a88b1c81-02b5-4c85-9660-5f84c900a946\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-kr9hd"
Mar 20 08:49:37.531060 master-0 kubenswrapper[7476]: I0320 08:49:37.530908 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zf6h\" (UniqueName: \"kubernetes.io/projected/a88b1c81-02b5-4c85-9660-5f84c900a946-kube-api-access-5zf6h\") pod \"multus-admission-controller-58c9f8fc64-kr9hd\" (UID: \"a88b1c81-02b5-4c85-9660-5f84c900a946\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-kr9hd"
Mar 20 08:49:37.603349 master-0 kubenswrapper[7476]: I0320 08:49:37.603246 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-58c9f8fc64-kr9hd"
Mar 20 08:49:38.036036 master-0 kubenswrapper[7476]: E0320 08:49:38.035937 7476 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4bc3fd775c8ec67b9c69f57ed6fb56798f5748c1c5c893e65ef38d518581c15e" cmd=["/bin/bash","-c","test -f /ready/ready"]
Mar 20 08:49:38.038022 master-0 kubenswrapper[7476]: E0320 08:49:38.037961 7476 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4bc3fd775c8ec67b9c69f57ed6fb56798f5748c1c5c893e65ef38d518581c15e" cmd=["/bin/bash","-c","test -f /ready/ready"]
Mar 20 08:49:38.044298 master-0 kubenswrapper[7476]: E0320 08:49:38.044180 7476 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4bc3fd775c8ec67b9c69f57ed6fb56798f5748c1c5c893e65ef38d518581c15e" cmd=["/bin/bash","-c","test -f /ready/ready"]
Mar 20 08:49:38.044447 master-0 kubenswrapper[7476]: E0320 08:49:38.044315 7476 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-xj8x6" podUID="45b3c788-eb83-448a-bc60-90b8ace28382" containerName="kube-multus-additional-cni-plugins"
Mar 20 08:49:38.103940 master-0 kubenswrapper[7476]: I0320 08:49:38.103826 7476 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-58c9f8fc64-kr9hd"]
Mar 20 08:49:38.114338 master-0 kubenswrapper[7476]: W0320 08:49:38.114229 7476 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda88b1c81_02b5_4c85_9660_5f84c900a946.slice/crio-08f22a0ccc0a77a9d6926aff6fb98f22a2c178ca54d526014d0e05d9f976123d WatchSource:0}: Error finding container 08f22a0ccc0a77a9d6926aff6fb98f22a2c178ca54d526014d0e05d9f976123d: Status 404 returned error can't find the container with id 08f22a0ccc0a77a9d6926aff6fb98f22a2c178ca54d526014d0e05d9f976123d
Mar 20 08:49:38.735608 master-0 kubenswrapper[7476]: I0320 08:49:38.735558 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-58c9f8fc64-kr9hd" event={"ID":"a88b1c81-02b5-4c85-9660-5f84c900a946","Type":"ContainerStarted","Data":"35e3b72d6f100be23bd1145e2e51a31d89227d5b6bd38863566a6aae0e8bb2e8"}
Mar 20 08:49:38.736339 master-0 kubenswrapper[7476]: I0320 08:49:38.736309 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-58c9f8fc64-kr9hd" event={"ID":"a88b1c81-02b5-4c85-9660-5f84c900a946","Type":"ContainerStarted","Data":"cc3aaa60f67e217ef3d18081141f0651595ce0154087c67a825630ce7bdd66f3"}
Mar 20 08:49:38.736596 master-0 kubenswrapper[7476]: I0320 08:49:38.736512 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-58c9f8fc64-kr9hd" event={"ID":"a88b1c81-02b5-4c85-9660-5f84c900a946","Type":"ContainerStarted","Data":"08f22a0ccc0a77a9d6926aff6fb98f22a2c178ca54d526014d0e05d9f976123d"}
Mar 20 08:49:38.764117 master-0 kubenswrapper[7476]: I0320 08:49:38.763969 7476 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-58c9f8fc64-kr9hd" podStartSLOduration=1.763945645 podStartE2EDuration="1.763945645s" podCreationTimestamp="2026-03-20 08:49:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:49:38.761721773 +0000 UTC m=+859.730490319" watchObservedRunningTime="2026-03-20 08:49:38.763945645 +0000 UTC m=+859.732714211"
Mar 20 08:49:38.802608 master-0 kubenswrapper[7476]: I0320 08:49:38.802078 7476 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-5dbbb8b86f-vhrdf"]
Mar 20 08:49:38.803216 master-0 kubenswrapper[7476]: I0320 08:49:38.803183 7476 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-vhrdf" podUID="74bebf0b-6727-4959-8239-a9389e630524" containerName="multus-admission-controller" containerID="cri-o://46a769eaa885d6f2aee7986a052f5cb914f5503a0051214e8b4e113fe0f1651a" gracePeriod=30
Mar 20 08:49:38.803411 master-0 kubenswrapper[7476]: I0320 08:49:38.803281 7476 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-vhrdf" podUID="74bebf0b-6727-4959-8239-a9389e630524" containerName="kube-rbac-proxy" containerID="cri-o://c75547816c7beb0588174159cdcc45e5aaa905924c1e2a6b0d4ab73f71bb71c9" gracePeriod=30
Mar 20 08:49:39.325107 master-0 kubenswrapper[7476]: E0320 08:49:39.324992 7476 file.go:109] "Unable to process watch event" err="can't process config file \"/etc/kubernetes/manifests/kube-scheduler-pod.yaml\": /etc/kubernetes/manifests/kube-scheduler-pod.yaml: couldn't parse as pod(Object 'Kind' is missing in 'null'), please check config file"
Mar 20 08:49:39.325299 master-0 kubenswrapper[7476]: I0320 08:49:39.325209 7476 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["kube-system/bootstrap-kube-scheduler-master-0"]
Mar 20 08:49:39.325464 master-0 kubenswrapper[7476]: I0320 08:49:39.325426 7476 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID="c83737980b9ee109184b1d78e942cf36" containerName="kube-scheduler" containerID="cri-o://44e6488658001ec197750deb888ad4cc53ef741359268344dae6149df1e9b900" gracePeriod=30
Mar 20 08:49:39.326395 master-0 kubenswrapper[7476]: I0320 08:49:39.326365 7476 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"]
Mar 20 08:49:39.326675 master-0 kubenswrapper[7476]: E0320 08:49:39.326649 7476 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c83737980b9ee109184b1d78e942cf36" containerName="kube-scheduler"
Mar 20 08:49:39.326675 master-0 kubenswrapper[7476]: I0320 08:49:39.326670 7476 state_mem.go:107] "Deleted CPUSet assignment" podUID="c83737980b9ee109184b1d78e942cf36" containerName="kube-scheduler"
Mar 20 08:49:39.326746 master-0 kubenswrapper[7476]: E0320 08:49:39.326683 7476 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c83737980b9ee109184b1d78e942cf36" containerName="kube-scheduler"
Mar 20 08:49:39.326746 master-0 kubenswrapper[7476]: I0320 08:49:39.326692 7476 state_mem.go:107] "Deleted CPUSet assignment" podUID="c83737980b9ee109184b1d78e942cf36" containerName="kube-scheduler"
Mar 20 08:49:39.326903 master-0 kubenswrapper[7476]: I0320 08:49:39.326877 7476 memory_manager.go:354] "RemoveStaleState removing state" podUID="c83737980b9ee109184b1d78e942cf36" containerName="kube-scheduler"
Mar 20 08:49:39.326939 master-0 kubenswrapper[7476]: I0320 08:49:39.326918 7476 memory_manager.go:354] "RemoveStaleState removing state" podUID="c83737980b9ee109184b1d78e942cf36" containerName="kube-scheduler"
Mar 20 08:49:39.327101 master-0 kubenswrapper[7476]: E0320 08:49:39.327076 7476 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c83737980b9ee109184b1d78e942cf36" containerName="kube-scheduler"
Mar 20 08:49:39.327101 master-0 kubenswrapper[7476]: I0320 08:49:39.327095 7476 state_mem.go:107] "Deleted CPUSet assignment" podUID="c83737980b9ee109184b1d78e942cf36" containerName="kube-scheduler"
Mar 20 08:49:39.327305 master-0 kubenswrapper[7476]: I0320 08:49:39.327246 7476 memory_manager.go:354] "RemoveStaleState removing state" podUID="c83737980b9ee109184b1d78e942cf36" containerName="kube-scheduler"
Mar 20 08:49:39.328232 master-0 kubenswrapper[7476]: I0320 08:49:39.328203 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 20 08:49:39.439975 master-0 kubenswrapper[7476]: I0320 08:49:39.439910 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8413125cf444e5c95f023c5dd9c6151e-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8413125cf444e5c95f023c5dd9c6151e\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 20 08:49:39.440237 master-0 kubenswrapper[7476]: I0320 08:49:39.440076 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8413125cf444e5c95f023c5dd9c6151e-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8413125cf444e5c95f023c5dd9c6151e\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 20 08:49:39.469833 master-0 kubenswrapper[7476]: I0320 08:49:39.469770 7476 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"]
Mar 20 08:49:39.539347 master-0 kubenswrapper[7476]: I0320 08:49:39.536209 7476 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 20 08:49:39.542414 master-0 kubenswrapper[7476]: I0320 08:49:39.541335 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8413125cf444e5c95f023c5dd9c6151e-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8413125cf444e5c95f023c5dd9c6151e\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 20 08:49:39.542414 master-0 kubenswrapper[7476]: I0320 08:49:39.541451 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8413125cf444e5c95f023c5dd9c6151e-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8413125cf444e5c95f023c5dd9c6151e\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 20 08:49:39.542414 master-0 kubenswrapper[7476]: I0320 08:49:39.541506 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8413125cf444e5c95f023c5dd9c6151e-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8413125cf444e5c95f023c5dd9c6151e\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 20 08:49:39.542414 master-0 kubenswrapper[7476]: I0320 08:49:39.541456 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8413125cf444e5c95f023c5dd9c6151e-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8413125cf444e5c95f023c5dd9c6151e\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 20 08:49:39.558206 master-0 kubenswrapper[7476]: I0320 08:49:39.558129 7476 kubelet.go:2706] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-scheduler-master-0" mirrorPodUID="3bf0d0f0-455e-43b1-968e-8fb2d3edce7b"
Mar 20 08:49:39.642928 master-0
kubenswrapper[7476]: I0320 08:49:39.642855 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-logs\") pod \"c83737980b9ee109184b1d78e942cf36\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " Mar 20 08:49:39.642928 master-0 kubenswrapper[7476]: I0320 08:49:39.642926 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-secrets\") pod \"c83737980b9ee109184b1d78e942cf36\" (UID: \"c83737980b9ee109184b1d78e942cf36\") " Mar 20 08:49:39.644844 master-0 kubenswrapper[7476]: I0320 08:49:39.643014 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-logs" (OuterVolumeSpecName: "logs") pod "c83737980b9ee109184b1d78e942cf36" (UID: "c83737980b9ee109184b1d78e942cf36"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:49:39.644844 master-0 kubenswrapper[7476]: I0320 08:49:39.643138 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-secrets" (OuterVolumeSpecName: "secrets") pod "c83737980b9ee109184b1d78e942cf36" (UID: "c83737980b9ee109184b1d78e942cf36"). InnerVolumeSpecName "secrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:49:39.644844 master-0 kubenswrapper[7476]: I0320 08:49:39.643598 7476 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-logs\") on node \"master-0\" DevicePath \"\"" Mar 20 08:49:39.644844 master-0 kubenswrapper[7476]: I0320 08:49:39.643625 7476 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/c83737980b9ee109184b1d78e942cf36-secrets\") on node \"master-0\" DevicePath \"\"" Mar 20 08:49:39.749855 master-0 kubenswrapper[7476]: I0320 08:49:39.749793 7476 generic.go:334] "Generic (PLEG): container finished" podID="74bebf0b-6727-4959-8239-a9389e630524" containerID="c75547816c7beb0588174159cdcc45e5aaa905924c1e2a6b0d4ab73f71bb71c9" exitCode=0 Mar 20 08:49:39.750531 master-0 kubenswrapper[7476]: I0320 08:49:39.749905 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-vhrdf" event={"ID":"74bebf0b-6727-4959-8239-a9389e630524","Type":"ContainerDied","Data":"c75547816c7beb0588174159cdcc45e5aaa905924c1e2a6b0d4ab73f71bb71c9"} Mar 20 08:49:39.752136 master-0 kubenswrapper[7476]: I0320 08:49:39.752095 7476 generic.go:334] "Generic (PLEG): container finished" podID="521086da-d513-4475-8db5-098ab9838df1" containerID="35c674a122271104b677e9d9fd6224e868e82108125b554a6b281e82916a6b0b" exitCode=0 Mar 20 08:49:39.752256 master-0 kubenswrapper[7476]: I0320 08:49:39.752199 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-retry-1-master-0" event={"ID":"521086da-d513-4475-8db5-098ab9838df1","Type":"ContainerDied","Data":"35c674a122271104b677e9d9fd6224e868e82108125b554a6b281e82916a6b0b"} Mar 20 08:49:39.756890 master-0 kubenswrapper[7476]: I0320 08:49:39.755921 7476 generic.go:334] "Generic (PLEG): container finished" podID="c83737980b9ee109184b1d78e942cf36" 
containerID="44e6488658001ec197750deb888ad4cc53ef741359268344dae6149df1e9b900" exitCode=0 Mar 20 08:49:39.756890 master-0 kubenswrapper[7476]: I0320 08:49:39.756020 7476 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 20 08:49:39.756890 master-0 kubenswrapper[7476]: I0320 08:49:39.756036 7476 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="314750eb53635940d2e5e7382cfd93fd0e5f6effe69fa93e88c8c6eaa8362332" Mar 20 08:49:39.756890 master-0 kubenswrapper[7476]: I0320 08:49:39.756069 7476 scope.go:117] "RemoveContainer" containerID="2af5fddc5d2a375dc416488e9df9292dbf88621bcffa837acf0f758641cfece0" Mar 20 08:49:39.759896 master-0 kubenswrapper[7476]: I0320 08:49:39.759459 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 20 08:49:39.801495 master-0 kubenswrapper[7476]: W0320 08:49:39.801436 7476 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8413125cf444e5c95f023c5dd9c6151e.slice/crio-9d5e5cd531f78ff97bd0331258baf0fd5a066b5864af8128f7ac14fe1eeaebc5 WatchSource:0}: Error finding container 9d5e5cd531f78ff97bd0331258baf0fd5a066b5864af8128f7ac14fe1eeaebc5: Status 404 returned error can't find the container with id 9d5e5cd531f78ff97bd0331258baf0fd5a066b5864af8128f7ac14fe1eeaebc5 Mar 20 08:49:40.237074 master-0 kubenswrapper[7476]: I0320 08:49:40.236911 7476 scope.go:117] "RemoveContainer" containerID="3aae9bbf36ed2a17618b0c45a78846786d8690aa98bce9f3e1b7b0243ecc47f2" Mar 20 08:49:40.237296 master-0 kubenswrapper[7476]: E0320 08:49:40.237136 7476 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=ingress-operator 
pod=ingress-operator-66b84d69b-dknxr_openshift-ingress-operator(22f85e98-eb36-46b2-ab5d-7c21e060cba5)\"" pod="openshift-ingress-operator/ingress-operator-66b84d69b-dknxr" podUID="22f85e98-eb36-46b2-ab5d-7c21e060cba5" Mar 20 08:49:40.764155 master-0 kubenswrapper[7476]: I0320 08:49:40.764102 7476 generic.go:334] "Generic (PLEG): container finished" podID="8413125cf444e5c95f023c5dd9c6151e" containerID="977167918f7e6bd33389cf095bf0a1f6441c8367a8bb9ad4ad8439f4003209b0" exitCode=0 Mar 20 08:49:40.765014 master-0 kubenswrapper[7476]: I0320 08:49:40.764195 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8413125cf444e5c95f023c5dd9c6151e","Type":"ContainerDied","Data":"977167918f7e6bd33389cf095bf0a1f6441c8367a8bb9ad4ad8439f4003209b0"} Mar 20 08:49:40.765014 master-0 kubenswrapper[7476]: I0320 08:49:40.764521 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8413125cf444e5c95f023c5dd9c6151e","Type":"ContainerStarted","Data":"9d5e5cd531f78ff97bd0331258baf0fd5a066b5864af8128f7ac14fe1eeaebc5"} Mar 20 08:49:41.122989 master-0 kubenswrapper[7476]: I0320 08:49:41.122929 7476 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-4-retry-1-master-0" Mar 20 08:49:41.248234 master-0 kubenswrapper[7476]: I0320 08:49:41.248178 7476 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c83737980b9ee109184b1d78e942cf36" path="/var/lib/kubelet/pods/c83737980b9ee109184b1d78e942cf36/volumes" Mar 20 08:49:41.248578 master-0 kubenswrapper[7476]: I0320 08:49:41.248547 7476 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID="" Mar 20 08:49:41.269857 master-0 kubenswrapper[7476]: I0320 08:49:41.269741 7476 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"] Mar 20 08:49:41.269857 master-0 kubenswrapper[7476]: I0320 08:49:41.269778 7476 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-scheduler-master-0" mirrorPodUID="3bf0d0f0-455e-43b1-968e-8fb2d3edce7b" Mar 20 08:49:41.272136 master-0 kubenswrapper[7476]: I0320 08:49:41.272074 7476 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"] Mar 20 08:49:41.272207 master-0 kubenswrapper[7476]: I0320 08:49:41.272136 7476 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-scheduler-master-0" mirrorPodUID="3bf0d0f0-455e-43b1-968e-8fb2d3edce7b" Mar 20 08:49:41.297183 master-0 kubenswrapper[7476]: I0320 08:49:41.297101 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/521086da-d513-4475-8db5-098ab9838df1-kube-api-access\") pod \"521086da-d513-4475-8db5-098ab9838df1\" (UID: \"521086da-d513-4475-8db5-098ab9838df1\") " Mar 20 08:49:41.297404 master-0 kubenswrapper[7476]: I0320 08:49:41.297204 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/521086da-d513-4475-8db5-098ab9838df1-var-lock\") pod \"521086da-d513-4475-8db5-098ab9838df1\" (UID: \"521086da-d513-4475-8db5-098ab9838df1\") " Mar 20 08:49:41.297404 master-0 kubenswrapper[7476]: I0320 08:49:41.297250 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/521086da-d513-4475-8db5-098ab9838df1-kubelet-dir\") pod \"521086da-d513-4475-8db5-098ab9838df1\" (UID: \"521086da-d513-4475-8db5-098ab9838df1\") " Mar 20 08:49:41.297404 master-0 kubenswrapper[7476]: I0320 08:49:41.297386 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/521086da-d513-4475-8db5-098ab9838df1-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "521086da-d513-4475-8db5-098ab9838df1" (UID: "521086da-d513-4475-8db5-098ab9838df1"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:49:41.297815 master-0 kubenswrapper[7476]: I0320 08:49:41.297340 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/521086da-d513-4475-8db5-098ab9838df1-var-lock" (OuterVolumeSpecName: "var-lock") pod "521086da-d513-4475-8db5-098ab9838df1" (UID: "521086da-d513-4475-8db5-098ab9838df1"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:49:41.298115 master-0 kubenswrapper[7476]: I0320 08:49:41.298048 7476 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/521086da-d513-4475-8db5-098ab9838df1-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 20 08:49:41.298115 master-0 kubenswrapper[7476]: I0320 08:49:41.298094 7476 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/521086da-d513-4475-8db5-098ab9838df1-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 20 08:49:41.300052 master-0 kubenswrapper[7476]: I0320 08:49:41.299983 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/521086da-d513-4475-8db5-098ab9838df1-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "521086da-d513-4475-8db5-098ab9838df1" (UID: "521086da-d513-4475-8db5-098ab9838df1"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:49:41.399029 master-0 kubenswrapper[7476]: I0320 08:49:41.398959 7476 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/521086da-d513-4475-8db5-098ab9838df1-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 20 08:49:41.783684 master-0 kubenswrapper[7476]: I0320 08:49:41.783601 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-retry-1-master-0" event={"ID":"521086da-d513-4475-8db5-098ab9838df1","Type":"ContainerDied","Data":"dafa7bfa1891cfd7726eb94b085308d784cb5068654283dc7ca015d37e624b07"} Mar 20 08:49:41.783684 master-0 kubenswrapper[7476]: I0320 08:49:41.783657 7476 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-4-retry-1-master-0" Mar 20 08:49:41.783684 master-0 kubenswrapper[7476]: I0320 08:49:41.783681 7476 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dafa7bfa1891cfd7726eb94b085308d784cb5068654283dc7ca015d37e624b07" Mar 20 08:49:41.791363 master-0 kubenswrapper[7476]: I0320 08:49:41.791259 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8413125cf444e5c95f023c5dd9c6151e","Type":"ContainerStarted","Data":"294d5c130b0b65fdd6ffb533a9f65b52d295ccc3a6eab6b7ca1618e56519b844"} Mar 20 08:49:41.791460 master-0 kubenswrapper[7476]: I0320 08:49:41.791392 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8413125cf444e5c95f023c5dd9c6151e","Type":"ContainerStarted","Data":"801adefeaa867d4ddecc5aa6ca06902111266589a16c1a6d41af9de695634c0f"} Mar 20 08:49:41.791460 master-0 kubenswrapper[7476]: I0320 08:49:41.791418 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8413125cf444e5c95f023c5dd9c6151e","Type":"ContainerStarted","Data":"a041326266cad8376feb00367e376ef0928972722fd2a38761524556e9a05575"} Mar 20 08:49:41.792748 master-0 kubenswrapper[7476]: I0320 08:49:41.792703 7476 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 20 08:49:42.724137 master-0 kubenswrapper[7476]: I0320 08:49:42.724028 7476 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 20 08:49:42.724737 master-0 kubenswrapper[7476]: I0320 08:49:42.724579 7476 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" 
podUID="8c753d068f364b16e3aeb8396b7d9f33" containerName="kube-controller-manager-recovery-controller" containerID="cri-o://6a28afb738c9c2073bc05b9348b9c75a264404f75cc50c365df56617d08f8df7" gracePeriod=30 Mar 20 08:49:42.724853 master-0 kubenswrapper[7476]: I0320 08:49:42.724709 7476 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="8c753d068f364b16e3aeb8396b7d9f33" containerName="kube-controller-manager-cert-syncer" containerID="cri-o://f78e40298df65c394b57075117fe7f98a0fc45e73d30d6562f972d2e2a4f82d6" gracePeriod=30 Mar 20 08:49:42.724853 master-0 kubenswrapper[7476]: I0320 08:49:42.724731 7476 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="8c753d068f364b16e3aeb8396b7d9f33" containerName="cluster-policy-controller" containerID="cri-o://2c6110300fa54d6ed375548abc413fdc3f31d0ebf2371316f5aaf0aa6152114c" gracePeriod=30 Mar 20 08:49:42.725057 master-0 kubenswrapper[7476]: I0320 08:49:42.724903 7476 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="8c753d068f364b16e3aeb8396b7d9f33" containerName="kube-controller-manager" containerID="cri-o://f63f62f8a7067ea3a407516779664a4cf9407bbbc72a0354b7960b7a56a2eeb2" gracePeriod=30 Mar 20 08:49:42.726337 master-0 kubenswrapper[7476]: I0320 08:49:42.725443 7476 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 20 08:49:42.726337 master-0 kubenswrapper[7476]: E0320 08:49:42.725820 7476 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c753d068f364b16e3aeb8396b7d9f33" containerName="kube-controller-manager" Mar 20 08:49:42.726337 master-0 kubenswrapper[7476]: I0320 08:49:42.725845 7476 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="8c753d068f364b16e3aeb8396b7d9f33" containerName="kube-controller-manager" Mar 20 08:49:42.726337 master-0 kubenswrapper[7476]: E0320 08:49:42.725874 7476 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="521086da-d513-4475-8db5-098ab9838df1" containerName="installer" Mar 20 08:49:42.726337 master-0 kubenswrapper[7476]: I0320 08:49:42.725889 7476 state_mem.go:107] "Deleted CPUSet assignment" podUID="521086da-d513-4475-8db5-098ab9838df1" containerName="installer" Mar 20 08:49:42.726337 master-0 kubenswrapper[7476]: E0320 08:49:42.725915 7476 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c753d068f364b16e3aeb8396b7d9f33" containerName="cluster-policy-controller" Mar 20 08:49:42.726337 master-0 kubenswrapper[7476]: I0320 08:49:42.725927 7476 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c753d068f364b16e3aeb8396b7d9f33" containerName="cluster-policy-controller" Mar 20 08:49:42.726337 master-0 kubenswrapper[7476]: E0320 08:49:42.725944 7476 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c753d068f364b16e3aeb8396b7d9f33" containerName="cluster-policy-controller" Mar 20 08:49:42.726337 master-0 kubenswrapper[7476]: I0320 08:49:42.725956 7476 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c753d068f364b16e3aeb8396b7d9f33" containerName="cluster-policy-controller" Mar 20 08:49:42.726337 master-0 kubenswrapper[7476]: E0320 08:49:42.725978 7476 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c753d068f364b16e3aeb8396b7d9f33" containerName="kube-controller-manager-cert-syncer" Mar 20 08:49:42.726337 master-0 kubenswrapper[7476]: I0320 08:49:42.725991 7476 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c753d068f364b16e3aeb8396b7d9f33" containerName="kube-controller-manager-cert-syncer" Mar 20 08:49:42.726337 master-0 kubenswrapper[7476]: E0320 08:49:42.726010 7476 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c753d068f364b16e3aeb8396b7d9f33" 
containerName="kube-controller-manager-recovery-controller" Mar 20 08:49:42.726337 master-0 kubenswrapper[7476]: I0320 08:49:42.726023 7476 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c753d068f364b16e3aeb8396b7d9f33" containerName="kube-controller-manager-recovery-controller" Mar 20 08:49:42.726337 master-0 kubenswrapper[7476]: E0320 08:49:42.726052 7476 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c753d068f364b16e3aeb8396b7d9f33" containerName="kube-controller-manager-cert-syncer" Mar 20 08:49:42.726337 master-0 kubenswrapper[7476]: I0320 08:49:42.726070 7476 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c753d068f364b16e3aeb8396b7d9f33" containerName="kube-controller-manager-cert-syncer" Mar 20 08:49:42.726337 master-0 kubenswrapper[7476]: E0320 08:49:42.726097 7476 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c753d068f364b16e3aeb8396b7d9f33" containerName="cluster-policy-controller" Mar 20 08:49:42.726337 master-0 kubenswrapper[7476]: I0320 08:49:42.726117 7476 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c753d068f364b16e3aeb8396b7d9f33" containerName="cluster-policy-controller" Mar 20 08:49:42.726337 master-0 kubenswrapper[7476]: E0320 08:49:42.726139 7476 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c753d068f364b16e3aeb8396b7d9f33" containerName="cluster-policy-controller" Mar 20 08:49:42.726337 master-0 kubenswrapper[7476]: I0320 08:49:42.726155 7476 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c753d068f364b16e3aeb8396b7d9f33" containerName="cluster-policy-controller" Mar 20 08:49:42.729367 master-0 kubenswrapper[7476]: I0320 08:49:42.726393 7476 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c753d068f364b16e3aeb8396b7d9f33" containerName="kube-controller-manager-cert-syncer" Mar 20 08:49:42.729367 master-0 kubenswrapper[7476]: I0320 08:49:42.726416 7476 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="8c753d068f364b16e3aeb8396b7d9f33" containerName="kube-controller-manager-recovery-controller" Mar 20 08:49:42.729367 master-0 kubenswrapper[7476]: I0320 08:49:42.726434 7476 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c753d068f364b16e3aeb8396b7d9f33" containerName="cluster-policy-controller" Mar 20 08:49:42.729367 master-0 kubenswrapper[7476]: I0320 08:49:42.726447 7476 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c753d068f364b16e3aeb8396b7d9f33" containerName="cluster-policy-controller" Mar 20 08:49:42.729367 master-0 kubenswrapper[7476]: I0320 08:49:42.726467 7476 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c753d068f364b16e3aeb8396b7d9f33" containerName="cluster-policy-controller" Mar 20 08:49:42.729367 master-0 kubenswrapper[7476]: I0320 08:49:42.726484 7476 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c753d068f364b16e3aeb8396b7d9f33" containerName="kube-controller-manager" Mar 20 08:49:42.729367 master-0 kubenswrapper[7476]: I0320 08:49:42.726500 7476 memory_manager.go:354] "RemoveStaleState removing state" podUID="521086da-d513-4475-8db5-098ab9838df1" containerName="installer" Mar 20 08:49:42.729367 master-0 kubenswrapper[7476]: I0320 08:49:42.726518 7476 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c753d068f364b16e3aeb8396b7d9f33" containerName="kube-controller-manager-cert-syncer" Mar 20 08:49:42.729367 master-0 kubenswrapper[7476]: I0320 08:49:42.726540 7476 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c753d068f364b16e3aeb8396b7d9f33" containerName="kube-controller-manager" Mar 20 08:49:42.729367 master-0 kubenswrapper[7476]: I0320 08:49:42.726561 7476 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c753d068f364b16e3aeb8396b7d9f33" containerName="cluster-policy-controller" Mar 20 08:49:42.729367 master-0 kubenswrapper[7476]: E0320 08:49:42.726792 7476 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="8c753d068f364b16e3aeb8396b7d9f33" containerName="kube-controller-manager" Mar 20 08:49:42.729367 master-0 kubenswrapper[7476]: I0320 08:49:42.726809 7476 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c753d068f364b16e3aeb8396b7d9f33" containerName="kube-controller-manager" Mar 20 08:49:42.729367 master-0 kubenswrapper[7476]: E0320 08:49:42.726839 7476 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c753d068f364b16e3aeb8396b7d9f33" containerName="cluster-policy-controller" Mar 20 08:49:42.729367 master-0 kubenswrapper[7476]: I0320 08:49:42.726851 7476 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c753d068f364b16e3aeb8396b7d9f33" containerName="cluster-policy-controller" Mar 20 08:49:42.729367 master-0 kubenswrapper[7476]: I0320 08:49:42.727050 7476 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c753d068f364b16e3aeb8396b7d9f33" containerName="cluster-policy-controller" Mar 20 08:49:42.825593 master-0 kubenswrapper[7476]: I0320 08:49:42.825433 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/36f4a012744c6465102d09cc67ac63e6-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"36f4a012744c6465102d09cc67ac63e6\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 20 08:49:42.826464 master-0 kubenswrapper[7476]: I0320 08:49:42.826099 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/36f4a012744c6465102d09cc67ac63e6-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"36f4a012744c6465102d09cc67ac63e6\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 20 08:49:42.902099 master-0 kubenswrapper[7476]: I0320 08:49:42.901974 7476 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_8c753d068f364b16e3aeb8396b7d9f33/kube-controller-manager-cert-syncer/1.log" Mar 20 08:49:42.903118 master-0 kubenswrapper[7476]: I0320 08:49:42.903080 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_8c753d068f364b16e3aeb8396b7d9f33/cluster-policy-controller/3.log" Mar 20 08:49:42.904682 master-0 kubenswrapper[7476]: I0320 08:49:42.904652 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_8c753d068f364b16e3aeb8396b7d9f33/kube-controller-manager-cert-syncer/0.log" Mar 20 08:49:42.905344 master-0 kubenswrapper[7476]: I0320 08:49:42.905310 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_8c753d068f364b16e3aeb8396b7d9f33/kube-controller-manager/0.log" Mar 20 08:49:42.905508 master-0 kubenswrapper[7476]: I0320 08:49:42.905470 7476 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 20 08:49:42.908629 master-0 kubenswrapper[7476]: I0320 08:49:42.908511 7476 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="8c753d068f364b16e3aeb8396b7d9f33" podUID="36f4a012744c6465102d09cc67ac63e6" Mar 20 08:49:42.928413 master-0 kubenswrapper[7476]: I0320 08:49:42.928115 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8c753d068f364b16e3aeb8396b7d9f33-cert-dir\") pod \"8c753d068f364b16e3aeb8396b7d9f33\" (UID: \"8c753d068f364b16e3aeb8396b7d9f33\") " Mar 20 08:49:42.928413 master-0 kubenswrapper[7476]: I0320 08:49:42.928292 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8c753d068f364b16e3aeb8396b7d9f33-resource-dir\") pod \"8c753d068f364b16e3aeb8396b7d9f33\" (UID: \"8c753d068f364b16e3aeb8396b7d9f33\") " Mar 20 08:49:42.928645 master-0 kubenswrapper[7476]: I0320 08:49:42.928511 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/36f4a012744c6465102d09cc67ac63e6-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"36f4a012744c6465102d09cc67ac63e6\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 20 08:49:42.928684 master-0 kubenswrapper[7476]: I0320 08:49:42.928647 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/36f4a012744c6465102d09cc67ac63e6-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"36f4a012744c6465102d09cc67ac63e6\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 20 08:49:42.928788 master-0 
kubenswrapper[7476]: I0320 08:49:42.928759 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/36f4a012744c6465102d09cc67ac63e6-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"36f4a012744c6465102d09cc67ac63e6\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 20 08:49:42.928895 master-0 kubenswrapper[7476]: I0320 08:49:42.928867 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c753d068f364b16e3aeb8396b7d9f33-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "8c753d068f364b16e3aeb8396b7d9f33" (UID: "8c753d068f364b16e3aeb8396b7d9f33"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 20 08:49:42.928944 master-0 kubenswrapper[7476]: I0320 08:49:42.928915 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c753d068f364b16e3aeb8396b7d9f33-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "8c753d068f364b16e3aeb8396b7d9f33" (UID: "8c753d068f364b16e3aeb8396b7d9f33"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 20 08:49:42.928979 master-0 kubenswrapper[7476]: I0320 08:49:42.928961 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/36f4a012744c6465102d09cc67ac63e6-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"36f4a012744c6465102d09cc67ac63e6\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 20 08:49:43.030586 master-0 kubenswrapper[7476]: I0320 08:49:43.030419 7476 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8c753d068f364b16e3aeb8396b7d9f33-cert-dir\") on node \"master-0\" DevicePath \"\""
Mar 20 08:49:43.030586 master-0 kubenswrapper[7476]: I0320 08:49:43.030458 7476 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8c753d068f364b16e3aeb8396b7d9f33-resource-dir\") on node \"master-0\" DevicePath \"\""
Mar 20 08:49:43.251629 master-0 kubenswrapper[7476]: I0320 08:49:43.251489 7476 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c753d068f364b16e3aeb8396b7d9f33" path="/var/lib/kubelet/pods/8c753d068f364b16e3aeb8396b7d9f33/volumes"
Mar 20 08:49:43.796006 master-0 kubenswrapper[7476]: E0320 08:49:43.795952 7476 file.go:109] "Unable to process watch event" err="can't process config file \"/etc/kubernetes/manifests/kube-apiserver-startup-monitor-pod.yaml\": /etc/kubernetes/manifests/kube-apiserver-startup-monitor-pod.yaml: couldn't parse as pod(Object 'Kind' is missing in 'null'), please check config file"
Mar 20 08:49:43.798591 master-0 kubenswrapper[7476]: I0320 08:49:43.798528 7476 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Mar 20 08:49:43.799955 master-0 kubenswrapper[7476]: I0320 08:49:43.799903 7476 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"]
Mar 20 08:49:43.800149 master-0 kubenswrapper[7476]: I0320 08:49:43.800095 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 20 08:49:43.800473 master-0 kubenswrapper[7476]: I0320 08:49:43.800367 7476 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver" containerID="cri-o://996fe2d1381bcbb6636ef59ecc1b88b6322ed1e9bf66452efbeee5d66a74da5c" gracePeriod=15
Mar 20 08:49:43.800586 master-0 kubenswrapper[7476]: I0320 08:49:43.800416 7476 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://21f174bcce18cc2a151013015a2ad5940b94d6576aae52b77f0f5fec8f803898" gracePeriod=15
Mar 20 08:49:43.801887 master-0 kubenswrapper[7476]: I0320 08:49:43.801641 7476 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Mar 20 08:49:43.802054 master-0 kubenswrapper[7476]: E0320 08:49:43.801983 7476 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49fac1b46a11e49501805e891baae4a9" containerName="setup"
Mar 20 08:49:43.802054 master-0 kubenswrapper[7476]: I0320 08:49:43.802021 7476 state_mem.go:107] "Deleted CPUSet assignment" podUID="49fac1b46a11e49501805e891baae4a9" containerName="setup"
Mar 20 08:49:43.802054 master-0 kubenswrapper[7476]: E0320 08:49:43.802045 7476 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver-insecure-readyz"
Mar 20 08:49:43.802402 master-0 kubenswrapper[7476]: I0320 08:49:43.802062 7476 state_mem.go:107] "Deleted CPUSet assignment" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver-insecure-readyz"
Mar 20 08:49:43.802402 master-0 kubenswrapper[7476]: E0320 08:49:43.802096 7476 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver"
Mar 20 08:49:43.802402 master-0 kubenswrapper[7476]: I0320 08:49:43.802112 7476 state_mem.go:107] "Deleted CPUSet assignment" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver"
Mar 20 08:49:43.802665 master-0 kubenswrapper[7476]: I0320 08:49:43.802460 7476 memory_manager.go:354] "RemoveStaleState removing state" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver"
Mar 20 08:49:43.802665 master-0 kubenswrapper[7476]: I0320 08:49:43.802497 7476 memory_manager.go:354] "RemoveStaleState removing state" podUID="49fac1b46a11e49501805e891baae4a9" containerName="setup"
Mar 20 08:49:43.802665 master-0 kubenswrapper[7476]: I0320 08:49:43.802521 7476 memory_manager.go:354] "RemoveStaleState removing state" podUID="49fac1b46a11e49501805e891baae4a9" containerName="kube-apiserver-insecure-readyz"
Mar 20 08:49:43.806179 master-0 kubenswrapper[7476]: I0320 08:49:43.806098 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 20 08:49:43.820708 master-0 kubenswrapper[7476]: I0320 08:49:43.820605 7476 generic.go:334] "Generic (PLEG): container finished" podID="75cef5aa-93e6-4b8b-9ab1-06809e85883a" containerID="dc68fd475ff9f6055eceb076d1b60266600d047f4d29a9bd68c9771cc87efbc5" exitCode=0
Mar 20 08:49:43.820940 master-0 kubenswrapper[7476]: I0320 08:49:43.820701 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" event={"ID":"75cef5aa-93e6-4b8b-9ab1-06809e85883a","Type":"ContainerDied","Data":"dc68fd475ff9f6055eceb076d1b60266600d047f4d29a9bd68c9771cc87efbc5"}
Mar 20 08:49:43.825258 master-0 kubenswrapper[7476]: I0320 08:49:43.825177 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_8c753d068f364b16e3aeb8396b7d9f33/kube-controller-manager-cert-syncer/1.log"
Mar 20 08:49:43.827005 master-0 kubenswrapper[7476]: I0320 08:49:43.826952 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_8c753d068f364b16e3aeb8396b7d9f33/cluster-policy-controller/3.log"
Mar 20 08:49:43.829593 master-0 kubenswrapper[7476]: I0320 08:49:43.829541 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_8c753d068f364b16e3aeb8396b7d9f33/kube-controller-manager-cert-syncer/0.log"
Mar 20 08:49:43.830607 master-0 kubenswrapper[7476]: I0320 08:49:43.830548 7476 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_8c753d068f364b16e3aeb8396b7d9f33/kube-controller-manager/0.log"
Mar 20 08:49:43.830766 master-0 kubenswrapper[7476]: I0320 08:49:43.830656 7476 generic.go:334] "Generic (PLEG): container finished" podID="8c753d068f364b16e3aeb8396b7d9f33" containerID="f78e40298df65c394b57075117fe7f98a0fc45e73d30d6562f972d2e2a4f82d6" exitCode=2
Mar 20 08:49:43.830766 master-0 kubenswrapper[7476]: I0320 08:49:43.830694 7476 generic.go:334] "Generic (PLEG): container finished" podID="8c753d068f364b16e3aeb8396b7d9f33" containerID="2c6110300fa54d6ed375548abc413fdc3f31d0ebf2371316f5aaf0aa6152114c" exitCode=0
Mar 20 08:49:43.830766 master-0 kubenswrapper[7476]: I0320 08:49:43.830718 7476 generic.go:334] "Generic (PLEG): container finished" podID="8c753d068f364b16e3aeb8396b7d9f33" containerID="f63f62f8a7067ea3a407516779664a4cf9407bbbc72a0354b7960b7a56a2eeb2" exitCode=0
Mar 20 08:49:43.830766 master-0 kubenswrapper[7476]: I0320 08:49:43.830741 7476 generic.go:334] "Generic (PLEG): container finished" podID="8c753d068f364b16e3aeb8396b7d9f33" containerID="6a28afb738c9c2073bc05b9348b9c75a264404f75cc50c365df56617d08f8df7" exitCode=0
Mar 20 08:49:43.831140 master-0 kubenswrapper[7476]: I0320 08:49:43.830776 7476 scope.go:117] "RemoveContainer" containerID="f78e40298df65c394b57075117fe7f98a0fc45e73d30d6562f972d2e2a4f82d6"
Mar 20 08:49:43.831692 master-0 kubenswrapper[7476]: I0320 08:49:43.831542 7476 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 20 08:49:43.852412 master-0 kubenswrapper[7476]: I0320 08:49:43.852217 7476 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podStartSLOduration=4.852176055 podStartE2EDuration="4.852176055s" podCreationTimestamp="2026-03-20 08:49:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:49:41.827588621 +0000 UTC m=+862.796357147" watchObservedRunningTime="2026-03-20 08:49:43.852176055 +0000 UTC m=+864.820944661"
Mar 20 08:49:43.852860 master-0 kubenswrapper[7476]: I0320 08:49:43.852739 7476 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="8c753d068f364b16e3aeb8396b7d9f33" podUID="36f4a012744c6465102d09cc67ac63e6"
Mar 20 08:49:43.877164 master-0 kubenswrapper[7476]: I0320 08:49:43.877056 7476 status_manager.go:851] "Failed to get status for pod" podUID="8c753d068f364b16e3aeb8396b7d9f33" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 20 08:49:43.881463 master-0 kubenswrapper[7476]: E0320 08:49:43.881385 7476 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 20 08:49:43.915619 master-0 kubenswrapper[7476]: E0320 08:49:43.908171 7476 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 20 08:49:43.926473 master-0 kubenswrapper[7476]: I0320 08:49:43.926369 7476 scope.go:117] "RemoveContainer" containerID="2c6110300fa54d6ed375548abc413fdc3f31d0ebf2371316f5aaf0aa6152114c"
Mar 20 08:49:43.947176 master-0 kubenswrapper[7476]: I0320 08:49:43.947123 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 20 08:49:43.947894 master-0 kubenswrapper[7476]: I0320 08:49:43.947824 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 20 08:49:43.948102 master-0 kubenswrapper[7476]: I0320 08:49:43.948058 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 20 08:49:43.948196 master-0 kubenswrapper[7476]: I0320 08:49:43.948155 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 20 08:49:43.948410 master-0 kubenswrapper[7476]: I0320 08:49:43.948355 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 20 08:49:43.948557 master-0 kubenswrapper[7476]: I0320 08:49:43.948484 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 20 08:49:43.948637 master-0 kubenswrapper[7476]: I0320 08:49:43.948602 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 20 08:49:43.948743 master-0 kubenswrapper[7476]: I0320 08:49:43.948709 7476 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 20 08:49:44.027233 master-0 kubenswrapper[7476]: I0320 08:49:44.027146 7476 scope.go:117] "RemoveContainer" containerID="ffa42243f2a40a0ca4a5329f1f45102e1878a8f1a80cf13cc28b8ea97fa30941"
Mar 20 08:49:44.044325 master-0 kubenswrapper[7476]: I0320 08:49:44.044285 7476 scope.go:117] "RemoveContainer" containerID="f63f62f8a7067ea3a407516779664a4cf9407bbbc72a0354b7960b7a56a2eeb2"
Mar 20 08:49:44.049824 master-0 kubenswrapper[7476]: I0320 08:49:44.049705 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 20 08:49:44.049824 master-0 kubenswrapper[7476]: I0320 08:49:44.049786 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 20 08:49:44.050026 master-0 kubenswrapper[7476]: I0320 08:49:44.049850 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 20 08:49:44.050026 master-0 kubenswrapper[7476]: I0320 08:49:44.049860 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 20 08:49:44.050026 master-0 kubenswrapper[7476]: I0320 08:49:44.049945 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 20 08:49:44.050026 master-0 kubenswrapper[7476]: I0320 08:49:44.049924 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 20 08:49:44.050026 master-0 kubenswrapper[7476]: I0320 08:49:44.050005 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 20 08:49:44.050462 master-0 kubenswrapper[7476]: I0320 08:49:44.050048 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 20 08:49:44.050462 master-0 kubenswrapper[7476]: I0320 08:49:44.050092 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 20 08:49:44.050462 master-0 kubenswrapper[7476]: I0320 08:49:44.050101 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 20 08:49:44.050462 master-0 kubenswrapper[7476]: I0320 08:49:44.050058 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 20 08:49:44.050462 master-0 kubenswrapper[7476]: I0320 08:49:44.050124 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 20 08:49:44.050462 master-0 kubenswrapper[7476]: I0320 08:49:44.050167 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 20 08:49:44.050462 master-0 kubenswrapper[7476]: I0320 08:49:44.050242 7476 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 20 08:49:44.050462 master-0 kubenswrapper[7476]: I0320 08:49:44.050354 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 20 08:49:44.050462 master-0 kubenswrapper[7476]: I0320 08:49:44.050417 7476 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 20 08:49:44.060950 master-0 kubenswrapper[7476]: I0320 08:49:44.060900 7476 scope.go:117] "RemoveContainer" containerID="6a28afb738c9c2073bc05b9348b9c75a264404f75cc50c365df56617d08f8df7"
Mar 20 08:49:44.082700 master-0 kubenswrapper[7476]: I0320 08:49:44.082656 7476 scope.go:117] "RemoveContainer" containerID="4d5da7565de19239fcc0c39fa988fe40900703ed15ce4e08aa33c43306bfa249"
Mar 20 08:49:44.108441 master-0 kubenswrapper[7476]: I0320 08:49:44.108282 7476 scope.go:117] "RemoveContainer" containerID="c247335e00ed8c2dbcd12d654ce2b480b9f5ed4f4f141fa661641d6980a82c7e"
Mar 20 08:49:44.129537 master-0 kubenswrapper[7476]: I0320 08:49:44.129472 7476 scope.go:117] "RemoveContainer" containerID="f78e40298df65c394b57075117fe7f98a0fc45e73d30d6562f972d2e2a4f82d6"
Mar 20 08:49:44.130115 master-0 kubenswrapper[7476]: E0320 08:49:44.130060 7476 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f78e40298df65c394b57075117fe7f98a0fc45e73d30d6562f972d2e2a4f82d6\": container with ID starting with f78e40298df65c394b57075117fe7f98a0fc45e73d30d6562f972d2e2a4f82d6 not found: ID does not exist" containerID="f78e40298df65c394b57075117fe7f98a0fc45e73d30d6562f972d2e2a4f82d6"
Mar 20 08:49:44.130289 master-0 kubenswrapper[7476]: I0320 08:49:44.130121 7476 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f78e40298df65c394b57075117fe7f98a0fc45e73d30d6562f972d2e2a4f82d6"} err="failed to get container status \"f78e40298df65c394b57075117fe7f98a0fc45e73d30d6562f972d2e2a4f82d6\": rpc error: code = NotFound desc = could not find container \"f78e40298df65c394b57075117fe7f98a0fc45e73d30d6562f972d2e2a4f82d6\": container with ID starting with f78e40298df65c394b57075117fe7f98a0fc45e73d30d6562f972d2e2a4f82d6 not found: ID does not exist"
Mar 20 08:49:44.130289 master-0 kubenswrapper[7476]: I0320 08:49:44.130159 7476 scope.go:117] "RemoveContainer" containerID="2c6110300fa54d6ed375548abc413fdc3f31d0ebf2371316f5aaf0aa6152114c"
Mar 20 08:49:44.130662 master-0 kubenswrapper[7476]: E0320 08:49:44.130611 7476 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c6110300fa54d6ed375548abc413fdc3f31d0ebf2371316f5aaf0aa6152114c\": container with ID starting with 2c6110300fa54d6ed375548abc413fdc3f31d0ebf2371316f5aaf0aa6152114c not found: ID does not exist" containerID="2c6110300fa54d6ed375548abc413fdc3f31d0ebf2371316f5aaf0aa6152114c"
Mar 20 08:49:44.130745 master-0 kubenswrapper[7476]: I0320 08:49:44.130665 7476 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c6110300fa54d6ed375548abc413fdc3f31d0ebf2371316f5aaf0aa6152114c"} err="failed to get container status \"2c6110300fa54d6ed375548abc413fdc3f31d0ebf2371316f5aaf0aa6152114c\": rpc error: code = NotFound desc = could not find container \"2c6110300fa54d6ed375548abc413fdc3f31d0ebf2371316f5aaf0aa6152114c\": container with ID starting with 2c6110300fa54d6ed375548abc413fdc3f31d0ebf2371316f5aaf0aa6152114c not found: ID does not exist"
Mar 20 08:49:44.130813 master-0 kubenswrapper[7476]: I0320 08:49:44.130745 7476 scope.go:117] "RemoveContainer" containerID="ffa42243f2a40a0ca4a5329f1f45102e1878a8f1a80cf13cc28b8ea97fa30941"
Mar 20 08:49:44.131055 master-0 kubenswrapper[7476]: E0320 08:49:44.131026 7476 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffa42243f2a40a0ca4a5329f1f45102e1878a8f1a80cf13cc28b8ea97fa30941\": container with ID starting with ffa42243f2a40a0ca4a5329f1f45102e1878a8f1a80cf13cc28b8ea97fa30941 not found: ID does not exist" containerID="ffa42243f2a40a0ca4a5329f1f45102e1878a8f1a80cf13cc28b8ea97fa30941"
Mar 20 08:49:44.131181 master-0 kubenswrapper[7476]: I0320 08:49:44.131055 7476 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffa42243f2a40a0ca4a5329f1f45102e1878a8f1a80cf13cc28b8ea97fa30941"} err="failed to get container status \"ffa42243f2a40a0ca4a5329f1f45102e1878a8f1a80cf13cc28b8ea97fa30941\": rpc error: code = NotFound desc = could not find container \"ffa42243f2a40a0ca4a5329f1f45102e1878a8f1a80cf13cc28b8ea97fa30941\": container with ID starting with ffa42243f2a40a0ca4a5329f1f45102e1878a8f1a80cf13cc28b8ea97fa30941 not found: ID does not exist"
Mar 20 08:49:44.131181 master-0 kubenswrapper[7476]: I0320 08:49:44.131075 7476 scope.go:117] "RemoveContainer" containerID="f63f62f8a7067ea3a407516779664a4cf9407bbbc72a0354b7960b7a56a2eeb2"
Mar 20 08:49:44.131848 master-0 kubenswrapper[7476]: E0320 08:49:44.131810 7476 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f63f62f8a7067ea3a407516779664a4cf9407bbbc72a0354b7960b7a56a2eeb2\": container with ID starting with f63f62f8a7067ea3a407516779664a4cf9407bbbc72a0354b7960b7a56a2eeb2 not found: ID does not exist" containerID="f63f62f8a7067ea3a407516779664a4cf9407bbbc72a0354b7960b7a56a2eeb2"
Mar 20 08:49:44.131965 master-0 kubenswrapper[7476]: I0320 08:49:44.131847 7476 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f63f62f8a7067ea3a407516779664a4cf9407bbbc72a0354b7960b7a56a2eeb2"} err="failed to get container status \"f63f62f8a7067ea3a407516779664a4cf9407bbbc72a0354b7960b7a56a2eeb2\": rpc error: code = NotFound desc = could not find container \"f63f62f8a7067ea3a407516779664a4cf9407bbbc72a0354b7960b7a56a2eeb2\": container with ID starting with f63f62f8a7067ea3a407516779664a4cf9407bbbc72a0354b7960b7a56a2eeb2 not found: ID does not exist"
Mar 20 08:49:44.131965 master-0 kubenswrapper[7476]: I0320 08:49:44.131869 7476 scope.go:117] "RemoveContainer" containerID="6a28afb738c9c2073bc05b9348b9c75a264404f75cc50c365df56617d08f8df7"
Mar 20 08:49:44.132643 master-0 kubenswrapper[7476]: E0320 08:49:44.132610 7476 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a28afb738c9c2073bc05b9348b9c75a264404f75cc50c365df56617d08f8df7\": container with ID starting with 6a28afb738c9c2073bc05b9348b9c75a264404f75cc50c365df56617d08f8df7 not found: ID does not exist" containerID="6a28afb738c9c2073bc05b9348b9c75a264404f75cc50c365df56617d08f8df7"
Mar 20 08:49:44.132643 master-0 kubenswrapper[7476]: I0320 08:49:44.132641 7476 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a28afb738c9c2073bc05b9348b9c75a264404f75cc50c365df56617d08f8df7"} err="failed to get container status \"6a28afb738c9c2073bc05b9348b9c75a264404f75cc50c365df56617d08f8df7\": rpc error: code = NotFound desc = could not find container \"6a28afb738c9c2073bc05b9348b9c75a264404f75cc50c365df56617d08f8df7\": container with ID starting with 6a28afb738c9c2073bc05b9348b9c75a264404f75cc50c365df56617d08f8df7 not found: ID does not exist"
Mar 20 08:49:44.132827 master-0 kubenswrapper[7476]: I0320 08:49:44.132659 7476 scope.go:117] "RemoveContainer" containerID="4d5da7565de19239fcc0c39fa988fe40900703ed15ce4e08aa33c43306bfa249"
Mar 20 08:49:44.133339 master-0 kubenswrapper[7476]: E0320 08:49:44.133291 7476 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d5da7565de19239fcc0c39fa988fe40900703ed15ce4e08aa33c43306bfa249\": container with ID starting with 4d5da7565de19239fcc0c39fa988fe40900703ed15ce4e08aa33c43306bfa249 not found: ID does not exist" containerID="4d5da7565de19239fcc0c39fa988fe40900703ed15ce4e08aa33c43306bfa249"
Mar 20 08:49:44.133339 master-0 kubenswrapper[7476]: I0320 08:49:44.133333 7476 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d5da7565de19239fcc0c39fa988fe40900703ed15ce4e08aa33c43306bfa249"} err="failed to get container status \"4d5da7565de19239fcc0c39fa988fe40900703ed15ce4e08aa33c43306bfa249\": rpc error: code = NotFound desc = could not find container \"4d5da7565de19239fcc0c39fa988fe40900703ed15ce4e08aa33c43306bfa249\": container with ID starting with 4d5da7565de19239fcc0c39fa988fe40900703ed15ce4e08aa33c43306bfa249 not found: ID does not exist"
Mar 20 08:49:44.133570 master-0 kubenswrapper[7476]: I0320 08:49:44.133355 7476 scope.go:117] "RemoveContainer" containerID="c247335e00ed8c2dbcd12d654ce2b480b9f5ed4f4f141fa661641d6980a82c7e"
Mar 20 08:49:44.133849 master-0 kubenswrapper[7476]: E0320 08:49:44.133805 7476 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c247335e00ed8c2dbcd12d654ce2b480b9f5ed4f4f141fa661641d6980a82c7e\": container with ID starting with c247335e00ed8c2dbcd12d654ce2b480b9f5ed4f4f141fa661641d6980a82c7e not found: ID does not exist" containerID="c247335e00ed8c2dbcd12d654ce2b480b9f5ed4f4f141fa661641d6980a82c7e"
Mar 20 08:49:44.133987 master-0 kubenswrapper[7476]: I0320 08:49:44.133843 7476 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c247335e00ed8c2dbcd12d654ce2b480b9f5ed4f4f141fa661641d6980a82c7e"} err="failed to get container status \"c247335e00ed8c2dbcd12d654ce2b480b9f5ed4f4f141fa661641d6980a82c7e\": rpc error: code = NotFound desc = could not find container \"c247335e00ed8c2dbcd12d654ce2b480b9f5ed4f4f141fa661641d6980a82c7e\": container with ID starting with c247335e00ed8c2dbcd12d654ce2b480b9f5ed4f4f141fa661641d6980a82c7e not found: ID does not exist"
Mar 20 08:49:44.133987 master-0 kubenswrapper[7476]: I0320 08:49:44.133870 7476 scope.go:117] "RemoveContainer" containerID="f78e40298df65c394b57075117fe7f98a0fc45e73d30d6562f972d2e2a4f82d6"
Mar 20 08:49:44.134324 master-0 kubenswrapper[7476]: I0320 08:49:44.134287 7476 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f78e40298df65c394b57075117fe7f98a0fc45e73d30d6562f972d2e2a4f82d6"} err="failed to get container status \"f78e40298df65c394b57075117fe7f98a0fc45e73d30d6562f972d2e2a4f82d6\": rpc error: code = NotFound desc = could not find container \"f78e40298df65c394b57075117fe7f98a0fc45e73d30d6562f972d2e2a4f82d6\": container with ID starting with f78e40298df65c394b57075117fe7f98a0fc45e73d30d6562f972d2e2a4f82d6 not found: ID does not exist"
Mar 20 08:49:44.134324 master-0 kubenswrapper[7476]: I0320 08:49:44.134314 7476 scope.go:117] "RemoveContainer" containerID="2c6110300fa54d6ed375548abc413fdc3f31d0ebf2371316f5aaf0aa6152114c"
Mar 20 08:49:44.134762 master-0 kubenswrapper[7476]: I0320 08:49:44.134720 7476 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c6110300fa54d6ed375548abc413fdc3f31d0ebf2371316f5aaf0aa6152114c"} err="failed to get container status \"2c6110300fa54d6ed375548abc413fdc3f31d0ebf2371316f5aaf0aa6152114c\": rpc error: code = NotFound desc = could not find container \"2c6110300fa54d6ed375548abc413fdc3f31d0ebf2371316f5aaf0aa6152114c\": container with ID starting with 2c6110300fa54d6ed375548abc413fdc3f31d0ebf2371316f5aaf0aa6152114c not found: ID does not exist"
Mar 20 08:49:44.134762 master-0 kubenswrapper[7476]: I0320 08:49:44.134753 7476 scope.go:117] "RemoveContainer" containerID="ffa42243f2a40a0ca4a5329f1f45102e1878a8f1a80cf13cc28b8ea97fa30941"
Mar 20 08:49:44.135243 master-0 kubenswrapper[7476]: I0320 08:49:44.135203 7476 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffa42243f2a40a0ca4a5329f1f45102e1878a8f1a80cf13cc28b8ea97fa30941"} err="failed to get container status \"ffa42243f2a40a0ca4a5329f1f45102e1878a8f1a80cf13cc28b8ea97fa30941\": rpc error: code = NotFound desc = could not find container \"ffa42243f2a40a0ca4a5329f1f45102e1878a8f1a80cf13cc28b8ea97fa30941\": container with ID starting with ffa42243f2a40a0ca4a5329f1f45102e1878a8f1a80cf13cc28b8ea97fa30941 not found: ID does not exist"
Mar 20 08:49:44.135243 master-0 kubenswrapper[7476]: I0320 08:49:44.135233 7476 scope.go:117] "RemoveContainer" containerID="f63f62f8a7067ea3a407516779664a4cf9407bbbc72a0354b7960b7a56a2eeb2"
Mar 20 08:49:44.135664 master-0 kubenswrapper[7476]: I0320 08:49:44.135622 7476 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f63f62f8a7067ea3a407516779664a4cf9407bbbc72a0354b7960b7a56a2eeb2"} err="failed to get container status \"f63f62f8a7067ea3a407516779664a4cf9407bbbc72a0354b7960b7a56a2eeb2\": rpc error: code = NotFound desc = could not find container \"f63f62f8a7067ea3a407516779664a4cf9407bbbc72a0354b7960b7a56a2eeb2\": container with ID starting with f63f62f8a7067ea3a407516779664a4cf9407bbbc72a0354b7960b7a56a2eeb2 not found: ID does not exist"
Mar 20 08:49:44.135664 master-0 kubenswrapper[7476]: I0320 08:49:44.135654 7476 scope.go:117] "RemoveContainer" containerID="6a28afb738c9c2073bc05b9348b9c75a264404f75cc50c365df56617d08f8df7"
Mar 20 08:49:44.136092 master-0 kubenswrapper[7476]: I0320 08:49:44.136050 7476 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a28afb738c9c2073bc05b9348b9c75a264404f75cc50c365df56617d08f8df7"} err="failed to get container status \"6a28afb738c9c2073bc05b9348b9c75a264404f75cc50c365df56617d08f8df7\": rpc error: code = NotFound desc = could not find container \"6a28afb738c9c2073bc05b9348b9c75a264404f75cc50c365df56617d08f8df7\": container with ID starting with 6a28afb738c9c2073bc05b9348b9c75a264404f75cc50c365df56617d08f8df7 not found: ID does not exist"
Mar 20 08:49:44.136092 master-0 kubenswrapper[7476]: I0320 08:49:44.136079 7476 scope.go:117] "RemoveContainer" containerID="4d5da7565de19239fcc0c39fa988fe40900703ed15ce4e08aa33c43306bfa249"
Mar 20 08:49:44.136622 master-0 kubenswrapper[7476]: I0320 08:49:44.136585 7476 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d5da7565de19239fcc0c39fa988fe40900703ed15ce4e08aa33c43306bfa249"} err="failed to get container status \"4d5da7565de19239fcc0c39fa988fe40900703ed15ce4e08aa33c43306bfa249\": rpc error: code = NotFound desc = could not find container \"4d5da7565de19239fcc0c39fa988fe40900703ed15ce4e08aa33c43306bfa249\": container with ID starting with 4d5da7565de19239fcc0c39fa988fe40900703ed15ce4e08aa33c43306bfa249 not found: ID does not exist"
Mar 20 08:49:44.136622 master-0 kubenswrapper[7476]: I0320 08:49:44.136613 7476 scope.go:117] "RemoveContainer" containerID="c247335e00ed8c2dbcd12d654ce2b480b9f5ed4f4f141fa661641d6980a82c7e"
Mar 20 08:49:44.137203 master-0 kubenswrapper[7476]: I0320 08:49:44.137161 7476 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c247335e00ed8c2dbcd12d654ce2b480b9f5ed4f4f141fa661641d6980a82c7e"} err="failed to get container status \"c247335e00ed8c2dbcd12d654ce2b480b9f5ed4f4f141fa661641d6980a82c7e\": rpc error: code = NotFound desc = could not find container \"c247335e00ed8c2dbcd12d654ce2b480b9f5ed4f4f141fa661641d6980a82c7e\": container with ID starting with c247335e00ed8c2dbcd12d654ce2b480b9f5ed4f4f141fa661641d6980a82c7e not found: ID does not exist"
Mar 20 08:49:44.137203 master-0 kubenswrapper[7476]: I0320 08:49:44.137190 7476 scope.go:117] "RemoveContainer" containerID="f78e40298df65c394b57075117fe7f98a0fc45e73d30d6562f972d2e2a4f82d6"
Mar 20 08:49:44.137646 master-0 kubenswrapper[7476]: I0320 08:49:44.137608 7476 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f78e40298df65c394b57075117fe7f98a0fc45e73d30d6562f972d2e2a4f82d6"} err="failed to get container status \"f78e40298df65c394b57075117fe7f98a0fc45e73d30d6562f972d2e2a4f82d6\": rpc error: code = NotFound desc = could not find container \"f78e40298df65c394b57075117fe7f98a0fc45e73d30d6562f972d2e2a4f82d6\": container with ID starting with f78e40298df65c394b57075117fe7f98a0fc45e73d30d6562f972d2e2a4f82d6 not found: ID does not exist"
Mar 20 08:49:44.137646 master-0 kubenswrapper[7476]: I0320 08:49:44.137634 7476 scope.go:117] "RemoveContainer" containerID="2c6110300fa54d6ed375548abc413fdc3f31d0ebf2371316f5aaf0aa6152114c"
Mar 20 08:49:44.138015 master-0 kubenswrapper[7476]: I0320 08:49:44.137980 7476 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c6110300fa54d6ed375548abc413fdc3f31d0ebf2371316f5aaf0aa6152114c"} err="failed to get container status \"2c6110300fa54d6ed375548abc413fdc3f31d0ebf2371316f5aaf0aa6152114c\": rpc error: code = NotFound desc = could not find container \"2c6110300fa54d6ed375548abc413fdc3f31d0ebf2371316f5aaf0aa6152114c\": container with ID starting with 2c6110300fa54d6ed375548abc413fdc3f31d0ebf2371316f5aaf0aa6152114c not found: ID does not exist"
Mar 20 08:49:44.138015 master-0 kubenswrapper[7476]: I0320 08:49:44.138011 7476 scope.go:117] "RemoveContainer" containerID="ffa42243f2a40a0ca4a5329f1f45102e1878a8f1a80cf13cc28b8ea97fa30941"
Mar 20 08:49:44.138592 master-0 kubenswrapper[7476]: I0320 08:49:44.138554 7476 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffa42243f2a40a0ca4a5329f1f45102e1878a8f1a80cf13cc28b8ea97fa30941"} err="failed to get container
status \"ffa42243f2a40a0ca4a5329f1f45102e1878a8f1a80cf13cc28b8ea97fa30941\": rpc error: code = NotFound desc = could not find container \"ffa42243f2a40a0ca4a5329f1f45102e1878a8f1a80cf13cc28b8ea97fa30941\": container with ID starting with ffa42243f2a40a0ca4a5329f1f45102e1878a8f1a80cf13cc28b8ea97fa30941 not found: ID does not exist" Mar 20 08:49:44.138592 master-0 kubenswrapper[7476]: I0320 08:49:44.138582 7476 scope.go:117] "RemoveContainer" containerID="f63f62f8a7067ea3a407516779664a4cf9407bbbc72a0354b7960b7a56a2eeb2" Mar 20 08:49:44.138892 master-0 kubenswrapper[7476]: I0320 08:49:44.138855 7476 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f63f62f8a7067ea3a407516779664a4cf9407bbbc72a0354b7960b7a56a2eeb2"} err="failed to get container status \"f63f62f8a7067ea3a407516779664a4cf9407bbbc72a0354b7960b7a56a2eeb2\": rpc error: code = NotFound desc = could not find container \"f63f62f8a7067ea3a407516779664a4cf9407bbbc72a0354b7960b7a56a2eeb2\": container with ID starting with f63f62f8a7067ea3a407516779664a4cf9407bbbc72a0354b7960b7a56a2eeb2 not found: ID does not exist" Mar 20 08:49:44.138892 master-0 kubenswrapper[7476]: I0320 08:49:44.138882 7476 scope.go:117] "RemoveContainer" containerID="6a28afb738c9c2073bc05b9348b9c75a264404f75cc50c365df56617d08f8df7" Mar 20 08:49:44.139189 master-0 kubenswrapper[7476]: I0320 08:49:44.139153 7476 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a28afb738c9c2073bc05b9348b9c75a264404f75cc50c365df56617d08f8df7"} err="failed to get container status \"6a28afb738c9c2073bc05b9348b9c75a264404f75cc50c365df56617d08f8df7\": rpc error: code = NotFound desc = could not find container \"6a28afb738c9c2073bc05b9348b9c75a264404f75cc50c365df56617d08f8df7\": container with ID starting with 6a28afb738c9c2073bc05b9348b9c75a264404f75cc50c365df56617d08f8df7 not found: ID does not exist" Mar 20 08:49:44.139189 master-0 kubenswrapper[7476]: I0320 08:49:44.139183 
7476 scope.go:117] "RemoveContainer" containerID="4d5da7565de19239fcc0c39fa988fe40900703ed15ce4e08aa33c43306bfa249" Mar 20 08:49:44.139507 master-0 kubenswrapper[7476]: I0320 08:49:44.139474 7476 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d5da7565de19239fcc0c39fa988fe40900703ed15ce4e08aa33c43306bfa249"} err="failed to get container status \"4d5da7565de19239fcc0c39fa988fe40900703ed15ce4e08aa33c43306bfa249\": rpc error: code = NotFound desc = could not find container \"4d5da7565de19239fcc0c39fa988fe40900703ed15ce4e08aa33c43306bfa249\": container with ID starting with 4d5da7565de19239fcc0c39fa988fe40900703ed15ce4e08aa33c43306bfa249 not found: ID does not exist" Mar 20 08:49:44.139507 master-0 kubenswrapper[7476]: I0320 08:49:44.139500 7476 scope.go:117] "RemoveContainer" containerID="c247335e00ed8c2dbcd12d654ce2b480b9f5ed4f4f141fa661641d6980a82c7e" Mar 20 08:49:44.139797 master-0 kubenswrapper[7476]: I0320 08:49:44.139754 7476 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c247335e00ed8c2dbcd12d654ce2b480b9f5ed4f4f141fa661641d6980a82c7e"} err="failed to get container status \"c247335e00ed8c2dbcd12d654ce2b480b9f5ed4f4f141fa661641d6980a82c7e\": rpc error: code = NotFound desc = could not find container \"c247335e00ed8c2dbcd12d654ce2b480b9f5ed4f4f141fa661641d6980a82c7e\": container with ID starting with c247335e00ed8c2dbcd12d654ce2b480b9f5ed4f4f141fa661641d6980a82c7e not found: ID does not exist" Mar 20 08:49:44.139797 master-0 kubenswrapper[7476]: I0320 08:49:44.139781 7476 scope.go:117] "RemoveContainer" containerID="f78e40298df65c394b57075117fe7f98a0fc45e73d30d6562f972d2e2a4f82d6" Mar 20 08:49:44.140902 master-0 kubenswrapper[7476]: I0320 08:49:44.140664 7476 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f78e40298df65c394b57075117fe7f98a0fc45e73d30d6562f972d2e2a4f82d6"} err="failed to get container status 
\"f78e40298df65c394b57075117fe7f98a0fc45e73d30d6562f972d2e2a4f82d6\": rpc error: code = NotFound desc = could not find container \"f78e40298df65c394b57075117fe7f98a0fc45e73d30d6562f972d2e2a4f82d6\": container with ID starting with f78e40298df65c394b57075117fe7f98a0fc45e73d30d6562f972d2e2a4f82d6 not found: ID does not exist" Mar 20 08:49:44.140902 master-0 kubenswrapper[7476]: I0320 08:49:44.140696 7476 scope.go:117] "RemoveContainer" containerID="2c6110300fa54d6ed375548abc413fdc3f31d0ebf2371316f5aaf0aa6152114c" Mar 20 08:49:44.141410 master-0 kubenswrapper[7476]: I0320 08:49:44.140986 7476 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c6110300fa54d6ed375548abc413fdc3f31d0ebf2371316f5aaf0aa6152114c"} err="failed to get container status \"2c6110300fa54d6ed375548abc413fdc3f31d0ebf2371316f5aaf0aa6152114c\": rpc error: code = NotFound desc = could not find container \"2c6110300fa54d6ed375548abc413fdc3f31d0ebf2371316f5aaf0aa6152114c\": container with ID starting with 2c6110300fa54d6ed375548abc413fdc3f31d0ebf2371316f5aaf0aa6152114c not found: ID does not exist" Mar 20 08:49:44.141410 master-0 kubenswrapper[7476]: I0320 08:49:44.141011 7476 scope.go:117] "RemoveContainer" containerID="ffa42243f2a40a0ca4a5329f1f45102e1878a8f1a80cf13cc28b8ea97fa30941" Mar 20 08:49:44.141410 master-0 kubenswrapper[7476]: I0320 08:49:44.141274 7476 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffa42243f2a40a0ca4a5329f1f45102e1878a8f1a80cf13cc28b8ea97fa30941"} err="failed to get container status \"ffa42243f2a40a0ca4a5329f1f45102e1878a8f1a80cf13cc28b8ea97fa30941\": rpc error: code = NotFound desc = could not find container \"ffa42243f2a40a0ca4a5329f1f45102e1878a8f1a80cf13cc28b8ea97fa30941\": container with ID starting with ffa42243f2a40a0ca4a5329f1f45102e1878a8f1a80cf13cc28b8ea97fa30941 not found: ID does not exist" Mar 20 08:49:44.141410 master-0 kubenswrapper[7476]: I0320 08:49:44.141302 7476 
scope.go:117] "RemoveContainer" containerID="f63f62f8a7067ea3a407516779664a4cf9407bbbc72a0354b7960b7a56a2eeb2" Mar 20 08:49:44.141821 master-0 kubenswrapper[7476]: I0320 08:49:44.141777 7476 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f63f62f8a7067ea3a407516779664a4cf9407bbbc72a0354b7960b7a56a2eeb2"} err="failed to get container status \"f63f62f8a7067ea3a407516779664a4cf9407bbbc72a0354b7960b7a56a2eeb2\": rpc error: code = NotFound desc = could not find container \"f63f62f8a7067ea3a407516779664a4cf9407bbbc72a0354b7960b7a56a2eeb2\": container with ID starting with f63f62f8a7067ea3a407516779664a4cf9407bbbc72a0354b7960b7a56a2eeb2 not found: ID does not exist" Mar 20 08:49:44.141821 master-0 kubenswrapper[7476]: I0320 08:49:44.141818 7476 scope.go:117] "RemoveContainer" containerID="6a28afb738c9c2073bc05b9348b9c75a264404f75cc50c365df56617d08f8df7" Mar 20 08:49:44.142325 master-0 kubenswrapper[7476]: I0320 08:49:44.142161 7476 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a28afb738c9c2073bc05b9348b9c75a264404f75cc50c365df56617d08f8df7"} err="failed to get container status \"6a28afb738c9c2073bc05b9348b9c75a264404f75cc50c365df56617d08f8df7\": rpc error: code = NotFound desc = could not find container \"6a28afb738c9c2073bc05b9348b9c75a264404f75cc50c365df56617d08f8df7\": container with ID starting with 6a28afb738c9c2073bc05b9348b9c75a264404f75cc50c365df56617d08f8df7 not found: ID does not exist" Mar 20 08:49:44.142325 master-0 kubenswrapper[7476]: I0320 08:49:44.142187 7476 scope.go:117] "RemoveContainer" containerID="4d5da7565de19239fcc0c39fa988fe40900703ed15ce4e08aa33c43306bfa249" Mar 20 08:49:44.142629 master-0 kubenswrapper[7476]: I0320 08:49:44.142591 7476 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d5da7565de19239fcc0c39fa988fe40900703ed15ce4e08aa33c43306bfa249"} err="failed to get container status 
\"4d5da7565de19239fcc0c39fa988fe40900703ed15ce4e08aa33c43306bfa249\": rpc error: code = NotFound desc = could not find container \"4d5da7565de19239fcc0c39fa988fe40900703ed15ce4e08aa33c43306bfa249\": container with ID starting with 4d5da7565de19239fcc0c39fa988fe40900703ed15ce4e08aa33c43306bfa249 not found: ID does not exist" Mar 20 08:49:44.142629 master-0 kubenswrapper[7476]: I0320 08:49:44.142619 7476 scope.go:117] "RemoveContainer" containerID="c247335e00ed8c2dbcd12d654ce2b480b9f5ed4f4f141fa661641d6980a82c7e" Mar 20 08:49:44.143136 master-0 kubenswrapper[7476]: I0320 08:49:44.143077 7476 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c247335e00ed8c2dbcd12d654ce2b480b9f5ed4f4f141fa661641d6980a82c7e"} err="failed to get container status \"c247335e00ed8c2dbcd12d654ce2b480b9f5ed4f4f141fa661641d6980a82c7e\": rpc error: code = NotFound desc = could not find container \"c247335e00ed8c2dbcd12d654ce2b480b9f5ed4f4f141fa661641d6980a82c7e\": container with ID starting with c247335e00ed8c2dbcd12d654ce2b480b9f5ed4f4f141fa661641d6980a82c7e not found: ID does not exist" Mar 20 08:49:44.182513 master-0 kubenswrapper[7476]: I0320 08:49:44.182444 7476 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 20 08:49:44.209798 master-0 kubenswrapper[7476]: I0320 08:49:44.209740 7476 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 20 08:49:44.211471 master-0 kubenswrapper[7476]: W0320 08:49:44.211412 7476 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e7a82869988463543d3d8dd1f0b5fe3.slice/crio-26ead2d551cb5e798df939fca56e343afb667dfeb7405f006d41371d076ea7ef WatchSource:0}: Error finding container 26ead2d551cb5e798df939fca56e343afb667dfeb7405f006d41371d076ea7ef: Status 404 returned error can't find the container with id 26ead2d551cb5e798df939fca56e343afb667dfeb7405f006d41371d076ea7ef Mar 20 08:49:44.231018 master-0 kubenswrapper[7476]: E0320 08:49:44.230785 7476 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-master-0.189e807e8bc322f2 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-master-0,UID:8e7a82869988463543d3d8dd1f0b5fe3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 08:49:44.229610226 +0000 UTC m=+865.198378762,LastTimestamp:2026-03-20 08:49:44.229610226 +0000 UTC m=+865.198378762,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 20 08:49:44.243643 master-0 kubenswrapper[7476]: W0320 08:49:44.243575 7476 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb45ea2ef1cf2bc9d1d994d6538ae0a64.slice/crio-407f7a172ca7923af3036a3a5081e3f6bc925e32d3851562fb93dfcb79785b17 WatchSource:0}: Error finding container 407f7a172ca7923af3036a3a5081e3f6bc925e32d3851562fb93dfcb79785b17: Status 404 returned error can't find the container with id 407f7a172ca7923af3036a3a5081e3f6bc925e32d3851562fb93dfcb79785b17 Mar 20 08:49:44.848844 master-0 kubenswrapper[7476]: I0320 08:49:44.848763 7476 generic.go:334] "Generic (PLEG): container finished" podID="49fac1b46a11e49501805e891baae4a9" containerID="21f174bcce18cc2a151013015a2ad5940b94d6576aae52b77f0f5fec8f803898" exitCode=0 Mar 20 08:49:44.852311 master-0 kubenswrapper[7476]: I0320 08:49:44.851819 7476 generic.go:334] "Generic (PLEG): container finished" podID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerID="019791132b7e834bce74e8041e55c6a38294bf9b8b3e87df50daf0d5b306fb4e" exitCode=0 Mar 20 08:49:44.852311 master-0 kubenswrapper[7476]: I0320 08:49:44.851908 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"b45ea2ef1cf2bc9d1d994d6538ae0a64","Type":"ContainerDied","Data":"019791132b7e834bce74e8041e55c6a38294bf9b8b3e87df50daf0d5b306fb4e"} Mar 20 08:49:44.852311 master-0 kubenswrapper[7476]: I0320 08:49:44.851940 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"b45ea2ef1cf2bc9d1d994d6538ae0a64","Type":"ContainerStarted","Data":"407f7a172ca7923af3036a3a5081e3f6bc925e32d3851562fb93dfcb79785b17"} Mar 20 08:49:44.853204 master-0 kubenswrapper[7476]: I0320 08:49:44.853172 7476 status_manager.go:851] "Failed to get status for pod" podUID="8c753d068f364b16e3aeb8396b7d9f33" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 20 08:49:44.853513 master-0 kubenswrapper[7476]: E0320 08:49:44.853388 7476 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 20 08:49:44.853856 master-0 kubenswrapper[7476]: I0320 08:49:44.853830 7476 generic.go:334] "Generic (PLEG): container finished" podID="9775cc27-53b9-4d21-a98b-84b39ada32ee" containerID="8b5711cce3fb17d8c5298b374ea763f137a6631ab7f8f0ff687f48b345639df0" exitCode=0 Mar 20 08:49:44.853930 master-0 kubenswrapper[7476]: I0320 08:49:44.853887 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"9775cc27-53b9-4d21-a98b-84b39ada32ee","Type":"ContainerDied","Data":"8b5711cce3fb17d8c5298b374ea763f137a6631ab7f8f0ff687f48b345639df0"} Mar 20 08:49:44.855023 master-0 kubenswrapper[7476]: I0320 08:49:44.854622 7476 status_manager.go:851] "Failed to get status for pod" podUID="9775cc27-53b9-4d21-a98b-84b39ada32ee" pod="openshift-kube-apiserver/installer-3-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 20 08:49:44.855471 master-0 kubenswrapper[7476]: I0320 08:49:44.855375 7476 status_manager.go:851] "Failed to get status for pod" podUID="8c753d068f364b16e3aeb8396b7d9f33" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" 
Mar 20 08:49:44.858997 master-0 kubenswrapper[7476]: I0320 08:49:44.858966 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"8e7a82869988463543d3d8dd1f0b5fe3","Type":"ContainerStarted","Data":"fb6b347301f6182f233869606006eb4e9e5926eb68b32a1efb774d5c4156bcb0"} Mar 20 08:49:44.859092 master-0 kubenswrapper[7476]: I0320 08:49:44.859005 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"8e7a82869988463543d3d8dd1f0b5fe3","Type":"ContainerStarted","Data":"26ead2d551cb5e798df939fca56e343afb667dfeb7405f006d41371d076ea7ef"} Mar 20 08:49:44.862324 master-0 kubenswrapper[7476]: I0320 08:49:44.860699 7476 status_manager.go:851] "Failed to get status for pod" podUID="9775cc27-53b9-4d21-a98b-84b39ada32ee" pod="openshift-kube-apiserver/installer-3-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 20 08:49:44.862324 master-0 kubenswrapper[7476]: E0320 08:49:44.860811 7476 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 20 08:49:44.862324 master-0 kubenswrapper[7476]: I0320 08:49:44.861353 7476 status_manager.go:851] "Failed to get status for pod" podUID="8c753d068f364b16e3aeb8396b7d9f33" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 20 08:49:45.173148 master-0 kubenswrapper[7476]: I0320 08:49:45.172635 
7476 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Mar 20 08:49:45.174618 master-0 kubenswrapper[7476]: I0320 08:49:45.173565 7476 status_manager.go:851] "Failed to get status for pod" podUID="9775cc27-53b9-4d21-a98b-84b39ada32ee" pod="openshift-kube-apiserver/installer-3-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 20 08:49:45.174618 master-0 kubenswrapper[7476]: I0320 08:49:45.174065 7476 status_manager.go:851] "Failed to get status for pod" podUID="8c753d068f364b16e3aeb8396b7d9f33" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 20 08:49:45.174618 master-0 kubenswrapper[7476]: I0320 08:49:45.174523 7476 status_manager.go:851] "Failed to get status for pod" podUID="75cef5aa-93e6-4b8b-9ab1-06809e85883a" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-3-retry-1-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 20 08:49:45.366392 master-0 kubenswrapper[7476]: I0320 08:49:45.366332 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/75cef5aa-93e6-4b8b-9ab1-06809e85883a-kube-api-access\") pod \"75cef5aa-93e6-4b8b-9ab1-06809e85883a\" (UID: \"75cef5aa-93e6-4b8b-9ab1-06809e85883a\") " Mar 20 08:49:45.366637 master-0 kubenswrapper[7476]: I0320 08:49:45.366420 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" 
(UniqueName: \"kubernetes.io/host-path/75cef5aa-93e6-4b8b-9ab1-06809e85883a-var-lock\") pod \"75cef5aa-93e6-4b8b-9ab1-06809e85883a\" (UID: \"75cef5aa-93e6-4b8b-9ab1-06809e85883a\") " Mar 20 08:49:45.366637 master-0 kubenswrapper[7476]: I0320 08:49:45.366499 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/75cef5aa-93e6-4b8b-9ab1-06809e85883a-kubelet-dir\") pod \"75cef5aa-93e6-4b8b-9ab1-06809e85883a\" (UID: \"75cef5aa-93e6-4b8b-9ab1-06809e85883a\") " Mar 20 08:49:45.366843 master-0 kubenswrapper[7476]: I0320 08:49:45.366794 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75cef5aa-93e6-4b8b-9ab1-06809e85883a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "75cef5aa-93e6-4b8b-9ab1-06809e85883a" (UID: "75cef5aa-93e6-4b8b-9ab1-06809e85883a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:49:45.367134 master-0 kubenswrapper[7476]: I0320 08:49:45.367065 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75cef5aa-93e6-4b8b-9ab1-06809e85883a-var-lock" (OuterVolumeSpecName: "var-lock") pod "75cef5aa-93e6-4b8b-9ab1-06809e85883a" (UID: "75cef5aa-93e6-4b8b-9ab1-06809e85883a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:49:45.371429 master-0 kubenswrapper[7476]: I0320 08:49:45.369607 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75cef5aa-93e6-4b8b-9ab1-06809e85883a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "75cef5aa-93e6-4b8b-9ab1-06809e85883a" (UID: "75cef5aa-93e6-4b8b-9ab1-06809e85883a"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:49:45.468527 master-0 kubenswrapper[7476]: I0320 08:49:45.468443 7476 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/75cef5aa-93e6-4b8b-9ab1-06809e85883a-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 20 08:49:45.468527 master-0 kubenswrapper[7476]: I0320 08:49:45.468500 7476 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/75cef5aa-93e6-4b8b-9ab1-06809e85883a-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 20 08:49:45.468527 master-0 kubenswrapper[7476]: I0320 08:49:45.468523 7476 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/75cef5aa-93e6-4b8b-9ab1-06809e85883a-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 20 08:49:45.885349 master-0 kubenswrapper[7476]: I0320 08:49:45.885208 7476 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Mar 20 08:49:45.885802 master-0 kubenswrapper[7476]: I0320 08:49:45.885212 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" event={"ID":"75cef5aa-93e6-4b8b-9ab1-06809e85883a","Type":"ContainerDied","Data":"1f303ba8c534fdd01d1d1d736d392f617339c8123f70b84cbefb43516aed9bd0"} Mar 20 08:49:45.885802 master-0 kubenswrapper[7476]: I0320 08:49:45.885491 7476 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f303ba8c534fdd01d1d1d736d392f617339c8123f70b84cbefb43516aed9bd0" Mar 20 08:49:45.940987 master-0 kubenswrapper[7476]: I0320 08:49:45.936734 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"b45ea2ef1cf2bc9d1d994d6538ae0a64","Type":"ContainerStarted","Data":"1465b5735787fe0ebf93f35e9b67fbaf114c0045019bf4f5aac3e0ae5fa40cfd"} Mar 20 08:49:45.940987 master-0 kubenswrapper[7476]: I0320 08:49:45.936779 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"b45ea2ef1cf2bc9d1d994d6538ae0a64","Type":"ContainerStarted","Data":"51f1e4dcfe0151d1f4fbba6c863fbca95761008bf8818dbf6e90c138860dd17c"} Mar 20 08:49:45.940987 master-0 kubenswrapper[7476]: I0320 08:49:45.936789 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"b45ea2ef1cf2bc9d1d994d6538ae0a64","Type":"ContainerStarted","Data":"9dadbe10b12b1a20a0d3ab6712abcd8f210f2db7035cefe893d40772ad30265f"} Mar 20 08:49:46.322403 master-0 kubenswrapper[7476]: I0320 08:49:46.322354 7476 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Mar 20 08:49:46.488912 master-0 kubenswrapper[7476]: I0320 08:49:46.488785 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9775cc27-53b9-4d21-a98b-84b39ada32ee-kube-api-access\") pod \"9775cc27-53b9-4d21-a98b-84b39ada32ee\" (UID: \"9775cc27-53b9-4d21-a98b-84b39ada32ee\") " Mar 20 08:49:46.489110 master-0 kubenswrapper[7476]: I0320 08:49:46.488995 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9775cc27-53b9-4d21-a98b-84b39ada32ee-var-lock\") pod \"9775cc27-53b9-4d21-a98b-84b39ada32ee\" (UID: \"9775cc27-53b9-4d21-a98b-84b39ada32ee\") " Mar 20 08:49:46.489110 master-0 kubenswrapper[7476]: I0320 08:49:46.489092 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9775cc27-53b9-4d21-a98b-84b39ada32ee-kubelet-dir\") pod \"9775cc27-53b9-4d21-a98b-84b39ada32ee\" (UID: \"9775cc27-53b9-4d21-a98b-84b39ada32ee\") " Mar 20 08:49:46.489615 master-0 kubenswrapper[7476]: I0320 08:49:46.489579 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9775cc27-53b9-4d21-a98b-84b39ada32ee-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "9775cc27-53b9-4d21-a98b-84b39ada32ee" (UID: "9775cc27-53b9-4d21-a98b-84b39ada32ee"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:49:46.490373 master-0 kubenswrapper[7476]: I0320 08:49:46.490316 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9775cc27-53b9-4d21-a98b-84b39ada32ee-var-lock" (OuterVolumeSpecName: "var-lock") pod "9775cc27-53b9-4d21-a98b-84b39ada32ee" (UID: "9775cc27-53b9-4d21-a98b-84b39ada32ee"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:49:46.493087 master-0 kubenswrapper[7476]: I0320 08:49:46.493022 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9775cc27-53b9-4d21-a98b-84b39ada32ee-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "9775cc27-53b9-4d21-a98b-84b39ada32ee" (UID: "9775cc27-53b9-4d21-a98b-84b39ada32ee"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:49:46.590347 master-0 kubenswrapper[7476]: I0320 08:49:46.590258 7476 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9775cc27-53b9-4d21-a98b-84b39ada32ee-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 20 08:49:46.590347 master-0 kubenswrapper[7476]: I0320 08:49:46.590336 7476 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9775cc27-53b9-4d21-a98b-84b39ada32ee-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 20 08:49:46.590347 master-0 kubenswrapper[7476]: I0320 08:49:46.590353 7476 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9775cc27-53b9-4d21-a98b-84b39ada32ee-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 20 08:49:46.698917 master-0 kubenswrapper[7476]: I0320 08:49:46.698863 7476 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 20 08:49:46.896910 master-0 kubenswrapper[7476]: I0320 08:49:46.896705 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-audit-dir\") pod \"49fac1b46a11e49501805e891baae4a9\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " Mar 20 08:49:46.896910 master-0 kubenswrapper[7476]: I0320 08:49:46.896768 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-ssl-certs-host\") pod \"49fac1b46a11e49501805e891baae4a9\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " Mar 20 08:49:46.896910 master-0 kubenswrapper[7476]: I0320 08:49:46.896804 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-logs\") pod \"49fac1b46a11e49501805e891baae4a9\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " Mar 20 08:49:46.896910 master-0 kubenswrapper[7476]: I0320 08:49:46.896812 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "49fac1b46a11e49501805e891baae4a9" (UID: "49fac1b46a11e49501805e891baae4a9"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:49:46.896910 master-0 kubenswrapper[7476]: I0320 08:49:46.896834 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-ssl-certs-host" (OuterVolumeSpecName: "ssl-certs-host") pod "49fac1b46a11e49501805e891baae4a9" (UID: "49fac1b46a11e49501805e891baae4a9"). InnerVolumeSpecName "ssl-certs-host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:49:46.896910 master-0 kubenswrapper[7476]: I0320 08:49:46.896851 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-config\") pod \"49fac1b46a11e49501805e891baae4a9\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " Mar 20 08:49:46.896910 master-0 kubenswrapper[7476]: I0320 08:49:46.896859 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-logs" (OuterVolumeSpecName: "logs") pod "49fac1b46a11e49501805e891baae4a9" (UID: "49fac1b46a11e49501805e891baae4a9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:49:46.896910 master-0 kubenswrapper[7476]: I0320 08:49:46.896897 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-etc-kubernetes-cloud\") pod \"49fac1b46a11e49501805e891baae4a9\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " Mar 20 08:49:46.898146 master-0 kubenswrapper[7476]: I0320 08:49:46.896934 7476 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-secrets\") pod \"49fac1b46a11e49501805e891baae4a9\" (UID: \"49fac1b46a11e49501805e891baae4a9\") " Mar 20 08:49:46.898146 master-0 kubenswrapper[7476]: I0320 08:49:46.896942 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-config" (OuterVolumeSpecName: "config") pod "49fac1b46a11e49501805e891baae4a9" (UID: "49fac1b46a11e49501805e891baae4a9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:49:46.898146 master-0 kubenswrapper[7476]: I0320 08:49:46.896948 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-etc-kubernetes-cloud" (OuterVolumeSpecName: "etc-kubernetes-cloud") pod "49fac1b46a11e49501805e891baae4a9" (UID: "49fac1b46a11e49501805e891baae4a9"). InnerVolumeSpecName "etc-kubernetes-cloud". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:49:46.898146 master-0 kubenswrapper[7476]: I0320 08:49:46.897035 7476 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-secrets" (OuterVolumeSpecName: "secrets") pod "49fac1b46a11e49501805e891baae4a9" (UID: "49fac1b46a11e49501805e891baae4a9"). InnerVolumeSpecName "secrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:49:46.898146 master-0 kubenswrapper[7476]: I0320 08:49:46.897150 7476 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-secrets\") on node \"master-0\" DevicePath \"\"" Mar 20 08:49:46.898146 master-0 kubenswrapper[7476]: I0320 08:49:46.897160 7476 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-audit-dir\") on node \"master-0\" DevicePath \"\"" Mar 20 08:49:46.898146 master-0 kubenswrapper[7476]: I0320 08:49:46.897169 7476 reconciler_common.go:293] "Volume detached for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-ssl-certs-host\") on node \"master-0\" DevicePath \"\"" Mar 20 08:49:46.898146 master-0 kubenswrapper[7476]: I0320 08:49:46.897178 7476 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-logs\") on node \"master-0\" 
DevicePath \"\"" Mar 20 08:49:46.898146 master-0 kubenswrapper[7476]: I0320 08:49:46.897186 7476 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-config\") on node \"master-0\" DevicePath \"\"" Mar 20 08:49:46.898146 master-0 kubenswrapper[7476]: I0320 08:49:46.897194 7476 reconciler_common.go:293] "Volume detached for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/49fac1b46a11e49501805e891baae4a9-etc-kubernetes-cloud\") on node \"master-0\" DevicePath \"\"" Mar 20 08:49:46.974473 master-0 kubenswrapper[7476]: I0320 08:49:46.973708 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"9775cc27-53b9-4d21-a98b-84b39ada32ee","Type":"ContainerDied","Data":"3b8a06244c2e0be584b6e088f930643d0f41b0d380a9aaaeb548ef7b6339ddb3"} Mar 20 08:49:46.974473 master-0 kubenswrapper[7476]: I0320 08:49:46.973748 7476 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b8a06244c2e0be584b6e088f930643d0f41b0d380a9aaaeb548ef7b6339ddb3" Mar 20 08:49:46.974473 master-0 kubenswrapper[7476]: I0320 08:49:46.973801 7476 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Mar 20 08:49:46.984397 master-0 kubenswrapper[7476]: I0320 08:49:46.984316 7476 generic.go:334] "Generic (PLEG): container finished" podID="49fac1b46a11e49501805e891baae4a9" containerID="996fe2d1381bcbb6636ef59ecc1b88b6322ed1e9bf66452efbeee5d66a74da5c" exitCode=0 Mar 20 08:49:46.984588 master-0 kubenswrapper[7476]: I0320 08:49:46.984465 7476 scope.go:117] "RemoveContainer" containerID="21f174bcce18cc2a151013015a2ad5940b94d6576aae52b77f0f5fec8f803898" Mar 20 08:49:46.984713 master-0 kubenswrapper[7476]: I0320 08:49:46.984679 7476 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 20 08:49:47.062677 master-0 kubenswrapper[7476]: I0320 08:49:47.059793 7476 scope.go:117] "RemoveContainer" containerID="996fe2d1381bcbb6636ef59ecc1b88b6322ed1e9bf66452efbeee5d66a74da5c" Mar 20 08:49:47.069147 master-0 kubenswrapper[7476]: I0320 08:49:47.069092 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"b45ea2ef1cf2bc9d1d994d6538ae0a64","Type":"ContainerStarted","Data":"4af0d14e2080acfab9b4be1c21f5c397bb2b57510a3ab1d14b3ae883125de902"} Mar 20 08:49:47.069147 master-0 kubenswrapper[7476]: I0320 08:49:47.069152 7476 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"b45ea2ef1cf2bc9d1d994d6538ae0a64","Type":"ContainerStarted","Data":"4d2ab3e2e2aad56bc3b150570678f8cd6a92538b5995dd1769b45ca1ca4842ed"} Mar 20 08:49:47.093852 master-0 kubenswrapper[7476]: I0320 08:49:47.093539 7476 scope.go:117] "RemoveContainer" containerID="505f9b181b16cda62a4db44fc15ff68ecb1cc5d03f0c388d0ae9c508776f0bb8" Mar 20 08:49:47.131203 master-0 kubenswrapper[7476]: I0320 08:49:47.131170 7476 scope.go:117] "RemoveContainer" containerID="21f174bcce18cc2a151013015a2ad5940b94d6576aae52b77f0f5fec8f803898" Mar 20 08:49:47.131703 master-0 kubenswrapper[7476]: E0320 08:49:47.131657 7476 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21f174bcce18cc2a151013015a2ad5940b94d6576aae52b77f0f5fec8f803898\": container with ID starting with 21f174bcce18cc2a151013015a2ad5940b94d6576aae52b77f0f5fec8f803898 not found: ID does not exist" containerID="21f174bcce18cc2a151013015a2ad5940b94d6576aae52b77f0f5fec8f803898" Mar 20 08:49:47.131759 master-0 kubenswrapper[7476]: I0320 08:49:47.131699 7476 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"21f174bcce18cc2a151013015a2ad5940b94d6576aae52b77f0f5fec8f803898"} err="failed to get container status \"21f174bcce18cc2a151013015a2ad5940b94d6576aae52b77f0f5fec8f803898\": rpc error: code = NotFound desc = could not find container \"21f174bcce18cc2a151013015a2ad5940b94d6576aae52b77f0f5fec8f803898\": container with ID starting with 21f174bcce18cc2a151013015a2ad5940b94d6576aae52b77f0f5fec8f803898 not found: ID does not exist" Mar 20 08:49:47.131759 master-0 kubenswrapper[7476]: I0320 08:49:47.131725 7476 scope.go:117] "RemoveContainer" containerID="996fe2d1381bcbb6636ef59ecc1b88b6322ed1e9bf66452efbeee5d66a74da5c" Mar 20 08:49:47.132164 master-0 kubenswrapper[7476]: E0320 08:49:47.132134 7476 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"996fe2d1381bcbb6636ef59ecc1b88b6322ed1e9bf66452efbeee5d66a74da5c\": container with ID starting with 996fe2d1381bcbb6636ef59ecc1b88b6322ed1e9bf66452efbeee5d66a74da5c not found: ID does not exist" containerID="996fe2d1381bcbb6636ef59ecc1b88b6322ed1e9bf66452efbeee5d66a74da5c" Mar 20 08:49:47.132232 master-0 kubenswrapper[7476]: I0320 08:49:47.132179 7476 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"996fe2d1381bcbb6636ef59ecc1b88b6322ed1e9bf66452efbeee5d66a74da5c"} err="failed to get container status \"996fe2d1381bcbb6636ef59ecc1b88b6322ed1e9bf66452efbeee5d66a74da5c\": rpc error: code = NotFound desc = could not find container \"996fe2d1381bcbb6636ef59ecc1b88b6322ed1e9bf66452efbeee5d66a74da5c\": container with ID starting with 996fe2d1381bcbb6636ef59ecc1b88b6322ed1e9bf66452efbeee5d66a74da5c not found: ID does not exist" Mar 20 08:49:47.132232 master-0 kubenswrapper[7476]: I0320 08:49:47.132207 7476 scope.go:117] "RemoveContainer" containerID="505f9b181b16cda62a4db44fc15ff68ecb1cc5d03f0c388d0ae9c508776f0bb8" Mar 20 08:49:47.132623 master-0 kubenswrapper[7476]: E0320 
08:49:47.132574 7476 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"505f9b181b16cda62a4db44fc15ff68ecb1cc5d03f0c388d0ae9c508776f0bb8\": container with ID starting with 505f9b181b16cda62a4db44fc15ff68ecb1cc5d03f0c388d0ae9c508776f0bb8 not found: ID does not exist" containerID="505f9b181b16cda62a4db44fc15ff68ecb1cc5d03f0c388d0ae9c508776f0bb8" Mar 20 08:49:47.132623 master-0 kubenswrapper[7476]: I0320 08:49:47.132599 7476 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"505f9b181b16cda62a4db44fc15ff68ecb1cc5d03f0c388d0ae9c508776f0bb8"} err="failed to get container status \"505f9b181b16cda62a4db44fc15ff68ecb1cc5d03f0c388d0ae9c508776f0bb8\": rpc error: code = NotFound desc = could not find container \"505f9b181b16cda62a4db44fc15ff68ecb1cc5d03f0c388d0ae9c508776f0bb8\": container with ID starting with 505f9b181b16cda62a4db44fc15ff68ecb1cc5d03f0c388d0ae9c508776f0bb8 not found: ID does not exist" Mar 20 08:49:47.248323 master-0 kubenswrapper[7476]: I0320 08:49:47.248224 7476 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49fac1b46a11e49501805e891baae4a9" path="/var/lib/kubelet/pods/49fac1b46a11e49501805e891baae4a9/volumes" Mar 20 08:49:47.248728 master-0 kubenswrapper[7476]: I0320 08:49:47.248706 7476 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="" Mar 20 08:49:48.034211 master-0 kubenswrapper[7476]: E0320 08:49:48.034154 7476 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4bc3fd775c8ec67b9c69f57ed6fb56798f5748c1c5c893e65ef38d518581c15e" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 20 08:49:48.035658 master-0 kubenswrapper[7476]: E0320 08:49:48.035618 7476 log.go:32] "ExecSync cmd from runtime 
service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4bc3fd775c8ec67b9c69f57ed6fb56798f5748c1c5c893e65ef38d518581c15e" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 20 08:49:48.037135 master-0 kubenswrapper[7476]: E0320 08:49:48.037078 7476 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4bc3fd775c8ec67b9c69f57ed6fb56798f5748c1c5c893e65ef38d518581c15e" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 20 08:49:48.037213 master-0 kubenswrapper[7476]: E0320 08:49:48.037145 7476 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-xj8x6" podUID="45b3c788-eb83-448a-bc60-90b8ace28382" containerName="kube-multus-additional-cni-plugins" Mar 20 08:49:49.763819 master-0 systemd[1]: Stopping Kubernetes Kubelet... Mar 20 08:49:49.822780 master-0 systemd[1]: kubelet.service: Deactivated successfully. Mar 20 08:49:49.823011 master-0 systemd[1]: Stopped Kubernetes Kubelet. Mar 20 08:49:49.833064 master-0 systemd[1]: kubelet.service: Consumed 2min 14.120s CPU time. Mar 20 08:49:49.854965 master-0 systemd[1]: Starting Kubernetes Kubelet... Mar 20 08:49:49.940906 master-0 kubenswrapper[27820]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 20 08:49:49.940906 master-0 kubenswrapper[27820]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. 
Mar 20 08:49:49.940906 master-0 kubenswrapper[27820]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 20 08:49:49.940906 master-0 kubenswrapper[27820]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 20 08:49:49.940906 master-0 kubenswrapper[27820]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 20 08:49:49.940906 master-0 kubenswrapper[27820]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 20 08:49:49.941558 master-0 kubenswrapper[27820]: I0320 08:49:49.940975 27820 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 20 08:49:49.943263 master-0 kubenswrapper[27820]: W0320 08:49:49.943229 27820 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 20 08:49:49.943263 master-0 kubenswrapper[27820]: W0320 08:49:49.943253 27820 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 20 08:49:49.943263 master-0 kubenswrapper[27820]: W0320 08:49:49.943262 27820 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 20 08:49:49.943263 master-0 kubenswrapper[27820]: W0320 08:49:49.943268 27820 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 20 08:49:49.943413 master-0 kubenswrapper[27820]: W0320 08:49:49.943287 27820 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 20 08:49:49.943413 master-0 kubenswrapper[27820]: W0320 08:49:49.943294 27820 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 20 08:49:49.943413 master-0 kubenswrapper[27820]: W0320 08:49:49.943299 27820 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 20 08:49:49.943413 master-0 kubenswrapper[27820]: W0320 08:49:49.943304 27820 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 20 08:49:49.943413 master-0 kubenswrapper[27820]: W0320 08:49:49.943309 27820 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 20 08:49:49.943413 master-0 kubenswrapper[27820]: W0320 08:49:49.943313 27820 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 20 08:49:49.943413 master-0 kubenswrapper[27820]: W0320 08:49:49.943317 27820 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 20 08:49:49.943413 master-0 kubenswrapper[27820]: W0320 08:49:49.943321 27820 feature_gate.go:330] 
unrecognized feature gate: InsightsConfig Mar 20 08:49:49.943413 master-0 kubenswrapper[27820]: W0320 08:49:49.943324 27820 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 20 08:49:49.943413 master-0 kubenswrapper[27820]: W0320 08:49:49.943328 27820 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 20 08:49:49.943413 master-0 kubenswrapper[27820]: W0320 08:49:49.943333 27820 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Mar 20 08:49:49.943413 master-0 kubenswrapper[27820]: W0320 08:49:49.943337 27820 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 20 08:49:49.943413 master-0 kubenswrapper[27820]: W0320 08:49:49.943340 27820 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 20 08:49:49.943413 master-0 kubenswrapper[27820]: W0320 08:49:49.943344 27820 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 20 08:49:49.943413 master-0 kubenswrapper[27820]: W0320 08:49:49.943348 27820 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 20 08:49:49.943413 master-0 kubenswrapper[27820]: W0320 08:49:49.943351 27820 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 20 08:49:49.943413 master-0 kubenswrapper[27820]: W0320 08:49:49.943356 27820 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 20 08:49:49.943413 master-0 kubenswrapper[27820]: W0320 08:49:49.943363 27820 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 20 08:49:49.943413 master-0 kubenswrapper[27820]: W0320 08:49:49.943371 27820 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 20 08:49:49.943413 master-0 kubenswrapper[27820]: W0320 08:49:49.943377 27820 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 20 08:49:49.943906 master-0 kubenswrapper[27820]: W0320 08:49:49.943382 27820 feature_gate.go:330] 
unrecognized feature gate: VSphereDriverConfiguration Mar 20 08:49:49.943906 master-0 kubenswrapper[27820]: W0320 08:49:49.943389 27820 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Mar 20 08:49:49.943906 master-0 kubenswrapper[27820]: W0320 08:49:49.943395 27820 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 20 08:49:49.943906 master-0 kubenswrapper[27820]: W0320 08:49:49.943402 27820 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 20 08:49:49.943906 master-0 kubenswrapper[27820]: W0320 08:49:49.943407 27820 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 20 08:49:49.943906 master-0 kubenswrapper[27820]: W0320 08:49:49.943412 27820 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 20 08:49:49.943906 master-0 kubenswrapper[27820]: W0320 08:49:49.943417 27820 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 20 08:49:49.943906 master-0 kubenswrapper[27820]: W0320 08:49:49.943421 27820 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 20 08:49:49.943906 master-0 kubenswrapper[27820]: W0320 08:49:49.943425 27820 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 20 08:49:49.943906 master-0 kubenswrapper[27820]: W0320 08:49:49.943428 27820 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 20 08:49:49.943906 master-0 kubenswrapper[27820]: W0320 08:49:49.943432 27820 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 20 08:49:49.943906 master-0 kubenswrapper[27820]: W0320 08:49:49.943438 27820 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Mar 20 08:49:49.943906 master-0 kubenswrapper[27820]: W0320 08:49:49.943444 27820 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 20 08:49:49.943906 master-0 kubenswrapper[27820]: W0320 08:49:49.943448 27820 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 20 08:49:49.943906 master-0 kubenswrapper[27820]: W0320 08:49:49.943452 27820 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 20 08:49:49.943906 master-0 kubenswrapper[27820]: W0320 08:49:49.943455 27820 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 20 08:49:49.943906 master-0 kubenswrapper[27820]: W0320 08:49:49.943459 27820 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 20 08:49:49.943906 master-0 kubenswrapper[27820]: W0320 08:49:49.943462 27820 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 20 08:49:49.943906 master-0 kubenswrapper[27820]: W0320 08:49:49.943466 27820 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 20 08:49:49.944400 master-0 kubenswrapper[27820]: W0320 08:49:49.943471 27820 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Mar 20 08:49:49.944400 master-0 kubenswrapper[27820]: W0320 08:49:49.943476 27820 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 20 08:49:49.944400 master-0 kubenswrapper[27820]: W0320 08:49:49.943480 27820 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 20 08:49:49.944400 master-0 kubenswrapper[27820]: W0320 08:49:49.943484 27820 feature_gate.go:330] unrecognized feature gate: Example Mar 20 08:49:49.944400 master-0 kubenswrapper[27820]: W0320 08:49:49.943487 27820 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 20 08:49:49.944400 master-0 kubenswrapper[27820]: W0320 08:49:49.943522 27820 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 20 08:49:49.944400 master-0 kubenswrapper[27820]: W0320 08:49:49.943527 27820 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 20 08:49:49.944400 master-0 kubenswrapper[27820]: W0320 08:49:49.943530 27820 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 20 08:49:49.944400 master-0 kubenswrapper[27820]: W0320 08:49:49.943534 27820 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 20 08:49:49.944400 master-0 kubenswrapper[27820]: W0320 08:49:49.943538 27820 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 20 08:49:49.944400 master-0 kubenswrapper[27820]: W0320 08:49:49.943542 27820 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 20 08:49:49.944400 master-0 kubenswrapper[27820]: W0320 08:49:49.943545 27820 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 20 08:49:49.944400 master-0 kubenswrapper[27820]: W0320 08:49:49.943549 27820 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 20 08:49:49.944400 master-0 kubenswrapper[27820]: W0320 08:49:49.943553 27820 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 20 08:49:49.944400 master-0 
kubenswrapper[27820]: W0320 08:49:49.943556 27820 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 20 08:49:49.944400 master-0 kubenswrapper[27820]: W0320 08:49:49.943560 27820 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 20 08:49:49.944400 master-0 kubenswrapper[27820]: W0320 08:49:49.943564 27820 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 20 08:49:49.944400 master-0 kubenswrapper[27820]: W0320 08:49:49.943567 27820 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 20 08:49:49.944400 master-0 kubenswrapper[27820]: W0320 08:49:49.943571 27820 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 20 08:49:49.944400 master-0 kubenswrapper[27820]: W0320 08:49:49.943574 27820 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 20 08:49:49.945846 master-0 kubenswrapper[27820]: W0320 08:49:49.943578 27820 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 20 08:49:49.945846 master-0 kubenswrapper[27820]: W0320 08:49:49.943582 27820 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 20 08:49:49.945846 master-0 kubenswrapper[27820]: W0320 08:49:49.943586 27820 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 20 08:49:49.945846 master-0 kubenswrapper[27820]: W0320 08:49:49.943590 27820 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 20 08:49:49.945846 master-0 kubenswrapper[27820]: W0320 08:49:49.943594 27820 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 20 08:49:49.945846 master-0 kubenswrapper[27820]: W0320 08:49:49.943597 27820 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 20 08:49:49.945846 master-0 kubenswrapper[27820]: W0320 08:49:49.943601 27820 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 20 08:49:49.945846 
master-0 kubenswrapper[27820]: W0320 08:49:49.943605 27820 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Mar 20 08:49:49.945846 master-0 kubenswrapper[27820]: W0320 08:49:49.943608 27820 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 20 08:49:49.945846 master-0 kubenswrapper[27820]: I0320 08:49:49.943720 27820 flags.go:64] FLAG: --address="0.0.0.0" Mar 20 08:49:49.945846 master-0 kubenswrapper[27820]: I0320 08:49:49.943729 27820 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Mar 20 08:49:49.945846 master-0 kubenswrapper[27820]: I0320 08:49:49.943738 27820 flags.go:64] FLAG: --anonymous-auth="true" Mar 20 08:49:49.945846 master-0 kubenswrapper[27820]: I0320 08:49:49.943745 27820 flags.go:64] FLAG: --application-metrics-count-limit="100" Mar 20 08:49:49.945846 master-0 kubenswrapper[27820]: I0320 08:49:49.943752 27820 flags.go:64] FLAG: --authentication-token-webhook="false" Mar 20 08:49:49.945846 master-0 kubenswrapper[27820]: I0320 08:49:49.943757 27820 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Mar 20 08:49:49.945846 master-0 kubenswrapper[27820]: I0320 08:49:49.943762 27820 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Mar 20 08:49:49.945846 master-0 kubenswrapper[27820]: I0320 08:49:49.943769 27820 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Mar 20 08:49:49.945846 master-0 kubenswrapper[27820]: I0320 08:49:49.943774 27820 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Mar 20 08:49:49.945846 master-0 kubenswrapper[27820]: I0320 08:49:49.943779 27820 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Mar 20 08:49:49.945846 master-0 kubenswrapper[27820]: I0320 08:49:49.943783 27820 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Mar 20 08:49:49.945846 master-0 kubenswrapper[27820]: I0320 08:49:49.943788 27820 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Mar 20 
08:49:49.947218 master-0 kubenswrapper[27820]: I0320 08:49:49.943792 27820 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Mar 20 08:49:49.947218 master-0 kubenswrapper[27820]: I0320 08:49:49.943797 27820 flags.go:64] FLAG: --cgroup-root="" Mar 20 08:49:49.947218 master-0 kubenswrapper[27820]: I0320 08:49:49.943800 27820 flags.go:64] FLAG: --cgroups-per-qos="true" Mar 20 08:49:49.947218 master-0 kubenswrapper[27820]: I0320 08:49:49.943805 27820 flags.go:64] FLAG: --client-ca-file="" Mar 20 08:49:49.947218 master-0 kubenswrapper[27820]: I0320 08:49:49.943809 27820 flags.go:64] FLAG: --cloud-config="" Mar 20 08:49:49.947218 master-0 kubenswrapper[27820]: I0320 08:49:49.943813 27820 flags.go:64] FLAG: --cloud-provider="" Mar 20 08:49:49.947218 master-0 kubenswrapper[27820]: I0320 08:49:49.943817 27820 flags.go:64] FLAG: --cluster-dns="[]" Mar 20 08:49:49.947218 master-0 kubenswrapper[27820]: I0320 08:49:49.943822 27820 flags.go:64] FLAG: --cluster-domain="" Mar 20 08:49:49.947218 master-0 kubenswrapper[27820]: I0320 08:49:49.943826 27820 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Mar 20 08:49:49.947218 master-0 kubenswrapper[27820]: I0320 08:49:49.943830 27820 flags.go:64] FLAG: --config-dir="" Mar 20 08:49:49.947218 master-0 kubenswrapper[27820]: I0320 08:49:49.943834 27820 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Mar 20 08:49:49.947218 master-0 kubenswrapper[27820]: I0320 08:49:49.943839 27820 flags.go:64] FLAG: --container-log-max-files="5" Mar 20 08:49:49.947218 master-0 kubenswrapper[27820]: I0320 08:49:49.943844 27820 flags.go:64] FLAG: --container-log-max-size="10Mi" Mar 20 08:49:49.947218 master-0 kubenswrapper[27820]: I0320 08:49:49.943849 27820 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Mar 20 08:49:49.947218 master-0 kubenswrapper[27820]: I0320 08:49:49.943853 27820 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Mar 20 08:49:49.947218 master-0 
kubenswrapper[27820]: I0320 08:49:49.943858 27820 flags.go:64] FLAG: --containerd-namespace="k8s.io" Mar 20 08:49:49.947218 master-0 kubenswrapper[27820]: I0320 08:49:49.943862 27820 flags.go:64] FLAG: --contention-profiling="false" Mar 20 08:49:49.947218 master-0 kubenswrapper[27820]: I0320 08:49:49.943866 27820 flags.go:64] FLAG: --cpu-cfs-quota="true" Mar 20 08:49:49.947218 master-0 kubenswrapper[27820]: I0320 08:49:49.943871 27820 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Mar 20 08:49:49.947218 master-0 kubenswrapper[27820]: I0320 08:49:49.943875 27820 flags.go:64] FLAG: --cpu-manager-policy="none" Mar 20 08:49:49.947218 master-0 kubenswrapper[27820]: I0320 08:49:49.943889 27820 flags.go:64] FLAG: --cpu-manager-policy-options="" Mar 20 08:49:49.947218 master-0 kubenswrapper[27820]: I0320 08:49:49.943895 27820 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Mar 20 08:49:49.947218 master-0 kubenswrapper[27820]: I0320 08:49:49.943900 27820 flags.go:64] FLAG: --enable-controller-attach-detach="true" Mar 20 08:49:49.947218 master-0 kubenswrapper[27820]: I0320 08:49:49.943904 27820 flags.go:64] FLAG: --enable-debugging-handlers="true" Mar 20 08:49:49.947218 master-0 kubenswrapper[27820]: I0320 08:49:49.943909 27820 flags.go:64] FLAG: --enable-load-reader="false" Mar 20 08:49:49.947986 master-0 kubenswrapper[27820]: I0320 08:49:49.943913 27820 flags.go:64] FLAG: --enable-server="true" Mar 20 08:49:49.947986 master-0 kubenswrapper[27820]: I0320 08:49:49.943918 27820 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Mar 20 08:49:49.947986 master-0 kubenswrapper[27820]: I0320 08:49:49.943925 27820 flags.go:64] FLAG: --event-burst="100" Mar 20 08:49:49.947986 master-0 kubenswrapper[27820]: I0320 08:49:49.943930 27820 flags.go:64] FLAG: --event-qps="50" Mar 20 08:49:49.947986 master-0 kubenswrapper[27820]: I0320 08:49:49.943934 27820 flags.go:64] FLAG: --event-storage-age-limit="default=0" Mar 20 08:49:49.947986 master-0 kubenswrapper[27820]: I0320 
08:49:49.943940 27820 flags.go:64] FLAG: --event-storage-event-limit="default=0" Mar 20 08:49:49.947986 master-0 kubenswrapper[27820]: I0320 08:49:49.943944 27820 flags.go:64] FLAG: --eviction-hard="" Mar 20 08:49:49.947986 master-0 kubenswrapper[27820]: I0320 08:49:49.943950 27820 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Mar 20 08:49:49.947986 master-0 kubenswrapper[27820]: I0320 08:49:49.943954 27820 flags.go:64] FLAG: --eviction-minimum-reclaim="" Mar 20 08:49:49.947986 master-0 kubenswrapper[27820]: I0320 08:49:49.943958 27820 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Mar 20 08:49:49.947986 master-0 kubenswrapper[27820]: I0320 08:49:49.943963 27820 flags.go:64] FLAG: --eviction-soft="" Mar 20 08:49:49.947986 master-0 kubenswrapper[27820]: I0320 08:49:49.943967 27820 flags.go:64] FLAG: --eviction-soft-grace-period="" Mar 20 08:49:49.947986 master-0 kubenswrapper[27820]: I0320 08:49:49.943971 27820 flags.go:64] FLAG: --exit-on-lock-contention="false" Mar 20 08:49:49.947986 master-0 kubenswrapper[27820]: I0320 08:49:49.943975 27820 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Mar 20 08:49:49.947986 master-0 kubenswrapper[27820]: I0320 08:49:49.943979 27820 flags.go:64] FLAG: --experimental-mounter-path="" Mar 20 08:49:49.947986 master-0 kubenswrapper[27820]: I0320 08:49:49.943983 27820 flags.go:64] FLAG: --fail-cgroupv1="false" Mar 20 08:49:49.947986 master-0 kubenswrapper[27820]: I0320 08:49:49.943987 27820 flags.go:64] FLAG: --fail-swap-on="true" Mar 20 08:49:49.947986 master-0 kubenswrapper[27820]: I0320 08:49:49.943991 27820 flags.go:64] FLAG: --feature-gates="" Mar 20 08:49:49.947986 master-0 kubenswrapper[27820]: I0320 08:49:49.943999 27820 flags.go:64] FLAG: --file-check-frequency="20s" Mar 20 08:49:49.947986 master-0 kubenswrapper[27820]: I0320 08:49:49.944003 27820 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Mar 20 08:49:49.947986 master-0 kubenswrapper[27820]: I0320 08:49:49.944008 
27820 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Mar 20 08:49:49.947986 master-0 kubenswrapper[27820]: I0320 08:49:49.944012 27820 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Mar 20 08:49:49.947986 master-0 kubenswrapper[27820]: I0320 08:49:49.944016 27820 flags.go:64] FLAG: --healthz-port="10248" Mar 20 08:49:49.947986 master-0 kubenswrapper[27820]: I0320 08:49:49.944021 27820 flags.go:64] FLAG: --help="false" Mar 20 08:49:49.947986 master-0 kubenswrapper[27820]: I0320 08:49:49.944025 27820 flags.go:64] FLAG: --hostname-override="" Mar 20 08:49:49.947986 master-0 kubenswrapper[27820]: I0320 08:49:49.944029 27820 flags.go:64] FLAG: --housekeeping-interval="10s" Mar 20 08:49:49.948640 master-0 kubenswrapper[27820]: I0320 08:49:49.944033 27820 flags.go:64] FLAG: --http-check-frequency="20s" Mar 20 08:49:49.948640 master-0 kubenswrapper[27820]: I0320 08:49:49.944038 27820 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Mar 20 08:49:49.948640 master-0 kubenswrapper[27820]: I0320 08:49:49.944042 27820 flags.go:64] FLAG: --image-credential-provider-config="" Mar 20 08:49:49.948640 master-0 kubenswrapper[27820]: I0320 08:49:49.944047 27820 flags.go:64] FLAG: --image-gc-high-threshold="85" Mar 20 08:49:49.948640 master-0 kubenswrapper[27820]: I0320 08:49:49.944051 27820 flags.go:64] FLAG: --image-gc-low-threshold="80" Mar 20 08:49:49.948640 master-0 kubenswrapper[27820]: I0320 08:49:49.944055 27820 flags.go:64] FLAG: --image-service-endpoint="" Mar 20 08:49:49.948640 master-0 kubenswrapper[27820]: I0320 08:49:49.944059 27820 flags.go:64] FLAG: --kernel-memcg-notification="false" Mar 20 08:49:49.948640 master-0 kubenswrapper[27820]: I0320 08:49:49.944063 27820 flags.go:64] FLAG: --kube-api-burst="100" Mar 20 08:49:49.948640 master-0 kubenswrapper[27820]: I0320 08:49:49.944067 27820 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Mar 20 08:49:49.948640 master-0 kubenswrapper[27820]: I0320 08:49:49.944072 27820 
flags.go:64] FLAG: --kube-api-qps="50" Mar 20 08:49:49.948640 master-0 kubenswrapper[27820]: I0320 08:49:49.944081 27820 flags.go:64] FLAG: --kube-reserved="" Mar 20 08:49:49.948640 master-0 kubenswrapper[27820]: I0320 08:49:49.944086 27820 flags.go:64] FLAG: --kube-reserved-cgroup="" Mar 20 08:49:49.948640 master-0 kubenswrapper[27820]: I0320 08:49:49.944090 27820 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Mar 20 08:49:49.948640 master-0 kubenswrapper[27820]: I0320 08:49:49.944094 27820 flags.go:64] FLAG: --kubelet-cgroups="" Mar 20 08:49:49.948640 master-0 kubenswrapper[27820]: I0320 08:49:49.944098 27820 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Mar 20 08:49:49.948640 master-0 kubenswrapper[27820]: I0320 08:49:49.944102 27820 flags.go:64] FLAG: --lock-file="" Mar 20 08:49:49.948640 master-0 kubenswrapper[27820]: I0320 08:49:49.944106 27820 flags.go:64] FLAG: --log-cadvisor-usage="false" Mar 20 08:49:49.948640 master-0 kubenswrapper[27820]: I0320 08:49:49.944110 27820 flags.go:64] FLAG: --log-flush-frequency="5s" Mar 20 08:49:49.948640 master-0 kubenswrapper[27820]: I0320 08:49:49.944115 27820 flags.go:64] FLAG: --log-json-info-buffer-size="0" Mar 20 08:49:49.948640 master-0 kubenswrapper[27820]: I0320 08:49:49.944121 27820 flags.go:64] FLAG: --log-json-split-stream="false" Mar 20 08:49:49.948640 master-0 kubenswrapper[27820]: I0320 08:49:49.944125 27820 flags.go:64] FLAG: --log-text-info-buffer-size="0" Mar 20 08:49:49.948640 master-0 kubenswrapper[27820]: I0320 08:49:49.944130 27820 flags.go:64] FLAG: --log-text-split-stream="false" Mar 20 08:49:49.948640 master-0 kubenswrapper[27820]: I0320 08:49:49.944134 27820 flags.go:64] FLAG: --logging-format="text" Mar 20 08:49:49.948640 master-0 kubenswrapper[27820]: I0320 08:49:49.944139 27820 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Mar 20 08:49:49.948640 master-0 kubenswrapper[27820]: I0320 08:49:49.944143 27820 flags.go:64] FLAG: 
--make-iptables-util-chains="true" Mar 20 08:49:49.950001 master-0 kubenswrapper[27820]: I0320 08:49:49.944147 27820 flags.go:64] FLAG: --manifest-url="" Mar 20 08:49:49.950001 master-0 kubenswrapper[27820]: I0320 08:49:49.944152 27820 flags.go:64] FLAG: --manifest-url-header="" Mar 20 08:49:49.950001 master-0 kubenswrapper[27820]: I0320 08:49:49.944158 27820 flags.go:64] FLAG: --max-housekeeping-interval="15s" Mar 20 08:49:49.950001 master-0 kubenswrapper[27820]: I0320 08:49:49.944162 27820 flags.go:64] FLAG: --max-open-files="1000000" Mar 20 08:49:49.950001 master-0 kubenswrapper[27820]: I0320 08:49:49.944167 27820 flags.go:64] FLAG: --max-pods="110" Mar 20 08:49:49.950001 master-0 kubenswrapper[27820]: I0320 08:49:49.944171 27820 flags.go:64] FLAG: --maximum-dead-containers="-1" Mar 20 08:49:49.950001 master-0 kubenswrapper[27820]: I0320 08:49:49.944176 27820 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Mar 20 08:49:49.950001 master-0 kubenswrapper[27820]: I0320 08:49:49.944180 27820 flags.go:64] FLAG: --memory-manager-policy="None" Mar 20 08:49:49.950001 master-0 kubenswrapper[27820]: I0320 08:49:49.944184 27820 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Mar 20 08:49:49.950001 master-0 kubenswrapper[27820]: I0320 08:49:49.944189 27820 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Mar 20 08:49:49.950001 master-0 kubenswrapper[27820]: I0320 08:49:49.944193 27820 flags.go:64] FLAG: --node-ip="192.168.32.10" Mar 20 08:49:49.950001 master-0 kubenswrapper[27820]: I0320 08:49:49.944197 27820 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Mar 20 08:49:49.950001 master-0 kubenswrapper[27820]: I0320 08:49:49.944207 27820 flags.go:64] FLAG: --node-status-max-images="50" Mar 20 08:49:49.950001 master-0 kubenswrapper[27820]: I0320 08:49:49.944211 27820 flags.go:64] FLAG: --node-status-update-frequency="10s" Mar 20 08:49:49.950001 master-0 
kubenswrapper[27820]: I0320 08:49:49.944216 27820 flags.go:64] FLAG: --oom-score-adj="-999" Mar 20 08:49:49.950001 master-0 kubenswrapper[27820]: I0320 08:49:49.944220 27820 flags.go:64] FLAG: --pod-cidr="" Mar 20 08:49:49.950001 master-0 kubenswrapper[27820]: I0320 08:49:49.944224 27820 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:53d66d524ca3e787d8dbe30dbc4d9b8612c9cebd505ccb4375a8441814e85422" Mar 20 08:49:49.950001 master-0 kubenswrapper[27820]: I0320 08:49:49.944231 27820 flags.go:64] FLAG: --pod-manifest-path="" Mar 20 08:49:49.950001 master-0 kubenswrapper[27820]: I0320 08:49:49.944236 27820 flags.go:64] FLAG: --pod-max-pids="-1" Mar 20 08:49:49.950001 master-0 kubenswrapper[27820]: I0320 08:49:49.944241 27820 flags.go:64] FLAG: --pods-per-core="0" Mar 20 08:49:49.950001 master-0 kubenswrapper[27820]: I0320 08:49:49.944245 27820 flags.go:64] FLAG: --port="10250" Mar 20 08:49:49.950001 master-0 kubenswrapper[27820]: I0320 08:49:49.944249 27820 flags.go:64] FLAG: --protect-kernel-defaults="false" Mar 20 08:49:49.950001 master-0 kubenswrapper[27820]: I0320 08:49:49.944254 27820 flags.go:64] FLAG: --provider-id="" Mar 20 08:49:49.950001 master-0 kubenswrapper[27820]: I0320 08:49:49.944258 27820 flags.go:64] FLAG: --qos-reserved="" Mar 20 08:49:49.953304 master-0 kubenswrapper[27820]: I0320 08:49:49.944265 27820 flags.go:64] FLAG: --read-only-port="10255" Mar 20 08:49:49.953304 master-0 kubenswrapper[27820]: I0320 08:49:49.944269 27820 flags.go:64] FLAG: --register-node="true" Mar 20 08:49:49.953304 master-0 kubenswrapper[27820]: I0320 08:49:49.944283 27820 flags.go:64] FLAG: --register-schedulable="true" Mar 20 08:49:49.953304 master-0 kubenswrapper[27820]: I0320 08:49:49.944289 27820 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Mar 20 08:49:49.953304 master-0 kubenswrapper[27820]: I0320 08:49:49.944297 27820 flags.go:64] FLAG: --registry-burst="10" Mar 20 
08:49:49.953304 master-0 kubenswrapper[27820]: I0320 08:49:49.944301 27820 flags.go:64] FLAG: --registry-qps="5" Mar 20 08:49:49.953304 master-0 kubenswrapper[27820]: I0320 08:49:49.944305 27820 flags.go:64] FLAG: --reserved-cpus="" Mar 20 08:49:49.953304 master-0 kubenswrapper[27820]: I0320 08:49:49.944310 27820 flags.go:64] FLAG: --reserved-memory="" Mar 20 08:49:49.953304 master-0 kubenswrapper[27820]: I0320 08:49:49.944315 27820 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Mar 20 08:49:49.953304 master-0 kubenswrapper[27820]: I0320 08:49:49.944319 27820 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Mar 20 08:49:49.953304 master-0 kubenswrapper[27820]: I0320 08:49:49.944323 27820 flags.go:64] FLAG: --rotate-certificates="false" Mar 20 08:49:49.953304 master-0 kubenswrapper[27820]: I0320 08:49:49.944328 27820 flags.go:64] FLAG: --rotate-server-certificates="false" Mar 20 08:49:49.953304 master-0 kubenswrapper[27820]: I0320 08:49:49.944332 27820 flags.go:64] FLAG: --runonce="false" Mar 20 08:49:49.953304 master-0 kubenswrapper[27820]: I0320 08:49:49.944336 27820 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Mar 20 08:49:49.953304 master-0 kubenswrapper[27820]: I0320 08:49:49.944340 27820 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Mar 20 08:49:49.953304 master-0 kubenswrapper[27820]: I0320 08:49:49.944344 27820 flags.go:64] FLAG: --seccomp-default="false" Mar 20 08:49:49.953304 master-0 kubenswrapper[27820]: I0320 08:49:49.944349 27820 flags.go:64] FLAG: --serialize-image-pulls="true" Mar 20 08:49:49.953304 master-0 kubenswrapper[27820]: I0320 08:49:49.944353 27820 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Mar 20 08:49:49.953304 master-0 kubenswrapper[27820]: I0320 08:49:49.944357 27820 flags.go:64] FLAG: --storage-driver-db="cadvisor" Mar 20 08:49:49.953304 master-0 kubenswrapper[27820]: I0320 08:49:49.944361 27820 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Mar 20 08:49:49.953304 master-0 
kubenswrapper[27820]: I0320 08:49:49.944365 27820 flags.go:64] FLAG: --storage-driver-password="root" Mar 20 08:49:49.953304 master-0 kubenswrapper[27820]: I0320 08:49:49.944369 27820 flags.go:64] FLAG: --storage-driver-secure="false" Mar 20 08:49:49.953304 master-0 kubenswrapper[27820]: I0320 08:49:49.944373 27820 flags.go:64] FLAG: --storage-driver-table="stats" Mar 20 08:49:49.953304 master-0 kubenswrapper[27820]: I0320 08:49:49.944378 27820 flags.go:64] FLAG: --storage-driver-user="root" Mar 20 08:49:49.953304 master-0 kubenswrapper[27820]: I0320 08:49:49.944383 27820 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Mar 20 08:49:49.954155 master-0 kubenswrapper[27820]: I0320 08:49:49.944387 27820 flags.go:64] FLAG: --sync-frequency="1m0s" Mar 20 08:49:49.954155 master-0 kubenswrapper[27820]: I0320 08:49:49.944394 27820 flags.go:64] FLAG: --system-cgroups="" Mar 20 08:49:49.954155 master-0 kubenswrapper[27820]: I0320 08:49:49.944398 27820 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi" Mar 20 08:49:49.954155 master-0 kubenswrapper[27820]: I0320 08:49:49.944405 27820 flags.go:64] FLAG: --system-reserved-cgroup="" Mar 20 08:49:49.954155 master-0 kubenswrapper[27820]: I0320 08:49:49.944409 27820 flags.go:64] FLAG: --tls-cert-file="" Mar 20 08:49:49.954155 master-0 kubenswrapper[27820]: I0320 08:49:49.944414 27820 flags.go:64] FLAG: --tls-cipher-suites="[]" Mar 20 08:49:49.954155 master-0 kubenswrapper[27820]: I0320 08:49:49.944420 27820 flags.go:64] FLAG: --tls-min-version="" Mar 20 08:49:49.954155 master-0 kubenswrapper[27820]: I0320 08:49:49.944424 27820 flags.go:64] FLAG: --tls-private-key-file="" Mar 20 08:49:49.954155 master-0 kubenswrapper[27820]: I0320 08:49:49.944428 27820 flags.go:64] FLAG: --topology-manager-policy="none" Mar 20 08:49:49.954155 master-0 kubenswrapper[27820]: I0320 08:49:49.944433 27820 flags.go:64] FLAG: --topology-manager-policy-options="" Mar 20 08:49:49.954155 master-0 
kubenswrapper[27820]: I0320 08:49:49.944437 27820 flags.go:64] FLAG: --topology-manager-scope="container" Mar 20 08:49:49.954155 master-0 kubenswrapper[27820]: I0320 08:49:49.944441 27820 flags.go:64] FLAG: --v="2" Mar 20 08:49:49.954155 master-0 kubenswrapper[27820]: I0320 08:49:49.944447 27820 flags.go:64] FLAG: --version="false" Mar 20 08:49:49.954155 master-0 kubenswrapper[27820]: I0320 08:49:49.944453 27820 flags.go:64] FLAG: --vmodule="" Mar 20 08:49:49.954155 master-0 kubenswrapper[27820]: I0320 08:49:49.944459 27820 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Mar 20 08:49:49.954155 master-0 kubenswrapper[27820]: I0320 08:49:49.944463 27820 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Mar 20 08:49:49.954155 master-0 kubenswrapper[27820]: W0320 08:49:49.944913 27820 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 20 08:49:49.954155 master-0 kubenswrapper[27820]: W0320 08:49:49.944929 27820 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 20 08:49:49.954155 master-0 kubenswrapper[27820]: W0320 08:49:49.944935 27820 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 20 08:49:49.954155 master-0 kubenswrapper[27820]: W0320 08:49:49.944941 27820 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 20 08:49:49.954155 master-0 kubenswrapper[27820]: W0320 08:49:49.944946 27820 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 20 08:49:49.954155 master-0 kubenswrapper[27820]: W0320 08:49:49.944957 27820 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 20 08:49:49.954155 master-0 kubenswrapper[27820]: W0320 08:49:49.944962 27820 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 20 08:49:49.954155 master-0 kubenswrapper[27820]: W0320 08:49:49.944967 27820 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 20 08:49:49.955145 master-0 
kubenswrapper[27820]: W0320 08:49:49.944972 27820 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 20 08:49:49.955145 master-0 kubenswrapper[27820]: W0320 08:49:49.944977 27820 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 20 08:49:49.955145 master-0 kubenswrapper[27820]: W0320 08:49:49.944982 27820 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 20 08:49:49.955145 master-0 kubenswrapper[27820]: W0320 08:49:49.944987 27820 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 20 08:49:49.955145 master-0 kubenswrapper[27820]: W0320 08:49:49.944992 27820 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 20 08:49:49.955145 master-0 kubenswrapper[27820]: W0320 08:49:49.944997 27820 feature_gate.go:330] unrecognized feature gate: Example Mar 20 08:49:49.955145 master-0 kubenswrapper[27820]: W0320 08:49:49.945002 27820 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 20 08:49:49.955145 master-0 kubenswrapper[27820]: W0320 08:49:49.945006 27820 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 20 08:49:49.955145 master-0 kubenswrapper[27820]: W0320 08:49:49.945012 27820 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 20 08:49:49.955145 master-0 kubenswrapper[27820]: W0320 08:49:49.945022 27820 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 20 08:49:49.955145 master-0 kubenswrapper[27820]: W0320 08:49:49.945065 27820 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 20 08:49:49.955145 master-0 kubenswrapper[27820]: W0320 08:49:49.945071 27820 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 20 08:49:49.955145 master-0 kubenswrapper[27820]: W0320 08:49:49.945077 27820 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 20 08:49:49.955145 master-0 kubenswrapper[27820]: W0320 08:49:49.945083 
27820 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Mar 20 08:49:49.955145 master-0 kubenswrapper[27820]: W0320 08:49:49.945090 27820 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 20 08:49:49.955145 master-0 kubenswrapper[27820]: W0320 08:49:49.945094 27820 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 20 08:49:49.955145 master-0 kubenswrapper[27820]: W0320 08:49:49.945099 27820 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 20 08:49:49.955145 master-0 kubenswrapper[27820]: W0320 08:49:49.945104 27820 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 20 08:49:49.955145 master-0 kubenswrapper[27820]: W0320 08:49:49.945108 27820 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 20 08:49:49.955145 master-0 kubenswrapper[27820]: W0320 08:49:49.945113 27820 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 20 08:49:49.955815 master-0 kubenswrapper[27820]: W0320 08:49:49.945117 27820 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 20 08:49:49.955815 master-0 kubenswrapper[27820]: W0320 08:49:49.945122 27820 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 20 08:49:49.955815 master-0 kubenswrapper[27820]: W0320 08:49:49.945130 27820 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 20 08:49:49.955815 master-0 kubenswrapper[27820]: W0320 08:49:49.945134 27820 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 20 08:49:49.955815 master-0 kubenswrapper[27820]: W0320 08:49:49.945139 27820 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 20 08:49:49.955815 master-0 kubenswrapper[27820]: W0320 08:49:49.945143 27820 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 20 08:49:49.955815 master-0 
kubenswrapper[27820]: W0320 08:49:49.945148 27820 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 20 08:49:49.955815 master-0 kubenswrapper[27820]: W0320 08:49:49.945152 27820 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 20 08:49:49.955815 master-0 kubenswrapper[27820]: W0320 08:49:49.945157 27820 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 20 08:49:49.955815 master-0 kubenswrapper[27820]: W0320 08:49:49.945162 27820 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 20 08:49:49.955815 master-0 kubenswrapper[27820]: W0320 08:49:49.945166 27820 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 20 08:49:49.955815 master-0 kubenswrapper[27820]: W0320 08:49:49.945172 27820 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Mar 20 08:49:49.955815 master-0 kubenswrapper[27820]: W0320 08:49:49.945177 27820 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 20 08:49:49.955815 master-0 kubenswrapper[27820]: W0320 08:49:49.945183 27820 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Mar 20 08:49:49.955815 master-0 kubenswrapper[27820]: W0320 08:49:49.945193 27820 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Mar 20 08:49:49.955815 master-0 kubenswrapper[27820]: W0320 08:49:49.945198 27820 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 20 08:49:49.955815 master-0 kubenswrapper[27820]: W0320 08:49:49.945204 27820 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 20 08:49:49.955815 master-0 kubenswrapper[27820]: W0320 08:49:49.945209 27820 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 20 08:49:49.956531 master-0 kubenswrapper[27820]: W0320 08:49:49.945214 27820 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 20 08:49:49.956531 master-0 kubenswrapper[27820]: W0320 08:49:49.945218 27820 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 20 08:49:49.956531 master-0 kubenswrapper[27820]: W0320 08:49:49.945223 27820 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 20 08:49:49.956531 master-0 kubenswrapper[27820]: W0320 08:49:49.945227 27820 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 20 08:49:49.956531 master-0 kubenswrapper[27820]: W0320 08:49:49.945232 27820 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 20 08:49:49.956531 master-0 kubenswrapper[27820]: W0320 08:49:49.945238 27820 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Mar 20 08:49:49.956531 master-0 kubenswrapper[27820]: W0320 08:49:49.945243 27820 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 20 08:49:49.956531 master-0 kubenswrapper[27820]: W0320 08:49:49.945247 27820 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 20 08:49:49.956531 master-0 kubenswrapper[27820]: W0320 08:49:49.945252 27820 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 20 08:49:49.956531 master-0 kubenswrapper[27820]: W0320 08:49:49.945264 27820 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 20 08:49:49.956531 master-0 kubenswrapper[27820]: W0320 08:49:49.945269 27820 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 20 08:49:49.956531 master-0 kubenswrapper[27820]: W0320 08:49:49.945289 27820 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 20 08:49:49.956531 master-0 kubenswrapper[27820]: W0320 08:49:49.945295 27820 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 20 08:49:49.956531 master-0 kubenswrapper[27820]: W0320 08:49:49.945299 27820 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 20 08:49:49.956531 master-0 kubenswrapper[27820]: W0320 08:49:49.945304 27820 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 20 08:49:49.956531 master-0 kubenswrapper[27820]: W0320 08:49:49.945308 27820 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 20 08:49:49.956531 master-0 kubenswrapper[27820]: W0320 08:49:49.945313 27820 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 20 08:49:49.956531 master-0 kubenswrapper[27820]: W0320 08:49:49.945318 27820 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 20 08:49:49.956531 master-0 kubenswrapper[27820]: W0320 08:49:49.945323 27820 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 20 08:49:49.956531 master-0 
kubenswrapper[27820]: W0320 08:49:49.945327 27820 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 20 08:49:49.957513 master-0 kubenswrapper[27820]: W0320 08:49:49.945332 27820 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 20 08:49:49.957513 master-0 kubenswrapper[27820]: W0320 08:49:49.945340 27820 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 20 08:49:49.957513 master-0 kubenswrapper[27820]: W0320 08:49:49.945345 27820 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 20 08:49:49.957513 master-0 kubenswrapper[27820]: W0320 08:49:49.945350 27820 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 20 08:49:49.957513 master-0 kubenswrapper[27820]: W0320 08:49:49.945355 27820 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 20 08:49:49.957513 master-0 kubenswrapper[27820]: W0320 08:49:49.945359 27820 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 20 08:49:49.957513 master-0 kubenswrapper[27820]: I0320 08:49:49.945372 27820 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 20 08:49:49.957513 master-0 kubenswrapper[27820]: I0320 08:49:49.950577 27820 server.go:491] "Kubelet version" kubeletVersion="v1.31.14" Mar 20 08:49:49.957513 master-0 kubenswrapper[27820]: I0320 08:49:49.950608 27820 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 20 08:49:49.957513 master-0 
kubenswrapper[27820]: W0320 08:49:49.950685 27820 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 20 08:49:49.957513 master-0 kubenswrapper[27820]: W0320 08:49:49.950693 27820 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 20 08:49:49.957513 master-0 kubenswrapper[27820]: W0320 08:49:49.950698 27820 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 20 08:49:49.957513 master-0 kubenswrapper[27820]: W0320 08:49:49.950703 27820 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 20 08:49:49.957513 master-0 kubenswrapper[27820]: W0320 08:49:49.950707 27820 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 20 08:49:49.957513 master-0 kubenswrapper[27820]: W0320 08:49:49.950712 27820 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 20 08:49:49.958125 master-0 kubenswrapper[27820]: W0320 08:49:49.950717 27820 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 20 08:49:49.958125 master-0 kubenswrapper[27820]: W0320 08:49:49.950721 27820 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 20 08:49:49.958125 master-0 kubenswrapper[27820]: W0320 08:49:49.950725 27820 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 20 08:49:49.958125 master-0 kubenswrapper[27820]: W0320 08:49:49.950730 27820 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 20 08:49:49.958125 master-0 kubenswrapper[27820]: W0320 08:49:49.950734 27820 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 20 08:49:49.958125 master-0 kubenswrapper[27820]: W0320 08:49:49.950739 27820 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 20 08:49:49.958125 master-0 kubenswrapper[27820]: W0320 08:49:49.950743 27820 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 20 08:49:49.958125 master-0 kubenswrapper[27820]: W0320 08:49:49.950748 27820 feature_gate.go:330] 
unrecognized feature gate: InsightsOnDemandDataGather Mar 20 08:49:49.958125 master-0 kubenswrapper[27820]: W0320 08:49:49.950752 27820 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 20 08:49:49.958125 master-0 kubenswrapper[27820]: W0320 08:49:49.950757 27820 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 20 08:49:49.958125 master-0 kubenswrapper[27820]: W0320 08:49:49.950762 27820 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 20 08:49:49.958125 master-0 kubenswrapper[27820]: W0320 08:49:49.950766 27820 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 20 08:49:49.958125 master-0 kubenswrapper[27820]: W0320 08:49:49.950770 27820 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 20 08:49:49.958125 master-0 kubenswrapper[27820]: W0320 08:49:49.950774 27820 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 20 08:49:49.958125 master-0 kubenswrapper[27820]: W0320 08:49:49.950778 27820 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 20 08:49:49.958125 master-0 kubenswrapper[27820]: W0320 08:49:49.950783 27820 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 20 08:49:49.958125 master-0 kubenswrapper[27820]: W0320 08:49:49.950787 27820 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 20 08:49:49.958125 master-0 kubenswrapper[27820]: W0320 08:49:49.950792 27820 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 20 08:49:49.958125 master-0 kubenswrapper[27820]: W0320 08:49:49.950796 27820 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 20 08:49:49.958125 master-0 kubenswrapper[27820]: W0320 08:49:49.950801 27820 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 20 08:49:49.959119 master-0 kubenswrapper[27820]: W0320 
08:49:49.950806 27820 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 20 08:49:49.959119 master-0 kubenswrapper[27820]: W0320 08:49:49.950810 27820 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 20 08:49:49.959119 master-0 kubenswrapper[27820]: W0320 08:49:49.950814 27820 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 20 08:49:49.959119 master-0 kubenswrapper[27820]: W0320 08:49:49.950819 27820 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 20 08:49:49.959119 master-0 kubenswrapper[27820]: W0320 08:49:49.950824 27820 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 20 08:49:49.959119 master-0 kubenswrapper[27820]: W0320 08:49:49.950830 27820 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 20 08:49:49.959119 master-0 kubenswrapper[27820]: W0320 08:49:49.950836 27820 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 20 08:49:49.959119 master-0 kubenswrapper[27820]: W0320 08:49:49.950841 27820 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 20 08:49:49.959119 master-0 kubenswrapper[27820]: W0320 08:49:49.950845 27820 feature_gate.go:330] unrecognized feature gate: Example Mar 20 08:49:49.959119 master-0 kubenswrapper[27820]: W0320 08:49:49.950850 27820 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 20 08:49:49.959119 master-0 kubenswrapper[27820]: W0320 08:49:49.950855 27820 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 20 08:49:49.959119 master-0 kubenswrapper[27820]: W0320 08:49:49.950859 27820 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 20 08:49:49.959119 master-0 kubenswrapper[27820]: W0320 08:49:49.950863 27820 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 20 08:49:49.959119 master-0 kubenswrapper[27820]: W0320 08:49:49.950866 27820 feature_gate.go:330] unrecognized feature 
gate: MultiArchInstallAzure Mar 20 08:49:49.959119 master-0 kubenswrapper[27820]: W0320 08:49:49.950870 27820 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 20 08:49:49.959119 master-0 kubenswrapper[27820]: W0320 08:49:49.950873 27820 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 20 08:49:49.959119 master-0 kubenswrapper[27820]: W0320 08:49:49.950877 27820 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 20 08:49:49.959119 master-0 kubenswrapper[27820]: W0320 08:49:49.950881 27820 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 20 08:49:49.959119 master-0 kubenswrapper[27820]: W0320 08:49:49.950885 27820 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 20 08:49:49.959119 master-0 kubenswrapper[27820]: W0320 08:49:49.950889 27820 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 20 08:49:49.959943 master-0 kubenswrapper[27820]: W0320 08:49:49.950892 27820 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 20 08:49:49.959943 master-0 kubenswrapper[27820]: W0320 08:49:49.950896 27820 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 20 08:49:49.959943 master-0 kubenswrapper[27820]: W0320 08:49:49.950901 27820 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Mar 20 08:49:49.959943 master-0 kubenswrapper[27820]: W0320 08:49:49.950906 27820 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 20 08:49:49.959943 master-0 kubenswrapper[27820]: W0320 08:49:49.950910 27820 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 20 08:49:49.959943 master-0 kubenswrapper[27820]: W0320 08:49:49.950915 27820 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Mar 20 08:49:49.959943 master-0 kubenswrapper[27820]: W0320 08:49:49.950921 27820 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 20 08:49:49.959943 master-0 kubenswrapper[27820]: W0320 08:49:49.950926 27820 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 20 08:49:49.959943 master-0 kubenswrapper[27820]: W0320 08:49:49.950930 27820 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 20 08:49:49.959943 master-0 kubenswrapper[27820]: W0320 08:49:49.950934 27820 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 20 08:49:49.959943 master-0 kubenswrapper[27820]: W0320 08:49:49.950937 27820 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 20 08:49:49.959943 master-0 kubenswrapper[27820]: W0320 08:49:49.950941 27820 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 20 08:49:49.959943 master-0 kubenswrapper[27820]: W0320 08:49:49.950946 27820 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 20 08:49:49.959943 master-0 kubenswrapper[27820]: W0320 08:49:49.950951 27820 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 20 08:49:49.959943 master-0 kubenswrapper[27820]: W0320 08:49:49.950955 27820 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 20 08:49:49.959943 master-0 kubenswrapper[27820]: W0320 08:49:49.950960 27820 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 20 08:49:49.959943 master-0 kubenswrapper[27820]: W0320 08:49:49.950963 27820 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 20 08:49:49.959943 master-0 kubenswrapper[27820]: W0320 08:49:49.950969 27820 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 20 08:49:49.960730 master-0 kubenswrapper[27820]: W0320 08:49:49.950973 27820 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 20 08:49:49.960730 master-0 kubenswrapper[27820]: W0320 08:49:49.950976 27820 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 20 08:49:49.960730 master-0 kubenswrapper[27820]: W0320 08:49:49.950980 27820 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 20 08:49:49.960730 master-0 kubenswrapper[27820]: W0320 08:49:49.950984 27820 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 20 08:49:49.960730 master-0 kubenswrapper[27820]: W0320 08:49:49.950987 27820 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 20 08:49:49.960730 master-0 kubenswrapper[27820]: W0320 08:49:49.950991 27820 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 20 08:49:49.960730 master-0 kubenswrapper[27820]: W0320 08:49:49.950995 27820 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 20 08:49:49.960730 master-0 kubenswrapper[27820]: W0320 08:49:49.950998 27820 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 20 08:49:49.960730 master-0 kubenswrapper[27820]: I0320 08:49:49.951005 27820 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false
ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 20 08:49:49.960730 master-0 kubenswrapper[27820]: W0320 08:49:49.951131 27820 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 20 08:49:49.960730 master-0 kubenswrapper[27820]: W0320 08:49:49.951139 27820 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 20 08:49:49.960730 master-0 kubenswrapper[27820]: W0320 08:49:49.951143 27820 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 20 08:49:49.960730 master-0 kubenswrapper[27820]: W0320 08:49:49.951148 27820 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 20 08:49:49.960730 master-0 kubenswrapper[27820]: W0320 08:49:49.951152 27820 feature_gate.go:330] unrecognized feature gate: Example
Mar 20 08:49:49.960730 master-0 kubenswrapper[27820]: W0320 08:49:49.951157 27820 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 20 08:49:49.961247 master-0 kubenswrapper[27820]: W0320 08:49:49.951161 27820 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 20 08:49:49.961247 master-0 kubenswrapper[27820]: W0320 08:49:49.951165 27820 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 20 08:49:49.961247 master-0 kubenswrapper[27820]: W0320 08:49:49.951168 27820 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 20 08:49:49.961247 master-0 kubenswrapper[27820]: W0320 08:49:49.951172 27820 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 20 08:49:49.961247 master-0 kubenswrapper[27820]: W0320 08:49:49.951176 27820 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 20 08:49:49.961247
master-0 kubenswrapper[27820]: W0320 08:49:49.951179 27820 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 20 08:49:49.961247 master-0 kubenswrapper[27820]: W0320 08:49:49.951183 27820 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 20 08:49:49.961247 master-0 kubenswrapper[27820]: W0320 08:49:49.951186 27820 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 20 08:49:49.961247 master-0 kubenswrapper[27820]: W0320 08:49:49.951191 27820 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 20 08:49:49.961247 master-0 kubenswrapper[27820]: W0320 08:49:49.951196 27820 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 20 08:49:49.961247 master-0 kubenswrapper[27820]: W0320 08:49:49.951200 27820 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 20 08:49:49.961247 master-0 kubenswrapper[27820]: W0320 08:49:49.951204 27820 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 20 08:49:49.961247 master-0 kubenswrapper[27820]: W0320 08:49:49.951208 27820 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 20 08:49:49.961247 master-0 kubenswrapper[27820]: W0320 08:49:49.951212 27820 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 20 08:49:49.961247 master-0 kubenswrapper[27820]: W0320 08:49:49.951216 27820 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 20 08:49:49.961247 master-0 kubenswrapper[27820]: W0320 08:49:49.951221 27820 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 20 08:49:49.961247 master-0 kubenswrapper[27820]: W0320 08:49:49.951224 27820 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 20 08:49:49.961247 master-0 kubenswrapper[27820]: W0320 08:49:49.951228 27820 feature_gate.go:330] unrecognized feature
gate: AWSEFSDriverVolumeMetrics
Mar 20 08:49:49.961247 master-0 kubenswrapper[27820]: W0320 08:49:49.951233 27820 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 20 08:49:49.961247 master-0 kubenswrapper[27820]: W0320 08:49:49.951237 27820 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 20 08:49:49.962056 master-0 kubenswrapper[27820]: W0320 08:49:49.951241 27820 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 20 08:49:49.962056 master-0 kubenswrapper[27820]: W0320 08:49:49.951245 27820 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 20 08:49:49.962056 master-0 kubenswrapper[27820]: W0320 08:49:49.951249 27820 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 20 08:49:49.962056 master-0 kubenswrapper[27820]: W0320 08:49:49.951257 27820 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 20 08:49:49.962056 master-0 kubenswrapper[27820]: W0320 08:49:49.951266 27820 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 20 08:49:49.962056 master-0 kubenswrapper[27820]: W0320 08:49:49.951270 27820 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 20 08:49:49.962056 master-0 kubenswrapper[27820]: W0320 08:49:49.951295 27820 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 20 08:49:49.962056 master-0 kubenswrapper[27820]: W0320 08:49:49.951302 27820 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 20 08:49:49.962056 master-0 kubenswrapper[27820]: W0320 08:49:49.951307 27820 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 20 08:49:49.962056 master-0 kubenswrapper[27820]: W0320 08:49:49.951312 27820 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 20 08:49:49.962056 master-0 kubenswrapper[27820]: W0320 08:49:49.951317 27820 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 20 08:49:49.962056 master-0 kubenswrapper[27820]: W0320 08:49:49.951321 27820 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 20 08:49:49.962056 master-0 kubenswrapper[27820]: W0320 08:49:49.951324 27820 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 20 08:49:49.962056 master-0 kubenswrapper[27820]: W0320 08:49:49.951329 27820 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 20 08:49:49.962056 master-0 kubenswrapper[27820]: W0320 08:49:49.951332 27820 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 20 08:49:49.962056 master-0 kubenswrapper[27820]: W0320 08:49:49.951336 27820 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 20 08:49:49.962056 master-0 kubenswrapper[27820]: W0320 08:49:49.951339 27820 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 20 08:49:49.962056 master-0 kubenswrapper[27820]: W0320 08:49:49.951343 27820 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 20 08:49:49.962056 master-0 kubenswrapper[27820]:
W0320 08:49:49.951348 27820 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 20 08:49:49.962056 master-0 kubenswrapper[27820]: W0320 08:49:49.951352 27820 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 20 08:49:49.962720 master-0 kubenswrapper[27820]: W0320 08:49:49.951357 27820 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 20 08:49:49.962720 master-0 kubenswrapper[27820]: W0320 08:49:49.951360 27820 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 20 08:49:49.962720 master-0 kubenswrapper[27820]: W0320 08:49:49.951364 27820 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 20 08:49:49.962720 master-0 kubenswrapper[27820]: W0320 08:49:49.951367 27820 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 20 08:49:49.962720 master-0 kubenswrapper[27820]: W0320 08:49:49.951375 27820 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 20 08:49:49.962720 master-0 kubenswrapper[27820]: W0320 08:49:49.951379 27820 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 20 08:49:49.962720 master-0 kubenswrapper[27820]: W0320 08:49:49.951383 27820 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 20 08:49:49.962720 master-0 kubenswrapper[27820]: W0320 08:49:49.951387 27820 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 20 08:49:49.962720 master-0 kubenswrapper[27820]: W0320 08:49:49.951391 27820 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 20 08:49:49.962720 master-0 kubenswrapper[27820]: W0320 08:49:49.951394 27820 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 20 08:49:49.962720 master-0 kubenswrapper[27820]: W0320 08:49:49.951400 27820 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 20 08:49:49.962720 master-0
kubenswrapper[27820]: W0320 08:49:49.951404 27820 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 20 08:49:49.962720 master-0 kubenswrapper[27820]: W0320 08:49:49.951408 27820 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 20 08:49:49.962720 master-0 kubenswrapper[27820]: W0320 08:49:49.951411 27820 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 20 08:49:49.962720 master-0 kubenswrapper[27820]: W0320 08:49:49.951416 27820 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 20 08:49:49.962720 master-0 kubenswrapper[27820]: W0320 08:49:49.951426 27820 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 20 08:49:49.962720 master-0 kubenswrapper[27820]: W0320 08:49:49.951432 27820 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 20 08:49:49.962720 master-0 kubenswrapper[27820]: W0320 08:49:49.951436 27820 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 20 08:49:49.962720 master-0 kubenswrapper[27820]: W0320 08:49:49.951440 27820 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 20 08:49:49.963175 master-0 kubenswrapper[27820]: W0320 08:49:49.951444 27820 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 20 08:49:49.963175 master-0 kubenswrapper[27820]: W0320 08:49:49.951449 27820 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 20 08:49:49.963175 master-0 kubenswrapper[27820]: W0320 08:49:49.951453 27820 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 20 08:49:49.963175 master-0 kubenswrapper[27820]: W0320 08:49:49.951456 27820 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 20 08:49:49.963175 master-0 kubenswrapper[27820]: W0320 08:49:49.951460 27820 feature_gate.go:330]
unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 20 08:49:49.963175 master-0 kubenswrapper[27820]: W0320 08:49:49.951464 27820 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 20 08:49:49.963175 master-0 kubenswrapper[27820]: W0320 08:49:49.951470 27820 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 20 08:49:49.963175 master-0 kubenswrapper[27820]: I0320 08:49:49.951477 27820 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 20 08:49:49.963175 master-0 kubenswrapper[27820]: I0320 08:49:49.951644 27820 server.go:940] "Client rotation is on, will bootstrap in background"
Mar 20 08:49:49.963175 master-0 kubenswrapper[27820]: I0320 08:49:49.953090 27820 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Mar 20 08:49:49.963175 master-0 kubenswrapper[27820]: I0320 08:49:49.953170 27820 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Mar 20 08:49:49.963175 master-0 kubenswrapper[27820]: I0320 08:49:49.953402 27820 server.go:997] "Starting client certificate rotation"
Mar 20 08:49:49.963175 master-0 kubenswrapper[27820]: I0320 08:49:49.953415 27820 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Mar 20 08:49:49.963531 master-0 kubenswrapper[27820]: I0320 08:49:49.953831 27820 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-03-21 08:25:28 +0000 UTC, rotation deadline is 2026-03-21 04:53:16.021308807 +0000 UTC
Mar 20 08:49:49.963531 master-0 kubenswrapper[27820]: I0320 08:49:49.953889 27820 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 20h3m26.067422663s for next certificate rotation
Mar 20 08:49:49.963531 master-0 kubenswrapper[27820]: I0320 08:49:49.954139 27820 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 20 08:49:49.963531 master-0 kubenswrapper[27820]: I0320 08:49:49.955444 27820 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 20 08:49:49.963531 master-0 kubenswrapper[27820]: I0320 08:49:49.957305 27820 log.go:25] "Validated CRI v1 runtime API"
Mar 20 08:49:49.963531 master-0 kubenswrapper[27820]: I0320 08:49:49.961133 27820 log.go:25] "Validated CRI v1 image API"
Mar 20 08:49:49.963531 master-0 kubenswrapper[27820]: I0320 08:49:49.962013 27820 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 20 08:49:49.971378 master-0 kubenswrapper[27820]: I0320 08:49:49.971338 27820 fs.go:135] Filesystem UUIDs: map[7B77-95E7:/dev/vda2 8bd1c714-85b3-42d8-843c-32eb4beee773:/dev/vda3 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4]
Mar 20 08:49:49.972110 master-0 kubenswrapper[27820]: I0320 08:49:49.971367 27820 fs.go:136] Filesystem partitions:
map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0118b40880c157c21da0a1b6b65535a3a28545387d34a792b84d5f5f7d802bb1/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0118b40880c157c21da0a1b6b65535a3a28545387d34a792b84d5f5f7d802bb1/userdata/shm major:0 minor:251 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/08f22a0ccc0a77a9d6926aff6fb98f22a2c178ca54d526014d0e05d9f976123d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/08f22a0ccc0a77a9d6926aff6fb98f22a2c178ca54d526014d0e05d9f976123d/userdata/shm major:0 minor:1202 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/0b7224d61042a39a60c82074ae340c4880414bef01c57e7834a8075a7d391421/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/0b7224d61042a39a60c82074ae340c4880414bef01c57e7834a8075a7d391421/userdata/shm major:0 minor:1070 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/14986cdcb6c65fcca4be3c338e4a013796b08052ed9fdf5beaaa06246a8fc6be/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/14986cdcb6c65fcca4be3c338e4a013796b08052ed9fdf5beaaa06246a8fc6be/userdata/shm major:0 minor:1015 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1522904bcce5d0ac5aef96a7d518d8795f633c0cc736ad3114aa64de5474f52b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1522904bcce5d0ac5aef96a7d518d8795f633c0cc736ad3114aa64de5474f52b/userdata/shm major:0 minor:235 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/1c0c62dc18b9dfbe34d230533e11381c4068e1290418832f6c146c6c5c6872ee/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1c0c62dc18b9dfbe34d230533e11381c4068e1290418832f6c146c6c5c6872ee/userdata/shm major:0 minor:1058 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1d602414649c8268857260746c9b07c7eebb871e3592e5e80020d1637e9816cc/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1d602414649c8268857260746c9b07c7eebb871e3592e5e80020d1637e9816cc/userdata/shm major:0 minor:374 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/23b5f0e312ee437adb179ea398b2301b1690487e9d814d24ef554192ded477e8/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/23b5f0e312ee437adb179ea398b2301b1690487e9d814d24ef554192ded477e8/userdata/shm major:0 minor:288 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/26ead2d551cb5e798df939fca56e343afb667dfeb7405f006d41371d076ea7ef/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/26ead2d551cb5e798df939fca56e343afb667dfeb7405f006d41371d076ea7ef/userdata/shm major:0 minor:54 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2eb15c3da7104afd61e8e0a9cecb48e57f16366430abff29d1fcba72d53fd3a2/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2eb15c3da7104afd61e8e0a9cecb48e57f16366430abff29d1fcba72d53fd3a2/userdata/shm major:0 minor:130 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/31a46ba310ff197c87c66f84e5bd99a13a3ff1f8cbacfdf28d2bf427d9553306/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/31a46ba310ff197c87c66f84e5bd99a13a3ff1f8cbacfdf28d2bf427d9553306/userdata/shm major:0 minor:142 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/32d9278f90869a47d37ec354771e3c987fb65e24d65a9e7aa9b31e8b1fade86f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/32d9278f90869a47d37ec354771e3c987fb65e24d65a9e7aa9b31e8b1fade86f/userdata/shm major:0 minor:825 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/33917a2945cfdd96fc5917acad69a7843047715ba145c81978cea2bef30f460e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/33917a2945cfdd96fc5917acad69a7843047715ba145c81978cea2bef30f460e/userdata/shm major:0 minor:555 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/36fd86042cdc5d322b686c2b108009cab15460fe5a8fde9f08be705f3ff47a25/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/36fd86042cdc5d322b686c2b108009cab15460fe5a8fde9f08be705f3ff47a25/userdata/shm major:0 minor:247 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/389639e7370bc064e8396447b56eef169b57f40bc06761ec99b4a5fb5deb56a5/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/389639e7370bc064e8396447b56eef169b57f40bc06761ec99b4a5fb5deb56a5/userdata/shm major:0 minor:447 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/39979795a082384fa347e48c6bcdc4249850e6dc951d407d07457e2b43d36f11/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/39979795a082384fa347e48c6bcdc4249850e6dc951d407d07457e2b43d36f11/userdata/shm major:0 minor:420 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/3e688aec660d80e985fc8687f7a00a0c0c268a922d791a77e1fea2fefa9b1c28/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3e688aec660d80e985fc8687f7a00a0c0c268a922d791a77e1fea2fefa9b1c28/userdata/shm major:0 minor:350 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/3e9fa4fb66ba86c033a4b55b0ef6ca5cbcdcfa8e9fc2ffaaf2fd90f6913d2947/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3e9fa4fb66ba86c033a4b55b0ef6ca5cbcdcfa8e9fc2ffaaf2fd90f6913d2947/userdata/shm major:0 minor:68 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/407f7a172ca7923af3036a3a5081e3f6bc925e32d3851562fb93dfcb79785b17/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/407f7a172ca7923af3036a3a5081e3f6bc925e32d3851562fb93dfcb79785b17/userdata/shm major:0 minor:60 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/45bf1a9ecebdad7d1d939a42ed79f1d565faa93da259016b6c3e11a9010e1c03/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/45bf1a9ecebdad7d1d939a42ed79f1d565faa93da259016b6c3e11a9010e1c03/userdata/shm major:0 minor:128 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/46c99f0233d1af208b38b52f2ff5b680b12b4851bb3db1577a37ab4de1879e97/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/46c99f0233d1af208b38b52f2ff5b680b12b4851bb3db1577a37ab4de1879e97/userdata/shm major:0 minor:1152 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/4767ac5e1fdc3320e004401bc470473fa3834d94268bcd37051a5ed0f54f6980/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4767ac5e1fdc3320e004401bc470473fa3834d94268bcd37051a5ed0f54f6980/userdata/shm major:0 minor:296 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/4a62432d7ca6978a89473ee0ca3560d8d6e151e4b44cc680fcbcde36344cda3f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4a62432d7ca6978a89473ee0ca3560d8d6e151e4b44cc680fcbcde36344cda3f/userdata/shm major:0 minor:1018 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/4aa19d8b0c30c05ccc496b8ab2be76d947099982ef9343125f8b5117bc386c97/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4aa19d8b0c30c05ccc496b8ab2be76d947099982ef9343125f8b5117bc386c97/userdata/shm major:0 minor:557 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/4e3989004e344d411038c9d1f6a6052a86aa8920b399e1afd650c22f18779f11/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4e3989004e344d411038c9d1f6a6052a86aa8920b399e1afd650c22f18779f11/userdata/shm major:0 minor:1020 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/54f91a8b386ea81f3c1ff44f7cbcccad1987fab184d5bfad4c46374f7827fa5c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/54f91a8b386ea81f3c1ff44f7cbcccad1987fab184d5bfad4c46374f7827fa5c/userdata/shm major:0 minor:993 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/5a96373b7ec998e4c12966e11a5d5e48263b669f4268036f6aff8f1f1199dfa5/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/5a96373b7ec998e4c12966e11a5d5e48263b669f4268036f6aff8f1f1199dfa5/userdata/shm major:0 minor:829 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/5baf379ef595e5427aa5f7376ffa996583f39c05c81ca9fe28df973ed2c426be/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/5baf379ef595e5427aa5f7376ffa996583f39c05c81ca9fe28df973ed2c426be/userdata/shm major:0 minor:811 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/5c5ae9bfcc3ce85bdfe3cccc194f20c35db6cc7998e4967e566b59f8729c9691/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/5c5ae9bfcc3ce85bdfe3cccc194f20c35db6cc7998e4967e566b59f8729c9691/userdata/shm major:0 minor:490 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/6018dc62d387a9b77f99180b9b59d3182e437f628eb7fce91bb3764fe4982ba6/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6018dc62d387a9b77f99180b9b59d3182e437f628eb7fce91bb3764fe4982ba6/userdata/shm major:0 minor:950 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/60a1d6091214ba0d82b66a2af63314a1cb99c1cda6a15d65a6539891ce5e3510/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/60a1d6091214ba0d82b66a2af63314a1cb99c1cda6a15d65a6539891ce5e3510/userdata/shm major:0 minor:567 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/64ca7ad287a18077a9681b1e546ec20fe155067ef4ae153360b9f6ad5ecbcb02/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/64ca7ad287a18077a9681b1e546ec20fe155067ef4ae153360b9f6ad5ecbcb02/userdata/shm major:0 minor:1092 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/66f60747a10071044a32fdd3eb286bdb47b644ac36047fe8a2be062c88967367/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/66f60747a10071044a32fdd3eb286bdb47b644ac36047fe8a2be062c88967367/userdata/shm major:0 minor:835 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/68a469f8af4eca3cd7046b1dcc688320cbfedeec29ee252f144fe6c1f8fce66a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/68a469f8af4eca3cd7046b1dcc688320cbfedeec29ee252f144fe6c1f8fce66a/userdata/shm major:0 minor:1106 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6b78ee1b02c98b4ad9c3b944fdd43e9881371557e0d7b10564d5be8bd02396af/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6b78ee1b02c98b4ad9c3b944fdd43e9881371557e0d7b10564d5be8bd02396af/userdata/shm major:0 minor:649 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/702713f2f96146013bc9672b7b029fe7154bd722d3f9153e565a46fd2b9a50ba/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/702713f2f96146013bc9672b7b029fe7154bd722d3f9153e565a46fd2b9a50ba/userdata/shm major:0 minor:444 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7551d0384a0ca5d55a0e01a66e0811b519b2e2c926c179ce2206a11d57d556c3/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7551d0384a0ca5d55a0e01a66e0811b519b2e2c926c179ce2206a11d57d556c3/userdata/shm major:0 minor:233 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7b1754d73309bcf271978edbc6de885b4c5c9259799d13505300a0b3d8fb40d5/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7b1754d73309bcf271978edbc6de885b4c5c9259799d13505300a0b3d8fb40d5/userdata/shm major:0 minor:579 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7c5fe5a51a0646232d6aeb7457e06eaa7bb1c6097a67919150bb37fc9d450327/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7c5fe5a51a0646232d6aeb7457e06eaa7bb1c6097a67919150bb37fc9d450327/userdata/shm major:0 minor:266 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7ed933ad5ab2402e750d28bcdcc40b75fc2d12d35fd030d2dca7b16f6da20585/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7ed933ad5ab2402e750d28bcdcc40b75fc2d12d35fd030d2dca7b16f6da20585/userdata/shm major:0 minor:669 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/8278eeebf68b018edbef1798293f552dd9859c6fa057a3f48528a25426e7abf3/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8278eeebf68b018edbef1798293f552dd9859c6fa057a3f48528a25426e7abf3/userdata/shm major:0 minor:309 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/86c6cd594ea2c7db973b52489f7bf76530d2045045df7dd60fb29d21f2a61ca6/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/86c6cd594ea2c7db973b52489f7bf76530d2045045df7dd60fb29d21f2a61ca6/userdata/shm major:0 minor:1191 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/88728a20ccc0653acaf97665b53dae69b14ad65649feac36dc7ea652a98e2296/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/88728a20ccc0653acaf97665b53dae69b14ad65649feac36dc7ea652a98e2296/userdata/shm major:0 minor:510 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/8c5a039db74fb9e788a5aa01defc8a1f9fd1088c2644177e24de4994f3a27cd3/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8c5a039db74fb9e788a5aa01defc8a1f9fd1088c2644177e24de4994f3a27cd3/userdata/shm major:0 minor:807 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/91fac3ac168ae944870f9f36626feeac950c7dd66eb021a2c366427ace9d7f09/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/91fac3ac168ae944870f9f36626feeac950c7dd66eb021a2c366427ace9d7f09/userdata/shm major:0 minor:647 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/9540823dea8e0108833218a65d98423f8d996d846bbeaa47cddd4e7ba48fd916/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9540823dea8e0108833218a65d98423f8d996d846bbeaa47cddd4e7ba48fd916/userdata/shm major:0 minor:284 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/9d5e5cd531f78ff97bd0331258baf0fd5a066b5864af8128f7ac14fe1eeaebc5/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9d5e5cd531f78ff97bd0331258baf0fd5a066b5864af8128f7ac14fe1eeaebc5/userdata/shm major:0 minor:41 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/a48f9dbca67b195cdbe5106389856adfd54422aed83fc92bc09057a87eaa2faf/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a48f9dbca67b195cdbe5106389856adfd54422aed83fc92bc09057a87eaa2faf/userdata/shm major:0 minor:559 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a5a71eafba7fd094c1b9785d7c1fd9e98b46812d646ac6843a8a763f472e8750/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a5a71eafba7fd094c1b9785d7c1fd9e98b46812d646ac6843a8a763f472e8750/userdata/shm major:0 minor:278 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a7182dd72430d58b49f5e018c12acac4da1770843a5e54cf2decb77fe298b875/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a7182dd72430d58b49f5e018c12acac4da1770843a5e54cf2decb77fe298b875/userdata/shm major:0 minor:591 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a7302efa9940a01d2a7da47b5029499cf2581b6adb8641d99af963b893a64957/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a7302efa9940a01d2a7da47b5029499cf2581b6adb8641d99af963b893a64957/userdata/shm major:0 minor:42 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a9a866857afbf6e04b88e6394f6ac26a86a5cc6b5f41292fe9d43cc355b22810/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a9a866857afbf6e04b88e6394f6ac26a86a5cc6b5f41292fe9d43cc355b22810/userdata/shm major:0 minor:774 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/aab851b1602b7dcc6e5620b34b9265b9ec9a6fe42b3748c9be972ac30f7ef4fd/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/aab851b1602b7dcc6e5620b34b9265b9ec9a6fe42b3748c9be972ac30f7ef4fd/userdata/shm major:0 minor:564 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/b1a6bfe0069db4370471806f444b8cbb38ac33f0aab60a3239aafba8901aaf7e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b1a6bfe0069db4370471806f444b8cbb38ac33f0aab60a3239aafba8901aaf7e/userdata/shm major:0 minor:553 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b3076d6176cd94c8a21c722732d97de0437f9e83160ea4c57d3d59e61e4a74e3/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b3076d6176cd94c8a21c722732d97de0437f9e83160ea4c57d3d59e61e4a74e3/userdata/shm major:0 minor:231 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b36d4f6b43dcaa09ca3c55b7c20167210b34481854d09dfefb8adca147e001f9/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b36d4f6b43dcaa09ca3c55b7c20167210b34481854d09dfefb8adca147e001f9/userdata/shm major:0 minor:836 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b7b1e72d13c6e7c1a14867c5547562b82b9b40ac636f0328d795dcff8a14b2b8/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b7b1e72d13c6e7c1a14867c5547562b82b9b40ac636f0328d795dcff8a14b2b8/userdata/shm major:0 minor:776 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b7d9c365d304102d31836e754ae3ccd0da492c6691ee23225b141aea9b82a5d5/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b7d9c365d304102d31836e754ae3ccd0da492c6691ee23225b141aea9b82a5d5/userdata/shm major:0 minor:1021 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b9cc3cdb71ca86a1d6eb5065d5ba830d901adeb7f41acd8f39de6f44ff6001ce/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b9cc3cdb71ca86a1d6eb5065d5ba830d901adeb7f41acd8f39de6f44ff6001ce/userdata/shm major:0 minor:381 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/bce60995e913b204c4470a4a4b36d406c096a66e95b110179e1a1c0fbcc39e0a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/bce60995e913b204c4470a4a4b36d406c096a66e95b110179e1a1c0fbcc39e0a/userdata/shm major:0 minor:808 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c1a1f09a0076728a7605f14aa2f5e1e4e67f07959fea6d30401da7eae836cc1d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c1a1f09a0076728a7605f14aa2f5e1e4e67f07959fea6d30401da7eae836cc1d/userdata/shm major:0 minor:263 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c29f56d4ea9bf3bce066e5fba5216f6d81c3f45eb82e43475a2e438e6dc2d99e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c29f56d4ea9bf3bce066e5fba5216f6d81c3f45eb82e43475a2e438e6dc2d99e/userdata/shm major:0 minor:260 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c36c31fbbcf87c5d54cc8e014278bdb215440e9d5e4a9526984baeadd5fbfa6f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c36c31fbbcf87c5d54cc8e014278bdb215440e9d5e4a9526984baeadd5fbfa6f/userdata/shm major:0 minor:668 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c47fa190606cd38023fc533f65cb7825afa7c8fefd6bf8e60afbd6d31f3e48e7/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c47fa190606cd38023fc533f65cb7825afa7c8fefd6bf8e60afbd6d31f3e48e7/userdata/shm major:0 minor:119 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c6de2a3b0d9d7c8a3099b56864fbf63ad49e41112ad7b49c1d03c4402aae817a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c6de2a3b0d9d7c8a3099b56864fbf63ad49e41112ad7b49c1d03c4402aae817a/userdata/shm major:0 minor:560 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/cc5631c5f457937102021d26dc57a94d8eb433d4f0008126fd2dc1af0f5f1218/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/cc5631c5f457937102021d26dc57a94d8eb433d4f0008126fd2dc1af0f5f1218/userdata/shm major:0 minor:1111 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ccabd735cd283aaf872e4d4c6439fc21d25d047aca8d8580112cec5049c44ca7/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ccabd735cd283aaf872e4d4c6439fc21d25d047aca8d8580112cec5049c44ca7/userdata/shm major:0 minor:898 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/cf10038472bbf516505fe96b60deacd7fa47b423ffbd5ce932f981e42d79741e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/cf10038472bbf516505fe96b60deacd7fa47b423ffbd5ce932f981e42d79741e/userdata/shm major:0 minor:449 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d80ff220fd3e8f28273c0ca55518a106e85c715a741683e145d0d50f2a0d250e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d80ff220fd3e8f28273c0ca55518a106e85c715a741683e145d0d50f2a0d250e/userdata/shm major:0 minor:641 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e19e3ca7f7f87202999ccf51b5e641a2b701234ac17e2a8733f102ed0960e44b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e19e3ca7f7f87202999ccf51b5e641a2b701234ac17e2a8733f102ed0960e44b/userdata/shm major:0 minor:443 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e752098827604ca63ef6b84cdd36804c65e5654f7ec3055912844eb8b6ef68db/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e752098827604ca63ef6b84cdd36804c65e5654f7ec3055912844eb8b6ef68db/userdata/shm major:0 minor:838 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/e9929238f90c11cab18d39ce438681158ac972414d58be4a31cbc595b70dfab3/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e9929238f90c11cab18d39ce438681158ac972414d58be4a31cbc595b70dfab3/userdata/shm major:0 minor:639 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ebb4000c1fd7b5e5958a1f721b8b2c7b7ad72ac397418d062d7c94f2eacacc8d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ebb4000c1fd7b5e5958a1f721b8b2c7b7ad72ac397418d062d7c94f2eacacc8d/userdata/shm major:0 minor:339 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ef964aa716088965516a6b12f87facd648776f7eece032982375b00853e3a703/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ef964aa716088965516a6b12f87facd648776f7eece032982375b00853e3a703/userdata/shm major:0 minor:501 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f13b0447f1cf8ebd279a6530a199c8c8c26e292eacc831f21854583254577b3a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f13b0447f1cf8ebd279a6530a199c8c8c26e292eacc831f21854583254577b3a/userdata/shm major:0 minor:276 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f1cd5ceb84540f7c9e7a009d076e0390ec979230bb207211f3a50905c2ec9f83/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f1cd5ceb84540f7c9e7a009d076e0390ec979230bb207211f3a50905c2ec9f83/userdata/shm major:0 minor:105 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f50b5162b61414bd7ea44a7ec549d8b7fce7a639d564096b29f4d95c071c3604/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f50b5162b61414bd7ea44a7ec549d8b7fce7a639d564096b29f4d95c071c3604/userdata/shm major:0 minor:100 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/f8f3e1fa6ad1dbd5474f44502cbcf37e1e64719e20d78c379498d77edb6fab10/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f8f3e1fa6ad1dbd5474f44502cbcf37e1e64719e20d78c379498d77edb6fab10/userdata/shm major:0 minor:552 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/00350ac7-b40a-4459-b94c-a37d7b613645/volumes/kubernetes.io~projected/kube-api-access-b67hn:{mountpoint:/var/lib/kubelet/pods/00350ac7-b40a-4459-b94c-a37d7b613645/volumes/kubernetes.io~projected/kube-api-access-b67hn major:0 minor:123 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/00350ac7-b40a-4459-b94c-a37d7b613645/volumes/kubernetes.io~secret/metrics-certs:{mountpoint:/var/lib/kubelet/pods/00350ac7-b40a-4459-b94c-a37d7b613645/volumes/kubernetes.io~secret/metrics-certs major:0 minor:551 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/04466971-127b-403e-af45-dad97b6e0c87/volumes/kubernetes.io~projected/kube-api-access-wkh2f:{mountpoint:/var/lib/kubelet/pods/04466971-127b-403e-af45-dad97b6e0c87/volumes/kubernetes.io~projected/kube-api-access-wkh2f major:0 minor:1151 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/04466971-127b-403e-af45-dad97b6e0c87/volumes/kubernetes.io~secret/client-ca-bundle:{mountpoint:/var/lib/kubelet/pods/04466971-127b-403e-af45-dad97b6e0c87/volumes/kubernetes.io~secret/client-ca-bundle major:0 minor:1149 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/04466971-127b-403e-af45-dad97b6e0c87/volumes/kubernetes.io~secret/secret-metrics-client-certs:{mountpoint:/var/lib/kubelet/pods/04466971-127b-403e-af45-dad97b6e0c87/volumes/kubernetes.io~secret/secret-metrics-client-certs major:0 minor:1150 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/04466971-127b-403e-af45-dad97b6e0c87/volumes/kubernetes.io~secret/secret-metrics-server-tls:{mountpoint:/var/lib/kubelet/pods/04466971-127b-403e-af45-dad97b6e0c87/volumes/kubernetes.io~secret/secret-metrics-server-tls major:0 
minor:1145 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/06bf3fa7-4a9c-4e7f-aa6b-4d4f614ea047/volumes/kubernetes.io~projected/kube-api-access-ncztx:{mountpoint:/var/lib/kubelet/pods/06bf3fa7-4a9c-4e7f-aa6b-4d4f614ea047/volumes/kubernetes.io~projected/kube-api-access-ncztx major:0 minor:1012 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/08d9196b-b68f-421b-8754-bfbaa4020a97/volumes/kubernetes.io~projected/ca-certs:{mountpoint:/var/lib/kubelet/pods/08d9196b-b68f-421b-8754-bfbaa4020a97/volumes/kubernetes.io~projected/ca-certs major:0 minor:620 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/08d9196b-b68f-421b-8754-bfbaa4020a97/volumes/kubernetes.io~projected/kube-api-access-tvqv5:{mountpoint:/var/lib/kubelet/pods/08d9196b-b68f-421b-8754-bfbaa4020a97/volumes/kubernetes.io~projected/kube-api-access-tvqv5 major:0 minor:621 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/08d9196b-b68f-421b-8754-bfbaa4020a97/volumes/kubernetes.io~secret/catalogserver-certs:{mountpoint:/var/lib/kubelet/pods/08d9196b-b68f-421b-8754-bfbaa4020a97/volumes/kubernetes.io~secret/catalogserver-certs major:0 minor:619 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/09a5682c-4f13-4b8c-8179-3e6dfa8f98db/volumes/kubernetes.io~projected/kube-api-access-8xv94:{mountpoint:/var/lib/kubelet/pods/09a5682c-4f13-4b8c-8179-3e6dfa8f98db/volumes/kubernetes.io~projected/kube-api-access-8xv94 major:0 minor:216 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/09a5682c-4f13-4b8c-8179-3e6dfa8f98db/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/09a5682c-4f13-4b8c-8179-3e6dfa8f98db/volumes/kubernetes.io~secret/serving-cert major:0 minor:209 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0ad95adc-2e0f-4e95-94e7-66e6d240a930/volumes/kubernetes.io~projected/kube-api-access-5lnpz:{mountpoint:/var/lib/kubelet/pods/0ad95adc-2e0f-4e95-94e7-66e6d240a930/volumes/kubernetes.io~projected/kube-api-access-5lnpz major:0 minor:1069 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/0ad95adc-2e0f-4e95-94e7-66e6d240a930/volumes/kubernetes.io~secret/prometheus-operator-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/0ad95adc-2e0f-4e95-94e7-66e6d240a930/volumes/kubernetes.io~secret/prometheus-operator-kube-rbac-proxy-config major:0 minor:1064 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0ad95adc-2e0f-4e95-94e7-66e6d240a930/volumes/kubernetes.io~secret/prometheus-operator-tls:{mountpoint:/var/lib/kubelet/pods/0ad95adc-2e0f-4e95-94e7-66e6d240a930/volumes/kubernetes.io~secret/prometheus-operator-tls major:0 minor:1068 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0cb6d987-4b59-4fd9-889a-3250c12a726c/volumes/kubernetes.io~projected/kube-api-access-v29ws:{mountpoint:/var/lib/kubelet/pods/0cb6d987-4b59-4fd9-889a-3250c12a726c/volumes/kubernetes.io~projected/kube-api-access-v29ws major:0 minor:788 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0cb6d987-4b59-4fd9-889a-3250c12a726c/volumes/kubernetes.io~secret/apiservice-cert:{mountpoint:/var/lib/kubelet/pods/0cb6d987-4b59-4fd9-889a-3250c12a726c/volumes/kubernetes.io~secret/apiservice-cert major:0 minor:787 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0cb6d987-4b59-4fd9-889a-3250c12a726c/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/0cb6d987-4b59-4fd9-889a-3250c12a726c/volumes/kubernetes.io~secret/webhook-cert major:0 minor:789 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0e79950f-50a5-46ec-b836-7a35dcce2851/volumes/kubernetes.io~projected/kube-api-access-rdsv9:{mountpoint:/var/lib/kubelet/pods/0e79950f-50a5-46ec-b836-7a35dcce2851/volumes/kubernetes.io~projected/kube-api-access-rdsv9 major:0 minor:261 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0e79950f-50a5-46ec-b836-7a35dcce2851/volumes/kubernetes.io~secret/package-server-manager-serving-cert:{mountpoint:/var/lib/kubelet/pods/0e79950f-50a5-46ec-b836-7a35dcce2851/volumes/kubernetes.io~secret/package-server-manager-serving-cert major:0 minor:549 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/0f725c4a-234c-44e9-95f2-73f31d2b0fd3/volumes/kubernetes.io~projected/kube-api-access-r22fm:{mountpoint:/var/lib/kubelet/pods/0f725c4a-234c-44e9-95f2-73f31d2b0fd3/volumes/kubernetes.io~projected/kube-api-access-r22fm major:0 minor:312 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0f725c4a-234c-44e9-95f2-73f31d2b0fd3/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/0f725c4a-234c-44e9-95f2-73f31d2b0fd3/volumes/kubernetes.io~secret/proxy-tls major:0 minor:311 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/123f1ecb-cc03-462b-b76f-7251bf69d3d6/volumes/kubernetes.io~projected/kube-api-access-dtt44:{mountpoint:/var/lib/kubelet/pods/123f1ecb-cc03-462b-b76f-7251bf69d3d6/volumes/kubernetes.io~projected/kube-api-access-dtt44 major:0 minor:1090 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/123f1ecb-cc03-462b-b76f-7251bf69d3d6/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/123f1ecb-cc03-462b-b76f-7251bf69d3d6/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config major:0 minor:1088 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/123f1ecb-cc03-462b-b76f-7251bf69d3d6/volumes/kubernetes.io~secret/node-exporter-tls:{mountpoint:/var/lib/kubelet/pods/123f1ecb-cc03-462b-b76f-7251bf69d3d6/volumes/kubernetes.io~secret/node-exporter-tls major:0 minor:1097 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/14ef046f-b284-457f-ad7a-b7958cb82dd5/volumes/kubernetes.io~secret/tls-certificates:{mountpoint:/var/lib/kubelet/pods/14ef046f-b284-457f-ad7a-b7958cb82dd5/volumes/kubernetes.io~secret/tls-certificates major:0 minor:1009 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/14fe2256-ce3d-4c70-9e6b-0e3c7d4d96dc/volumes/kubernetes.io~projected/kube-api-access-hlgd7:{mountpoint:/var/lib/kubelet/pods/14fe2256-ce3d-4c70-9e6b-0e3c7d4d96dc/volumes/kubernetes.io~projected/kube-api-access-hlgd7 major:0 minor:833 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/14fe2256-ce3d-4c70-9e6b-0e3c7d4d96dc/volumes/kubernetes.io~secret/machine-approver-tls:{mountpoint:/var/lib/kubelet/pods/14fe2256-ce3d-4c70-9e6b-0e3c7d4d96dc/volumes/kubernetes.io~secret/machine-approver-tls major:0 minor:777 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1746482a-d1a3-4eac-8bc9-643b6af75163/volumes/kubernetes.io~projected/kube-api-access-2dkqm:{mountpoint:/var/lib/kubelet/pods/1746482a-d1a3-4eac-8bc9-643b6af75163/volumes/kubernetes.io~projected/kube-api-access-2dkqm major:0 minor:380 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1746482a-d1a3-4eac-8bc9-643b6af75163/volumes/kubernetes.io~secret/signing-key:{mountpoint:/var/lib/kubelet/pods/1746482a-d1a3-4eac-8bc9-643b6af75163/volumes/kubernetes.io~secret/signing-key major:0 minor:379 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1ae4d0d7-67e6-4e0c-9265-8e48ac2d4cbf/volumes/kubernetes.io~projected/kube-api-access-fvxjl:{mountpoint:/var/lib/kubelet/pods/1ae4d0d7-67e6-4e0c-9265-8e48ac2d4cbf/volumes/kubernetes.io~projected/kube-api-access-fvxjl major:0 minor:572 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/20ff930f-ec0d-40ed-a879-1546691f685d/volumes/kubernetes.io~projected/kube-api-access-d5v7l:{mountpoint:/var/lib/kubelet/pods/20ff930f-ec0d-40ed-a879-1546691f685d/volumes/kubernetes.io~projected/kube-api-access-d5v7l major:0 minor:253 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/20ff930f-ec0d-40ed-a879-1546691f685d/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/20ff930f-ec0d-40ed-a879-1546691f685d/volumes/kubernetes.io~secret/serving-cert major:0 minor:223 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/210dd7f0-d1c0-407a-b89b-f11ef605e5df/volumes/kubernetes.io~projected/kube-api-access-w4sfm:{mountpoint:/var/lib/kubelet/pods/210dd7f0-d1c0-407a-b89b-f11ef605e5df/volumes/kubernetes.io~projected/kube-api-access-w4sfm major:0 minor:125 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/210dd7f0-d1c0-407a-b89b-f11ef605e5df/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert:{mountpoint:/var/lib/kubelet/pods/210dd7f0-d1c0-407a-b89b-f11ef605e5df/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert major:0 minor:124 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/22f85e98-eb36-46b2-ab5d-7c21e060cba5/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/22f85e98-eb36-46b2-ab5d-7c21e060cba5/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:214 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/22f85e98-eb36-46b2-ab5d-7c21e060cba5/volumes/kubernetes.io~projected/kube-api-access-8qqcw:{mountpoint:/var/lib/kubelet/pods/22f85e98-eb36-46b2-ab5d-7c21e060cba5/volumes/kubernetes.io~projected/kube-api-access-8qqcw major:0 minor:215 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/22f85e98-eb36-46b2-ab5d-7c21e060cba5/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/22f85e98-eb36-46b2-ab5d-7c21e060cba5/volumes/kubernetes.io~secret/metrics-tls major:0 minor:437 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/22ff82cf-0d7d-4955-9b7c-97757acbc021/volumes/kubernetes.io~projected/kube-api-access-sglvd:{mountpoint:/var/lib/kubelet/pods/22ff82cf-0d7d-4955-9b7c-97757acbc021/volumes/kubernetes.io~projected/kube-api-access-sglvd major:0 minor:104 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/23003a2f-2053-47cc-8133-23eb886d4da0/volumes/kubernetes.io~projected/kube-api-access-q7gdm:{mountpoint:/var/lib/kubelet/pods/23003a2f-2053-47cc-8133-23eb886d4da0/volumes/kubernetes.io~projected/kube-api-access-q7gdm major:0 minor:257 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/23003a2f-2053-47cc-8133-23eb886d4da0/volumes/kubernetes.io~secret/marketplace-operator-metrics:{mountpoint:/var/lib/kubelet/pods/23003a2f-2053-47cc-8133-23eb886d4da0/volumes/kubernetes.io~secret/marketplace-operator-metrics major:0 minor:546 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/240ba61a-e439-4f94-b9b3-7903b9b1bc05/volumes/kubernetes.io~projected/kube-api-access-92pwh:{mountpoint:/var/lib/kubelet/pods/240ba61a-e439-4f94-b9b3-7903b9b1bc05/volumes/kubernetes.io~projected/kube-api-access-92pwh major:0 minor:773 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/240ba61a-e439-4f94-b9b3-7903b9b1bc05/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/240ba61a-e439-4f94-b9b3-7903b9b1bc05/volumes/kubernetes.io~secret/serving-cert major:0 minor:734 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2d125bc5-08ce-434a-bde7-0ba8fc0169ea/volumes/kubernetes.io~projected/kube-api-access-hmb9v:{mountpoint:/var/lib/kubelet/pods/2d125bc5-08ce-434a-bde7-0ba8fc0169ea/volumes/kubernetes.io~projected/kube-api-access-hmb9v major:0 minor:804 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2d125bc5-08ce-434a-bde7-0ba8fc0169ea/volumes/kubernetes.io~secret/cert:{mountpoint:/var/lib/kubelet/pods/2d125bc5-08ce-434a-bde7-0ba8fc0169ea/volumes/kubernetes.io~secret/cert major:0 minor:784 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2ef691ec-d1f0-4c97-97e4-4aa7a6c0a86c/volumes/kubernetes.io~projected/kube-api-access-rgl8m:{mountpoint:/var/lib/kubelet/pods/2ef691ec-d1f0-4c97-97e4-4aa7a6c0a86c/volumes/kubernetes.io~projected/kube-api-access-rgl8m major:0 minor:229 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2ef691ec-d1f0-4c97-97e4-4aa7a6c0a86c/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/2ef691ec-d1f0-4c97-97e4-4aa7a6c0a86c/volumes/kubernetes.io~secret/serving-cert major:0 minor:221 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2faf85a2-29bb-4275-a12b-0ef1663a4f0d/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/2faf85a2-29bb-4275-a12b-0ef1663a4f0d/volumes/kubernetes.io~projected/kube-api-access major:0 minor:228 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/2faf85a2-29bb-4275-a12b-0ef1663a4f0d/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/2faf85a2-29bb-4275-a12b-0ef1663a4f0d/volumes/kubernetes.io~secret/serving-cert major:0 minor:224 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3065e4b4-4493-41ce-b9d2-89315475f74f/volumes/kubernetes.io~projected/kube-api-access-wpr8b:{mountpoint:/var/lib/kubelet/pods/3065e4b4-4493-41ce-b9d2-89315475f74f/volumes/kubernetes.io~projected/kube-api-access-wpr8b major:0 minor:236 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3065e4b4-4493-41ce-b9d2-89315475f74f/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/3065e4b4-4493-41ce-b9d2-89315475f74f/volumes/kubernetes.io~secret/serving-cert major:0 minor:220 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/41253bde-5d09-4ff0-8e7c-4a21fe2b7106/volumes/kubernetes.io~projected/kube-api-access-dbtnq:{mountpoint:/var/lib/kubelet/pods/41253bde-5d09-4ff0-8e7c-4a21fe2b7106/volumes/kubernetes.io~projected/kube-api-access-dbtnq major:0 minor:556 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/41253bde-5d09-4ff0-8e7c-4a21fe2b7106/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/41253bde-5d09-4ff0-8e7c-4a21fe2b7106/volumes/kubernetes.io~secret/metrics-tls major:0 minor:578 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/41ac891d-b41d-43c4-be46-35f39671477a/volumes/kubernetes.io~projected/kube-api-access-zpksq:{mountpoint:/var/lib/kubelet/pods/41ac891d-b41d-43c4-be46-35f39671477a/volumes/kubernetes.io~projected/kube-api-access-zpksq major:0 minor:735 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/41ac891d-b41d-43c4-be46-35f39671477a/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/41ac891d-b41d-43c4-be46-35f39671477a/volumes/kubernetes.io~secret/serving-cert major:0 minor:492 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/44bc88d8-9e01-4521-a704-85d9ca095baa/volumes/kubernetes.io~projected/kube-api-access-hdqzn:{mountpoint:/var/lib/kubelet/pods/44bc88d8-9e01-4521-a704-85d9ca095baa/volumes/kubernetes.io~projected/kube-api-access-hdqzn major:0 minor:1089 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/44bc88d8-9e01-4521-a704-85d9ca095baa/volumes/kubernetes.io~secret/kube-state-metrics-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/44bc88d8-9e01-4521-a704-85d9ca095baa/volumes/kubernetes.io~secret/kube-state-metrics-kube-rbac-proxy-config major:0 minor:1082 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/44bc88d8-9e01-4521-a704-85d9ca095baa/volumes/kubernetes.io~secret/kube-state-metrics-tls:{mountpoint:/var/lib/kubelet/pods/44bc88d8-9e01-4521-a704-85d9ca095baa/volumes/kubernetes.io~secret/kube-state-metrics-tls major:0 minor:1096 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/45b3c788-eb83-448a-bc60-90b8ace28382/volumes/kubernetes.io~projected/kube-api-access-7pcbj:{mountpoint:/var/lib/kubelet/pods/45b3c788-eb83-448a-bc60-90b8ace28382/volumes/kubernetes.io~projected/kube-api-access-7pcbj major:0 minor:1187 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4c9b8a0c-1ead-49b5-8b05-68e6f25ddefc/volumes/kubernetes.io~projected/kube-api-access-mmk45:{mountpoint:/var/lib/kubelet/pods/4c9b8a0c-1ead-49b5-8b05-68e6f25ddefc/volumes/kubernetes.io~projected/kube-api-access-mmk45 major:0 minor:1017 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4c9b8a0c-1ead-49b5-8b05-68e6f25ddefc/volumes/kubernetes.io~secret/cert:{mountpoint:/var/lib/kubelet/pods/4c9b8a0c-1ead-49b5-8b05-68e6f25ddefc/volumes/kubernetes.io~secret/cert major:0 minor:1014 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4f6c819a-5074-4d29-84c8-e187528ad757/volumes/kubernetes.io~projected/kube-api-access-mm9l9:{mountpoint:/var/lib/kubelet/pods/4f6c819a-5074-4d29-84c8-e187528ad757/volumes/kubernetes.io~projected/kube-api-access-mm9l9 major:0 minor:622 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/56970553-2ac8-4cb5-a12a-b7c1e777c587/volumes/kubernetes.io~projected/kube-api-access-zrbnx:{mountpoint:/var/lib/kubelet/pods/56970553-2ac8-4cb5-a12a-b7c1e777c587/volumes/kubernetes.io~projected/kube-api-access-zrbnx major:0 minor:732 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/56970553-2ac8-4cb5-a12a-b7c1e777c587/volumes/kubernetes.io~secret/samples-operator-tls:{mountpoint:/var/lib/kubelet/pods/56970553-2ac8-4cb5-a12a-b7c1e777c587/volumes/kubernetes.io~secret/samples-operator-tls major:0 minor:725 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5707066a-bd66-41bc-8cea-cff1630ab5ee/volumes/kubernetes.io~projected/kube-api-access-2dkgv:{mountpoint:/var/lib/kubelet/pods/5707066a-bd66-41bc-8cea-cff1630ab5ee/volumes/kubernetes.io~projected/kube-api-access-2dkgv major:0 minor:213 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5707066a-bd66-41bc-8cea-cff1630ab5ee/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls:{mountpoint:/var/lib/kubelet/pods/5707066a-bd66-41bc-8cea-cff1630ab5ee/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls major:0 minor:550 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/57189f7c-5987-457d-a299-0a6b9bcb3e24/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/57189f7c-5987-457d-a299-0a6b9bcb3e24/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:244 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/57189f7c-5987-457d-a299-0a6b9bcb3e24/volumes/kubernetes.io~projected/kube-api-access-5r8zt:{mountpoint:/var/lib/kubelet/pods/57189f7c-5987-457d-a299-0a6b9bcb3e24/volumes/kubernetes.io~projected/kube-api-access-5r8zt major:0 minor:230 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/57189f7c-5987-457d-a299-0a6b9bcb3e24/volumes/kubernetes.io~secret/image-registry-operator-tls:{mountpoint:/var/lib/kubelet/pods/57189f7c-5987-457d-a299-0a6b9bcb3e24/volumes/kubernetes.io~secret/image-registry-operator-tls major:0 minor:441 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/581a8be2-d16c-4fd8-b051-214bd60a2a91/volumes/kubernetes.io~projected/kube-api-access-ssmph:{mountpoint:/var/lib/kubelet/pods/581a8be2-d16c-4fd8-b051-214bd60a2a91/volumes/kubernetes.io~projected/kube-api-access-ssmph major:0 minor:728 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/581a8be2-d16c-4fd8-b051-214bd60a2a91/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/581a8be2-d16c-4fd8-b051-214bd60a2a91/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert major:0 minor:666 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6163bd4b-dc83-4e83-8590-5ac4753bda1c/volumes/kubernetes.io~projected/kube-api-access-zbzl9:{mountpoint:/var/lib/kubelet/pods/6163bd4b-dc83-4e83-8590-5ac4753bda1c/volumes/kubernetes.io~projected/kube-api-access-zbzl9 major:0 minor:897 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6163bd4b-dc83-4e83-8590-5ac4753bda1c/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls:{mountpoint:/var/lib/kubelet/pods/6163bd4b-dc83-4e83-8590-5ac4753bda1c/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls major:0 minor:834 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/61ab4d32-c732-4be5-aa85-a2e1dd21cb60/volumes/kubernetes.io~projected/kube-api-access-lzprw:{mountpoint:/var/lib/kubelet/pods/61ab4d32-c732-4be5-aa85-a2e1dd21cb60/volumes/kubernetes.io~projected/kube-api-access-lzprw major:0 minor:265 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/61ab4d32-c732-4be5-aa85-a2e1dd21cb60/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/61ab4d32-c732-4be5-aa85-a2e1dd21cb60/volumes/kubernetes.io~secret/serving-cert major:0 minor:226 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/64d09f81-5fb6-462a-a736-5649779a6b1a/volumes/kubernetes.io~projected/kube-api-access-7w8xs:{mountpoint:/var/lib/kubelet/pods/64d09f81-5fb6-462a-a736-5649779a6b1a/volumes/kubernetes.io~projected/kube-api-access-7w8xs major:0 minor:627 fsType:tmpfs 
blockSize:0} /var/lib/kubelet/pods/65157a9b-3df7-4cc1-a85a-a5dfa59921ad/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/65157a9b-3df7-4cc1-a85a-a5dfa59921ad/volumes/kubernetes.io~projected/kube-api-access major:0 minor:241 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/65157a9b-3df7-4cc1-a85a-a5dfa59921ad/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/65157a9b-3df7-4cc1-a85a-a5dfa59921ad/volumes/kubernetes.io~secret/serving-cert major:0 minor:222 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6a6a187d-5b25-4d63-939e-c04e07369371/volumes/kubernetes.io~projected/kube-api-access-br4bc:{mountpoint:/var/lib/kubelet/pods/6a6a187d-5b25-4d63-939e-c04e07369371/volumes/kubernetes.io~projected/kube-api-access-br4bc major:0 minor:489 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6a6a187d-5b25-4d63-939e-c04e07369371/volumes/kubernetes.io~secret/encryption-config:{mountpoint:/var/lib/kubelet/pods/6a6a187d-5b25-4d63-939e-c04e07369371/volumes/kubernetes.io~secret/encryption-config major:0 minor:480 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6a6a187d-5b25-4d63-939e-c04e07369371/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/6a6a187d-5b25-4d63-939e-c04e07369371/volumes/kubernetes.io~secret/etcd-client major:0 minor:476 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6a6a187d-5b25-4d63-939e-c04e07369371/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/6a6a187d-5b25-4d63-939e-c04e07369371/volumes/kubernetes.io~secret/serving-cert major:0 minor:488 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6d26f719-43b9-4c1c-9a54-ff800177db68/volumes/kubernetes.io~projected/kube-api-access-v86j8:{mountpoint:/var/lib/kubelet/pods/6d26f719-43b9-4c1c-9a54-ff800177db68/volumes/kubernetes.io~projected/kube-api-access-v86j8 major:0 minor:239 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/6d26f719-43b9-4c1c-9a54-ff800177db68/volumes/kubernetes.io~secret/apiservice-cert:{mountpoint:/var/lib/kubelet/pods/6d26f719-43b9-4c1c-9a54-ff800177db68/volumes/kubernetes.io~secret/apiservice-cert major:0 minor:440 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6d26f719-43b9-4c1c-9a54-ff800177db68/volumes/kubernetes.io~secret/node-tuning-operator-tls:{mountpoint:/var/lib/kubelet/pods/6d26f719-43b9-4c1c-9a54-ff800177db68/volumes/kubernetes.io~secret/node-tuning-operator-tls major:0 minor:442 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6d62448d-55f1-4bdc-85aa-09e7bdf766cc/volumes/kubernetes.io~projected/kube-api-access-n9mbs:{mountpoint:/var/lib/kubelet/pods/6d62448d-55f1-4bdc-85aa-09e7bdf766cc/volumes/kubernetes.io~projected/kube-api-access-n9mbs major:0 minor:827 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6d62448d-55f1-4bdc-85aa-09e7bdf766cc/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/6d62448d-55f1-4bdc-85aa-09e7bdf766cc/volumes/kubernetes.io~secret/serving-cert major:0 minor:813 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/71ca96e8-5108-455c-bb3c-17977d38e912/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/71ca96e8-5108-455c-bb3c-17977d38e912/volumes/kubernetes.io~projected/kube-api-access major:0 minor:272 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/71ca96e8-5108-455c-bb3c-17977d38e912/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/71ca96e8-5108-455c-bb3c-17977d38e912/volumes/kubernetes.io~secret/serving-cert major:0 minor:218 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/74bebf0b-6727-4959-8239-a9389e630524/volumes/kubernetes.io~projected/kube-api-access-f92mb:{mountpoint:/var/lib/kubelet/pods/74bebf0b-6727-4959-8239-a9389e630524/volumes/kubernetes.io~projected/kube-api-access-f92mb major:0 minor:273 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/74bebf0b-6727-4959-8239-a9389e630524/volumes/kubernetes.io~secret/webhook-certs:{mountpoint:/var/lib/kubelet/pods/74bebf0b-6727-4959-8239-a9389e630524/volumes/kubernetes.io~secret/webhook-certs major:0 minor:548 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7949621e-4da6-4e43-a1f3-2ef303bf6aa6/volumes/kubernetes.io~projected/kube-api-access-j5hsj:{mountpoint:/var/lib/kubelet/pods/7949621e-4da6-4e43-a1f3-2ef303bf6aa6/volumes/kubernetes.io~projected/kube-api-access-j5hsj major:0 minor:92 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7ab32efc-7cc5-4e36-9c1c-05efb19914e2/volumes/kubernetes.io~projected/kube-api-access-55l9j:{mountpoint:/var/lib/kubelet/pods/7ab32efc-7cc5-4e36-9c1c-05efb19914e2/volumes/kubernetes.io~projected/kube-api-access-55l9j major:0 minor:254 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7ab32efc-7cc5-4e36-9c1c-05efb19914e2/volumes/kubernetes.io~secret/srv-cert:{mountpoint:/var/lib/kubelet/pods/7ab32efc-7cc5-4e36-9c1c-05efb19914e2/volumes/kubernetes.io~secret/srv-cert major:0 minor:545 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/80ddf0a4-e853-4de0-b540-81144dfdd31d/volumes/kubernetes.io~projected/kube-api-access-pgffp:{mountpoint:/var/lib/kubelet/pods/80ddf0a4-e853-4de0-b540-81144dfdd31d/volumes/kubernetes.io~projected/kube-api-access-pgffp major:0 minor:803 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/80ddf0a4-e853-4de0-b540-81144dfdd31d/volumes/kubernetes.io~secret/machine-api-operator-tls:{mountpoint:/var/lib/kubelet/pods/80ddf0a4-e853-4de0-b540-81144dfdd31d/volumes/kubernetes.io~secret/machine-api-operator-tls major:0 minor:785 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/890a6c24-1dbb-4331-952b-5712ac00788e/volumes/kubernetes.io~projected/kube-api-access-7bxn6:{mountpoint:/var/lib/kubelet/pods/890a6c24-1dbb-4331-952b-5712ac00788e/volumes/kubernetes.io~projected/kube-api-access-7bxn6 major:0 minor:349 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072/volumes/kubernetes.io~projected/kube-api-access-hnk9k:{mountpoint:/var/lib/kubelet/pods/8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072/volumes/kubernetes.io~projected/kube-api-access-hnk9k major:0 minor:240 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072/volumes/kubernetes.io~secret/serving-cert major:0 minor:219 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9635cdae-0983-4c97-b3ed-dc7a785b1bb6/volumes/kubernetes.io~projected/kube-api-access-zmssd:{mountpoint:/var/lib/kubelet/pods/9635cdae-0983-4c97-b3ed-dc7a785b1bb6/volumes/kubernetes.io~projected/kube-api-access-zmssd major:0 minor:405 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777/volumes/kubernetes.io~empty-dir/etc-tuned:{mountpoint:/var/lib/kubelet/pods/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777/volumes/kubernetes.io~empty-dir/etc-tuned major:0 minor:535 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777/volumes/kubernetes.io~empty-dir/tmp:{mountpoint:/var/lib/kubelet/pods/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777/volumes/kubernetes.io~empty-dir/tmp major:0 minor:532 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777/volumes/kubernetes.io~projected/kube-api-access-w5wnd:{mountpoint:/var/lib/kubelet/pods/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777/volumes/kubernetes.io~projected/kube-api-access-w5wnd major:0 minor:536 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9817d1ec-3d7c-49fb-8e41-26f5727ef9e8/volumes/kubernetes.io~projected/kube-api-access-swxwt:{mountpoint:/var/lib/kubelet/pods/9817d1ec-3d7c-49fb-8e41-26f5727ef9e8/volumes/kubernetes.io~projected/kube-api-access-swxwt major:0 minor:91 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/9817d1ec-3d7c-49fb-8e41-26f5727ef9e8/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/9817d1ec-3d7c-49fb-8e41-26f5727ef9e8/volumes/kubernetes.io~secret/metrics-tls major:0 minor:43 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9ce482dc-d0ac-40bc-9058-a1cfdc81575e/volumes/kubernetes.io~projected/kube-api-access-9j527:{mountpoint:/var/lib/kubelet/pods/9ce482dc-d0ac-40bc-9058-a1cfdc81575e/volumes/kubernetes.io~projected/kube-api-access-9j527 major:0 minor:249 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9ce482dc-d0ac-40bc-9058-a1cfdc81575e/volumes/kubernetes.io~secret/srv-cert:{mountpoint:/var/lib/kubelet/pods/9ce482dc-d0ac-40bc-9058-a1cfdc81575e/volumes/kubernetes.io~secret/srv-cert major:0 minor:541 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9d653bfa-7168-49fa-a838-aedb33c7e60f/volumes/kubernetes.io~projected/kube-api-access-8jmlf:{mountpoint:/var/lib/kubelet/pods/9d653bfa-7168-49fa-a838-aedb33c7e60f/volumes/kubernetes.io~projected/kube-api-access-8jmlf major:0 minor:139 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9d653bfa-7168-49fa-a838-aedb33c7e60f/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/9d653bfa-7168-49fa-a838-aedb33c7e60f/volumes/kubernetes.io~secret/webhook-cert major:0 minor:138 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a2a3df6e-e327-4e97-b8f0-f2d6cdd1e5f9/volumes/kubernetes.io~projected/kube-api-access-rqgkl:{mountpoint:/var/lib/kubelet/pods/a2a3df6e-e327-4e97-b8f0-f2d6cdd1e5f9/volumes/kubernetes.io~projected/kube-api-access-rqgkl major:0 minor:318 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a79bf8fb-19fb-4881-b9e3-b5b21fde0e1d/volumes/kubernetes.io~projected/kube-api-access-btwhr:{mountpoint:/var/lib/kubelet/pods/a79bf8fb-19fb-4881-b9e3-b5b21fde0e1d/volumes/kubernetes.io~projected/kube-api-access-btwhr major:0 minor:1057 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/a79bf8fb-19fb-4881-b9e3-b5b21fde0e1d/volumes/kubernetes.io~secret/certs:{mountpoint:/var/lib/kubelet/pods/a79bf8fb-19fb-4881-b9e3-b5b21fde0e1d/volumes/kubernetes.io~secret/certs major:0 minor:1048 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a79bf8fb-19fb-4881-b9e3-b5b21fde0e1d/volumes/kubernetes.io~secret/node-bootstrap-token:{mountpoint:/var/lib/kubelet/pods/a79bf8fb-19fb-4881-b9e3-b5b21fde0e1d/volumes/kubernetes.io~secret/node-bootstrap-token major:0 minor:1049 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a86af6a2-55a9-4c4e-8caf-1f51fedb23f5/volumes/kubernetes.io~projected/kube-api-access-x82xz:{mountpoint:/var/lib/kubelet/pods/a86af6a2-55a9-4c4e-8caf-1f51fedb23f5/volumes/kubernetes.io~projected/kube-api-access-x82xz major:0 minor:796 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a86af6a2-55a9-4c4e-8caf-1f51fedb23f5/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls:{mountpoint:/var/lib/kubelet/pods/a86af6a2-55a9-4c4e-8caf-1f51fedb23f5/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls major:0 minor:779 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a88b1c81-02b5-4c85-9660-5f84c900a946/volumes/kubernetes.io~projected/kube-api-access-5zf6h:{mountpoint:/var/lib/kubelet/pods/a88b1c81-02b5-4c85-9660-5f84c900a946/volumes/kubernetes.io~projected/kube-api-access-5zf6h major:0 minor:1201 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a88b1c81-02b5-4c85-9660-5f84c900a946/volumes/kubernetes.io~secret/webhook-certs:{mountpoint:/var/lib/kubelet/pods/a88b1c81-02b5-4c85-9660-5f84c900a946/volumes/kubernetes.io~secret/webhook-certs major:0 minor:1197 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/acbaba45-12d9-40b9-818c-4b091d7929b1/volumes/kubernetes.io~projected/kube-api-access-kcgqr:{mountpoint:/var/lib/kubelet/pods/acbaba45-12d9-40b9-818c-4b091d7929b1/volumes/kubernetes.io~projected/kube-api-access-kcgqr major:0 minor:238 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/b097596e-79e1-44d1-be8a-96340042a041/volumes/kubernetes.io~projected/kube-api-access-dx99f:{mountpoint:/var/lib/kubelet/pods/b097596e-79e1-44d1-be8a-96340042a041/volumes/kubernetes.io~projected/kube-api-access-dx99f major:0 minor:246 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b43744de-5bc3-4f1d-91a4-c54e2a3a7ffc/volumes/kubernetes.io~projected/kube-api-access-4vm9c:{mountpoint:/var/lib/kubelet/pods/b43744de-5bc3-4f1d-91a4-c54e2a3a7ffc/volumes/kubernetes.io~projected/kube-api-access-4vm9c major:0 minor:638 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b6610936-e14a-4532-955c-ea1ee4222259/volumes/kubernetes.io~projected/kube-api-access-v8plf:{mountpoint:/var/lib/kubelet/pods/b6610936-e14a-4532-955c-ea1ee4222259/volumes/kubernetes.io~projected/kube-api-access-v8plf major:0 minor:828 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b6610936-e14a-4532-955c-ea1ee4222259/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/b6610936-e14a-4532-955c-ea1ee4222259/volumes/kubernetes.io~secret/proxy-tls major:0 minor:824 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bb7b640f-22be-41a9-8ab2-e7ae817e2eb0/volumes/kubernetes.io~projected/ca-certs:{mountpoint:/var/lib/kubelet/pods/bb7b640f-22be-41a9-8ab2-e7ae817e2eb0/volumes/kubernetes.io~projected/ca-certs major:0 minor:623 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bb7b640f-22be-41a9-8ab2-e7ae817e2eb0/volumes/kubernetes.io~projected/kube-api-access-l4w7k:{mountpoint:/var/lib/kubelet/pods/bb7b640f-22be-41a9-8ab2-e7ae817e2eb0/volumes/kubernetes.io~projected/kube-api-access-l4w7k major:0 minor:624 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bca4cc7c-839d-4877-b0aa-c07607fea404/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/bca4cc7c-839d-4877-b0aa-c07607fea404/volumes/kubernetes.io~projected/kube-api-access major:0 minor:464 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/bca4cc7c-839d-4877-b0aa-c07607fea404/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/bca4cc7c-839d-4877-b0aa-c07607fea404/volumes/kubernetes.io~secret/serving-cert major:0 minor:463 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ca56e37d-80ea-432b-a6d9-f4e904a40e10/volumes/kubernetes.io~projected/kube-api-access-jxqp4:{mountpoint:/var/lib/kubelet/pods/ca56e37d-80ea-432b-a6d9-f4e904a40e10/volumes/kubernetes.io~projected/kube-api-access-jxqp4 major:0 minor:419 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ca56e37d-80ea-432b-a6d9-f4e904a40e10/volumes/kubernetes.io~secret/encryption-config:{mountpoint:/var/lib/kubelet/pods/ca56e37d-80ea-432b-a6d9-f4e904a40e10/volumes/kubernetes.io~secret/encryption-config major:0 minor:417 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ca56e37d-80ea-432b-a6d9-f4e904a40e10/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/ca56e37d-80ea-432b-a6d9-f4e904a40e10/volumes/kubernetes.io~secret/etcd-client major:0 minor:418 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ca56e37d-80ea-432b-a6d9-f4e904a40e10/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/ca56e37d-80ea-432b-a6d9-f4e904a40e10/volumes/kubernetes.io~secret/serving-cert major:0 minor:415 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ca6e644f-c53b-41dd-a16f-9fb9997533dd/volumes/kubernetes.io~projected/kube-api-access-nf5kc:{mountpoint:/var/lib/kubelet/pods/ca6e644f-c53b-41dd-a16f-9fb9997533dd/volumes/kubernetes.io~projected/kube-api-access-nf5kc major:0 minor:308 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d26a4fce-8eed-44d0-96a3-40ffd0b336a6/volume-subpaths/run-systemd/ovnkube-controller/6:{mountpoint:/var/lib/kubelet/pods/d26a4fce-8eed-44d0-96a3-40ffd0b336a6/volume-subpaths/run-systemd/ovnkube-controller/6 major:0 minor:24 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/d26a4fce-8eed-44d0-96a3-40ffd0b336a6/volumes/kubernetes.io~projected/kube-api-access-s2j6m:{mountpoint:/var/lib/kubelet/pods/d26a4fce-8eed-44d0-96a3-40ffd0b336a6/volumes/kubernetes.io~projected/kube-api-access-s2j6m major:0 minor:127 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d26a4fce-8eed-44d0-96a3-40ffd0b336a6/volumes/kubernetes.io~secret/ovn-node-metrics-cert:{mountpoint:/var/lib/kubelet/pods/d26a4fce-8eed-44d0-96a3-40ffd0b336a6/volumes/kubernetes.io~secret/ovn-node-metrics-cert major:0 minor:126 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d5b579a3-74b5-4cd2-ae99-5c1d1480c2f0/volumes/kubernetes.io~projected/kube-api-access-tfgfz:{mountpoint:/var/lib/kubelet/pods/d5b579a3-74b5-4cd2-ae99-5c1d1480c2f0/volumes/kubernetes.io~projected/kube-api-access-tfgfz major:0 minor:1091 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d5b579a3-74b5-4cd2-ae99-5c1d1480c2f0/volumes/kubernetes.io~secret/openshift-state-metrics-kube-rbac-proxy-config:{mountpoint:/var/lib/kubelet/pods/d5b579a3-74b5-4cd2-ae99-5c1d1480c2f0/volumes/kubernetes.io~secret/openshift-state-metrics-kube-rbac-proxy-config major:0 minor:1086 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d5b579a3-74b5-4cd2-ae99-5c1d1480c2f0/volumes/kubernetes.io~secret/openshift-state-metrics-tls:{mountpoint:/var/lib/kubelet/pods/d5b579a3-74b5-4cd2-ae99-5c1d1480c2f0/volumes/kubernetes.io~secret/openshift-state-metrics-tls major:0 minor:1087 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e89571b2-098c-495b-9b53-c4ebd95296ab/volumes/kubernetes.io~projected/kube-api-access-pw6sv:{mountpoint:/var/lib/kubelet/pods/e89571b2-098c-495b-9b53-c4ebd95296ab/volumes/kubernetes.io~projected/kube-api-access-pw6sv major:0 minor:1013 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e89571b2-098c-495b-9b53-c4ebd95296ab/volumes/kubernetes.io~secret/default-certificate:{mountpoint:/var/lib/kubelet/pods/e89571b2-098c-495b-9b53-c4ebd95296ab/volumes/kubernetes.io~secret/default-certificate major:0 minor:1010 fsType:tmpfs 
blockSize:0} /var/lib/kubelet/pods/e89571b2-098c-495b-9b53-c4ebd95296ab/volumes/kubernetes.io~secret/metrics-certs:{mountpoint:/var/lib/kubelet/pods/e89571b2-098c-495b-9b53-c4ebd95296ab/volumes/kubernetes.io~secret/metrics-certs major:0 minor:1005 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e89571b2-098c-495b-9b53-c4ebd95296ab/volumes/kubernetes.io~secret/stats-auth:{mountpoint:/var/lib/kubelet/pods/e89571b2-098c-495b-9b53-c4ebd95296ab/volumes/kubernetes.io~secret/stats-auth major:0 minor:1011 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e9425526-9f51-4302-a19d-a8107f56c582/volumes/kubernetes.io~projected/kube-api-access-z5kbh:{mountpoint:/var/lib/kubelet/pods/e9425526-9f51-4302-a19d-a8107f56c582/volumes/kubernetes.io~projected/kube-api-access-z5kbh major:0 minor:243 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e9425526-9f51-4302-a19d-a8107f56c582/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/e9425526-9f51-4302-a19d-a8107f56c582/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert major:0 minor:225 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e9c0293a-5340-4ebe-bc8f-43e78ba9f280/volumes/kubernetes.io~projected/kube-api-access-ns97v:{mountpoint:/var/lib/kubelet/pods/e9c0293a-5340-4ebe-bc8f-43e78ba9f280/volumes/kubernetes.io~projected/kube-api-access-ns97v major:0 minor:729 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e9c0293a-5340-4ebe-bc8f-43e78ba9f280/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/e9c0293a-5340-4ebe-bc8f-43e78ba9f280/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert major:0 minor:707 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f202273a-b111-46ce-b404-7e481d2c7ff9/volumes/kubernetes.io~projected/kube-api-access-56bt6:{mountpoint:/var/lib/kubelet/pods/f202273a-b111-46ce-b404-7e481d2c7ff9/volumes/kubernetes.io~projected/kube-api-access-56bt6 major:0 minor:242 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/f202273a-b111-46ce-b404-7e481d2c7ff9/volumes/kubernetes.io~secret/cert:{mountpoint:/var/lib/kubelet/pods/f202273a-b111-46ce-b404-7e481d2c7ff9/volumes/kubernetes.io~secret/cert major:0 minor:547 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f202273a-b111-46ce-b404-7e481d2c7ff9/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls:{mountpoint:/var/lib/kubelet/pods/f202273a-b111-46ce-b404-7e481d2c7ff9/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls major:0 minor:438 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f6a6e991-c861-48f5-bfde-78762a037343/volumes/kubernetes.io~projected/kube-api-access-rf9kc:{mountpoint:/var/lib/kubelet/pods/f6a6e991-c861-48f5-bfde-78762a037343/volumes/kubernetes.io~projected/kube-api-access-rf9kc major:0 minor:992 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f6a6e991-c861-48f5-bfde-78762a037343/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/f6a6e991-c861-48f5-bfde-78762a037343/volumes/kubernetes.io~secret/proxy-tls major:0 minor:988 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fec3170d-3f3e-42f5-b20a-da53721c0dac/volumes/kubernetes.io~projected/kube-api-access-tqmzh:{mountpoint:/var/lib/kubelet/pods/fec3170d-3f3e-42f5-b20a-da53721c0dac/volumes/kubernetes.io~projected/kube-api-access-tqmzh major:0 minor:250 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fec3170d-3f3e-42f5-b20a-da53721c0dac/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/fec3170d-3f3e-42f5-b20a-da53721c0dac/volumes/kubernetes.io~secret/etcd-client major:0 minor:217 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fec3170d-3f3e-42f5-b20a-da53721c0dac/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/fec3170d-3f3e-42f5-b20a-da53721c0dac/volumes/kubernetes.io~secret/serving-cert major:0 minor:227 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/ff2dfe9d-2834-43cb-b093-0831b2b87131/volumes/kubernetes.io~projected/kube-api-access-zsj2w:{mountpoint:/var/lib/kubelet/pods/ff2dfe9d-2834-43cb-b093-0831b2b87131/volumes/kubernetes.io~projected/kube-api-access-zsj2w major:0 minor:245 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ff2dfe9d-2834-43cb-b093-0831b2b87131/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/ff2dfe9d-2834-43cb-b093-0831b2b87131/volumes/kubernetes.io~secret/metrics-tls major:0 minor:436 fsType:tmpfs blockSize:0} overlay_0-102:{mountpoint:/var/lib/containers/storage/overlay/de4a0233af84aa1e0ede8636890c8f70629f86cf172e50bfad96ee2635973d21/merged major:0 minor:102 fsType:overlay blockSize:0} overlay_0-1024:{mountpoint:/var/lib/containers/storage/overlay/e5da74bd206d1991677e0c8bce014b71dbfdfeb1f6a16226e92a2f6bed503d0b/merged major:0 minor:1024 fsType:overlay blockSize:0} overlay_0-1026:{mountpoint:/var/lib/containers/storage/overlay/548a9f2ebf8ae6627af3f0311059f0e85e48eb0afae944e8657c6f4a1a74dd91/merged major:0 minor:1026 fsType:overlay blockSize:0} overlay_0-1028:{mountpoint:/var/lib/containers/storage/overlay/b7a876ea26423c2ae405df189dd118d9b41f1d46b5f01d341b879ea3518cb0a6/merged major:0 minor:1028 fsType:overlay blockSize:0} overlay_0-1030:{mountpoint:/var/lib/containers/storage/overlay/5de65dc4557d8176b3d436681bded6e59ed878de21a7a1a197a1023ed3233c78/merged major:0 minor:1030 fsType:overlay blockSize:0} overlay_0-1035:{mountpoint:/var/lib/containers/storage/overlay/9e12fc638f02a27ea275011108e113b5046f41c34bc4093aed802768901ba65f/merged major:0 minor:1035 fsType:overlay blockSize:0} overlay_0-1038:{mountpoint:/var/lib/containers/storage/overlay/d19067d7bf4a0cd6c96a99d16ff7c4c6c0a10c8889e28029512e187366fa5060/merged major:0 minor:1038 fsType:overlay blockSize:0} overlay_0-1040:{mountpoint:/var/lib/containers/storage/overlay/c815e0d93f42ce1b3a5a1c1f25188fdcb9bb1582223aebb7b72be7dd442042a6/merged major:0 minor:1040 fsType:overlay blockSize:0} 
overlay_0-1055:{mountpoint:/var/lib/containers/storage/overlay/b3565fc50302e34113ad1f8ef1e7e18fb1b9ac61b74bd4fae5846a44cf258ea6/merged major:0 minor:1055 fsType:overlay blockSize:0} overlay_0-1060:{mountpoint:/var/lib/containers/storage/overlay/e7b46cd32dbaef6aab50ca069e7eb7f1cdf68f98827f6af2e90904ae78bf65e6/merged major:0 minor:1060 fsType:overlay blockSize:0} overlay_0-1062:{mountpoint:/var/lib/containers/storage/overlay/8558793a4c8ec99b6433a9f9eecd4e8a90d6569ffdcbbd2895df14da434cbacc/merged major:0 minor:1062 fsType:overlay blockSize:0} overlay_0-107:{mountpoint:/var/lib/containers/storage/overlay/8ad1ebb52e470060df4eb2e06be93eee046d3df3f6be8b115e77fcd336ea9665/merged major:0 minor:107 fsType:overlay blockSize:0} overlay_0-1072:{mountpoint:/var/lib/containers/storage/overlay/79704ee1c85407d8161b2b71c525c10d880787f6cd7cc4bc53aa9f0ea1f2f8a1/merged major:0 minor:1072 fsType:overlay blockSize:0} overlay_0-1074:{mountpoint:/var/lib/containers/storage/overlay/972789e99a244d1cefcddf23690ba2a96feb01f55269af2b7f02ae18f309f005/merged major:0 minor:1074 fsType:overlay blockSize:0} overlay_0-1076:{mountpoint:/var/lib/containers/storage/overlay/9f5af488426d7d7e89c661019b1b4e371359e5369be27329902703b087e10c3f/merged major:0 minor:1076 fsType:overlay blockSize:0} overlay_0-108:{mountpoint:/var/lib/containers/storage/overlay/e7d688cdf8bf8366a566ad6944a1bb18859c61204067789e5e63c3c8a40f7aaa/merged major:0 minor:108 fsType:overlay blockSize:0} overlay_0-1094:{mountpoint:/var/lib/containers/storage/overlay/663cada10c4b6119edef6f6c07bf57a8b424c9f355f40dc1174a305058a242e1/merged major:0 minor:1094 fsType:overlay blockSize:0} overlay_0-1098:{mountpoint:/var/lib/containers/storage/overlay/24afd69f30a501460094d5d840cb348fc1006609a0e0aa32bee43110a5f67329/merged major:0 minor:1098 fsType:overlay blockSize:0} overlay_0-1099:{mountpoint:/var/lib/containers/storage/overlay/0b060f34e59dd10ac5b37c58799d309fc34f928424585ab1fcceadaf3d868310/merged major:0 minor:1099 fsType:overlay blockSize:0} 
overlay_0-110:{mountpoint:/var/lib/containers/storage/overlay/180f40d12a3ee409daf0d3ba8cfcba87f09f82070282d1163b8c0fa27f904d59/merged major:0 minor:110 fsType:overlay blockSize:0} overlay_0-1113:{mountpoint:/var/lib/containers/storage/overlay/c6d137fe84650e6ffa2b6a415e9975928292473e0448750c11b91ca030301b76/merged major:0 minor:1113 fsType:overlay blockSize:0} overlay_0-1115:{mountpoint:/var/lib/containers/storage/overlay/8ba63ad4f24443e3e04f3cc4bd6f27927420728269e16cae776a6f2a1f3557a2/merged major:0 minor:1115 fsType:overlay blockSize:0} overlay_0-1117:{mountpoint:/var/lib/containers/storage/overlay/b22f5236df25a8a1e3e6730a5f0532b7672de24b040c06798d3cbd5a1dd173d6/merged major:0 minor:1117 fsType:overlay blockSize:0} overlay_0-1119:{mountpoint:/var/lib/containers/storage/overlay/a571da501cda92be2deaf766e9355d7876ad8225543ecc7a21918f95b94872cd/merged major:0 minor:1119 fsType:overlay blockSize:0} overlay_0-1121:{mountpoint:/var/lib/containers/storage/overlay/848a50b928d8b176b3c3dcbe0b1d780886bcbd2e509e8c793cbfc610ee1c7772/merged major:0 minor:1121 fsType:overlay blockSize:0} overlay_0-1123:{mountpoint:/var/lib/containers/storage/overlay/a08dc103dbbe1071cea788b9df1f2f2e5ff187b8e625a0b739a177c839bb7fde/merged major:0 minor:1123 fsType:overlay blockSize:0} overlay_0-1125:{mountpoint:/var/lib/containers/storage/overlay/b00bb7458464cb36a267868ab95069706f3896bb1fcef98c616ad1f70f74210e/merged major:0 minor:1125 fsType:overlay blockSize:0} overlay_0-1139:{mountpoint:/var/lib/containers/storage/overlay/4fa1103d31c0b27d18eac28fd26e0d2c93aab249978b5c4e204b8b1e2131fc18/merged major:0 minor:1139 fsType:overlay blockSize:0} overlay_0-1154:{mountpoint:/var/lib/containers/storage/overlay/9579fae2e0b2bf5557dee66d6003d52bf76442f68242191cb128afb9cf1dfa98/merged major:0 minor:1154 fsType:overlay blockSize:0} overlay_0-1156:{mountpoint:/var/lib/containers/storage/overlay/383bb7cd145abcc03bc7b4a4845ed4dfa5f05cfe624c45e637f57ca86134baa8/merged major:0 minor:1156 fsType:overlay blockSize:0} 
overlay_0-118:{mountpoint:/var/lib/containers/storage/overlay/18dfe62d17e41a682ad0b052e81c8f91a3fb6e12404574d670aac58d8f019f75/merged major:0 minor:118 fsType:overlay blockSize:0} overlay_0-1193:{mountpoint:/var/lib/containers/storage/overlay/92c81203fbf95553329cc31433e99e11751dc30ab4806298ae2760119de27bfb/merged major:0 minor:1193 fsType:overlay blockSize:0} overlay_0-1204:{mountpoint:/var/lib/containers/storage/overlay/3bd5c21cc9e226c6b38240b730ef46f372f781b30d6926cfddef4bf117d92304/merged major:0 minor:1204 fsType:overlay blockSize:0} overlay_0-1206:{mountpoint:/var/lib/containers/storage/overlay/b0eb443fe86ead5c4541d86197d12d9bf3a20d1f54dd1612ab3cfd09b0f96741/merged major:0 minor:1206 fsType:overlay blockSize:0} overlay_0-1208:{mountpoint:/var/lib/containers/storage/overlay/44c56a8d2e26e24b4b7c0ca17c417c203c5f65159d71bede59f365688b475348/merged major:0 minor:1208 fsType:overlay blockSize:0} overlay_0-121:{mountpoint:/var/lib/containers/storage/overlay/92836eff21952cfed3970addb5e7acbb1572337356ebdf5e162d7924f6e52027/merged major:0 minor:121 fsType:overlay blockSize:0} overlay_0-134:{mountpoint:/var/lib/containers/storage/overlay/cb680a9525d5d7e6878d23595dccf6b46ca78ee2df069ba18ce495468eb99aab/merged major:0 minor:134 fsType:overlay blockSize:0} overlay_0-136:{mountpoint:/var/lib/containers/storage/overlay/79f0a3f73d848bccac450c572bb5dda831a9821cd2d0eeda80b099e59bb0ddfa/merged major:0 minor:136 fsType:overlay blockSize:0} overlay_0-140:{mountpoint:/var/lib/containers/storage/overlay/ae0e924aa9f07ccac185c06f6f5249b0c8fb3861fe7eab8df6265e3978c2449f/merged major:0 minor:140 fsType:overlay blockSize:0} overlay_0-147:{mountpoint:/var/lib/containers/storage/overlay/421fb9b0ddf531ee16fab92031a796160bcd8f2f13d72ec92ec7b45097a66725/merged major:0 minor:147 fsType:overlay blockSize:0} overlay_0-149:{mountpoint:/var/lib/containers/storage/overlay/1c11298b137c3b36ba72f433256cdf72277b19c9f495deb91a8aab85dfa812aa/merged major:0 minor:149 fsType:overlay blockSize:0} 
overlay_0-155:{mountpoint:/var/lib/containers/storage/overlay/e5774d94b2429ec999da3ad741c212b027ed5a184e0620c6660d92717ac6faf0/merged major:0 minor:155 fsType:overlay blockSize:0} overlay_0-158:{mountpoint:/var/lib/containers/storage/overlay/93cb851bdaf25b1c13607e7fd926ecb402a7d8856f246e6f4cb38ddacb2a28e9/merged major:0 minor:158 fsType:overlay blockSize:0} overlay_0-160:{mountpoint:/var/lib/containers/storage/overlay/6f28bf396af4320f77ab4d3644b0f9f235f78e20488cc4f589761c12ab54d22b/merged major:0 minor:160 fsType:overlay blockSize:0} overlay_0-161:{mountpoint:/var/lib/containers/storage/overlay/e2d4c8a211c0af748d7533d0bb9006fd2e9d1531e3f371e1a7d1ee28329d7e4d/merged major:0 minor:161 fsType:overlay blockSize:0} overlay_0-164:{mountpoint:/var/lib/containers/storage/overlay/66d85fd1207b9ab83d4b4a1d93ad14008fc147614a63bd5edfc3af857c022c19/merged major:0 minor:164 fsType:overlay blockSize:0} overlay_0-169:{mountpoint:/var/lib/containers/storage/overlay/c9b328dd9a39b700fb37a7131e2cc35b38fb02ce413dea25004952b14cfa8599/merged major:0 minor:169 fsType:overlay blockSize:0} overlay_0-174:{mountpoint:/var/lib/containers/storage/overlay/dd0c4b75690c4cd9db440d21e91401e1cd14ee1e9ec83bc1f18f2cf411b7ff62/merged major:0 minor:174 fsType:overlay blockSize:0} overlay_0-179:{mountpoint:/var/lib/containers/storage/overlay/82bcc607371b6f0ac7380f0c3bb8f27ddc1e90103ab5ddaa515003b18e28a3a8/merged major:0 minor:179 fsType:overlay blockSize:0} overlay_0-183:{mountpoint:/var/lib/containers/storage/overlay/bdf2c0c59098bd49f427527f2b48088d52501a779c7b844da16b054b3cbc5f39/merged major:0 minor:183 fsType:overlay blockSize:0} overlay_0-186:{mountpoint:/var/lib/containers/storage/overlay/051e69eca58b28c522c759a9a8a38122e96d0ca4c35a7549b3c62fd612c192a6/merged major:0 minor:186 fsType:overlay blockSize:0} overlay_0-199:{mountpoint:/var/lib/containers/storage/overlay/df47865326cc5105003e95a46aac7446117b038aece9c8bd312c0b7c51a394ea/merged major:0 minor:199 fsType:overlay blockSize:0} 
overlay_0-204:{mountpoint:/var/lib/containers/storage/overlay/04ee7495fc18803cc2a8141c5658e977d5bc1dd07430525962b5411f755006df/merged major:0 minor:204 fsType:overlay blockSize:0} overlay_0-255:{mountpoint:/var/lib/containers/storage/overlay/6c015b0c9e7a94df4ed2231658793762d8ac9e7ee13d85a79ac72697f3a12b66/merged major:0 minor:255 fsType:overlay blockSize:0} overlay_0-258:{mountpoint:/var/lib/containers/storage/overlay/4cab8d08bd84b7691dbe17f6512155bfdf608cd3f9090fef7e075b5a2db2c7e7/merged major:0 minor:258 fsType:overlay blockSize:0} overlay_0-268:{mountpoint:/var/lib/containers/storage/overlay/b6466fdddbc8c9e9160e6e4ab7e81be580b99b396679e15000e7b35e81dd98f0/merged major:0 minor:268 fsType:overlay blockSize:0} overlay_0-274:{mountpoint:/var/lib/containers/storage/overlay/2153f917a9bd9665ae3cf272e861561b9af05475bf9d15e26c210a27b42a4fda/merged major:0 minor:274 fsType:overlay blockSize:0} overlay_0-279:{mountpoint:/var/lib/containers/storage/overlay/77bbc818962c576b50add13477beb8eb9f274d752e9878166ef4b924039a38a3/merged major:0 minor:279 fsType:overlay blockSize:0} overlay_0-282:{mountpoint:/var/lib/containers/storage/overlay/121b7a8b5d77a08b8b00c2a2ef5dfd74e7e5d2d58eabf9216210bd5d36197760/merged major:0 minor:282 fsType:overlay blockSize:0} overlay_0-286:{mountpoint:/var/lib/containers/storage/overlay/80c38336cfd12a9baa19a364029e07e08a1637a5a1faa3900f03c0a1840f6889/merged major:0 minor:286 fsType:overlay blockSize:0} overlay_0-289:{mountpoint:/var/lib/containers/storage/overlay/2d101ba379373ccba4495608e3b277b03428afb12d03c55bbc2f10b348994193/merged major:0 minor:289 fsType:overlay blockSize:0} overlay_0-291:{mountpoint:/var/lib/containers/storage/overlay/0e34a30612ed394d27e5b5dcf8b5f61310f0b2c337902a172c154cb50d8b6cb5/merged major:0 minor:291 fsType:overlay blockSize:0} overlay_0-294:{mountpoint:/var/lib/containers/storage/overlay/3ab4a203aafd1afafdd80d03655ec9a9b817a8408bf459dc0af396e9a7a21a0d/merged major:0 minor:294 fsType:overlay blockSize:0} 
overlay_0-298:{mountpoint:/var/lib/containers/storage/overlay/586d58267a039be6c67e94306ecc88995270cb42117f24cbca1a34d17a7509fd/merged major:0 minor:298 fsType:overlay blockSize:0} overlay_0-300:{mountpoint:/var/lib/containers/storage/overlay/bea7e69d07600b0691c31b16a9bf4cb6a460bae48b1b4ff092fa1d35f971593c/merged major:0 minor:300 fsType:overlay blockSize:0} overlay_0-302:{mountpoint:/var/lib/containers/storage/overlay/8b88ed77b4c2badc1bc7b366e99b7f076c5de8f2798d00566d76c631de3f02cf/merged major:0 minor:302 fsType:overlay blockSize:0} overlay_0-304:{mountpoint:/var/lib/containers/storage/overlay/5abff4507f27ae2b0075e11e03b0bcf1824441620372f2e2ecde998480c88ba3/merged major:0 minor:304 fsType:overlay blockSize:0} overlay_0-306:{mountpoint:/var/lib/containers/storage/overlay/6eab6c0f8bd9f9219d670b47b645f5cd92e4ec3f51977d458a443c9f1f51e997/merged major:0 minor:306 fsType:overlay blockSize:0} overlay_0-317:{mountpoint:/var/lib/containers/storage/overlay/67bc4c638d0090486a70ed3c949a73289d71c54d24a0a58793d7f84a76e06cdd/merged major:0 minor:317 fsType:overlay blockSize:0} overlay_0-325:{mountpoint:/var/lib/containers/storage/overlay/3154962d270ef6142e8a28c5f90dfbd78268c7849bbe255b869cc413f6b3d13a/merged major:0 minor:325 fsType:overlay blockSize:0} overlay_0-327:{mountpoint:/var/lib/containers/storage/overlay/700b67852063f002896680218f8f92ebdba259144c29497e0bd564bc92bae775/merged major:0 minor:327 fsType:overlay blockSize:0} overlay_0-329:{mountpoint:/var/lib/containers/storage/overlay/80e4ba41cc00b446a94802b6dd5fa3ce458f5bd2aa95a38bc6f91f80387b589e/merged major:0 minor:329 fsType:overlay blockSize:0} overlay_0-331:{mountpoint:/var/lib/containers/storage/overlay/ff3003a0b407a86d58571e4f500e32bd280dc6162cb187b8744cdf2f1b235eec/merged major:0 minor:331 fsType:overlay blockSize:0} overlay_0-334:{mountpoint:/var/lib/containers/storage/overlay/9e697c969e69a3004a572fd7927e1f062e995a1af945d01462e2e0e537d55d07/merged major:0 minor:334 fsType:overlay blockSize:0} 
overlay_0-335:{mountpoint:/var/lib/containers/storage/overlay/75f6dfca1e24d24220412e427052a6a912d64a28055c37eff6df4c45deeae368/merged major:0 minor:335 fsType:overlay blockSize:0} overlay_0-336:{mountpoint:/var/lib/containers/storage/overlay/b652e77c496c9d45f2467e33378733fed14449f60910ed47c70132212477d2fc/merged major:0 minor:336 fsType:overlay blockSize:0} overlay_0-341:{mountpoint:/var/lib/containers/storage/overlay/def0c449cf227b2b88570a05f4481f41b6e56b0b2f626084fe9f0dee0cf03391/merged major:0 minor:341 fsType:overlay blockSize:0} overlay_0-346:{mountpoint:/var/lib/containers/storage/overlay/b2dc9e04d4ec4109f9f9188466e0439e1d30029c02505baa16b9331d655caed7/merged major:0 minor:346 fsType:overlay blockSize:0} overlay_0-351:{mountpoint:/var/lib/containers/storage/overlay/92cb212814029dbd8947b0bcc4a1bf2595b6030447090d924d54f0d550766c00/merged major:0 minor:351 fsType:overlay blockSize:0} overlay_0-357:{mountpoint:/var/lib/containers/storage/overlay/30bf0f3ab297802ffb74d14c1a97cc1392caadbdd149cc04f27b4d6abdb6bc5b/merged major:0 minor:357 fsType:overlay blockSize:0} overlay_0-362:{mountpoint:/var/lib/containers/storage/overlay/601ccf12ff624e1dd45115779d1d98e2724ca2a552964eb9f8ea487e51524fc1/merged major:0 minor:362 fsType:overlay blockSize:0} overlay_0-364:{mountpoint:/var/lib/containers/storage/overlay/becefb50d32bae6ceafcb17fc449d95ac35dd563f2697a733103122d1dfab589/merged major:0 minor:364 fsType:overlay blockSize:0} overlay_0-368:{mountpoint:/var/lib/containers/storage/overlay/e07264c8db259f0fe53b49d0b18643f119d9d9abe28fa4e1efd394efb61ed8fa/merged major:0 minor:368 fsType:overlay blockSize:0} overlay_0-376:{mountpoint:/var/lib/containers/storage/overlay/d17ce3f7ef90640d54350e6576081cb75facd593e56fd8d87b79162b04219e51/merged major:0 minor:376 fsType:overlay blockSize:0} overlay_0-378:{mountpoint:/var/lib/containers/storage/overlay/3a0bf3fb0dcd3be256a6678f9375e7ae32d4cd773167cca4f48a7d4e24ba3152/merged major:0 minor:378 fsType:overlay blockSize:0} 
overlay_0-383:{mountpoint:/var/lib/containers/storage/overlay/7ad24b06d1eb4e84350d620d47d2d14d8f8cd5edc8d3ce6b36397594833c438c/merged major:0 minor:383 fsType:overlay blockSize:0} overlay_0-389:{mountpoint:/var/lib/containers/storage/overlay/0de8a7bfa309b2abfe7207f7a9a1bdd9feb7bfe931902e49547204c7387be521/merged major:0 minor:389 fsType:overlay blockSize:0} overlay_0-391:{mountpoint:/var/lib/containers/storage/overlay/fa34277a81f04d14eeec5cb7b686e7145b2cf424a5e7d1f0d8aec57e57c61791/merged major:0 minor:391 fsType:overlay blockSize:0} overlay_0-393:{mountpoint:/var/lib/containers/storage/overlay/0827eb0ee19a3f380f271a6d32219fe59be09bd4062ea5283de01908634bdd7c/merged major:0 minor:393 fsType:overlay blockSize:0} overlay_0-395:{mountpoint:/var/lib/containers/storage/overlay/0a65066f63cabf584d42c7c363d459a21d99ace65ef6e93d506c588f18bb94db/merged major:0 minor:395 fsType:overlay blockSize:0} overlay_0-401:{mountpoint:/var/lib/containers/storage/overlay/9dbaff11e8a8c959ddffe8c025e6fa68cbaf3e23c2a9ffe30d76ceef9994971c/merged major:0 minor:401 fsType:overlay blockSize:0} overlay_0-409:{mountpoint:/var/lib/containers/storage/overlay/9381491f3f2c35642a55858d593cb67446796838e5806c61c5d3c6aad4cbc50c/merged major:0 minor:409 fsType:overlay blockSize:0} overlay_0-410:{mountpoint:/var/lib/containers/storage/overlay/30cc84fecdc3508d8df219428714bf14d9ed301d30ffb564a4e07739a287429b/merged major:0 minor:410 fsType:overlay blockSize:0} overlay_0-421:{mountpoint:/var/lib/containers/storage/overlay/e23b971a29b90c45af789c2295dbc64d19a1ee999d9702250947852cdf0dfe07/merged major:0 minor:421 fsType:overlay blockSize:0} overlay_0-453:{mountpoint:/var/lib/containers/storage/overlay/846ee263e6367cafccc9396206c941ded30b48ad8131ab186069b734c4741554/merged major:0 minor:453 fsType:overlay blockSize:0} overlay_0-455:{mountpoint:/var/lib/containers/storage/overlay/c3bf91bcc013024fba52a02abd3da5471e87e290f5ebf5b233560d346fbb47c0/merged major:0 minor:455 fsType:overlay blockSize:0} 
overlay_0-457:{mountpoint:/var/lib/containers/storage/overlay/c778f45ba11a58a82c5fb1cd4bb91b0d0bb02dc908c604e4847266dfd7dc77d4/merged major:0 minor:457 fsType:overlay blockSize:0} overlay_0-459:{mountpoint:/var/lib/containers/storage/overlay/169bb3b601e7c9b2991a7719734be8b078f163da4b3cc4f5301048385b4347a1/merged major:0 minor:459 fsType:overlay blockSize:0} overlay_0-46:{mountpoint:/var/lib/containers/storage/overlay/69288cc4263b6395c99b75fca9502baf71f48ab9e567c2a0c51891a794b2936b/merged major:0 minor:46 fsType:overlay blockSize:0} overlay_0-461:{mountpoint:/var/lib/containers/storage/overlay/929869633b6aa5ea6547480f7ed9494acd24be65959f79c4d8e422fd1689d2a2/merged major:0 minor:461 fsType:overlay blockSize:0} overlay_0-465:{mountpoint:/var/lib/containers/storage/overlay/dd9974b80f6ba5ed2e2eb35d6c083dacdd94f1653a7368733e0d55cad31e7c5a/merged major:0 minor:465 fsType:overlay blockSize:0} overlay_0-467:{mountpoint:/var/lib/containers/storage/overlay/d7461f54175f7175b846135f13b5e74061994649bdac64ab3110adaa5d78e192/merged major:0 minor:467 fsType:overlay blockSize:0} overlay_0-469:{mountpoint:/var/lib/containers/storage/overlay/51c66bf10567b5b06aac4db306db4b5b54cbf8f29045b02279ba46dd7dbe1fe1/merged major:0 minor:469 fsType:overlay blockSize:0} overlay_0-471:{mountpoint:/var/lib/containers/storage/overlay/001f0ad1792a87b4ed8d109eeb283a39a5afb9c67687e6a515bfc6e6393a8361/merged major:0 minor:471 fsType:overlay blockSize:0} overlay_0-479:{mountpoint:/var/lib/containers/storage/overlay/51f93604ceb3fa2d84153ab26ccd8e5d02cc2b98274f007b4bb00df75d79f714/merged major:0 minor:479 fsType:overlay blockSize:0} overlay_0-48:{mountpoint:/var/lib/containers/storage/overlay/7dbd676e00873d18f343c82fe791e3ed2ef4b4bd5ab294b394cd8ce6848d5ac2/merged major:0 minor:48 fsType:overlay blockSize:0} overlay_0-487:{mountpoint:/var/lib/containers/storage/overlay/5923c0402c19a6678fc6522ee0d5412d7b4266331c09d952585f94b334d2d1cc/merged major:0 minor:487 fsType:overlay blockSize:0} 
overlay_0-494:{mountpoint:/var/lib/containers/storage/overlay/91a930e3a8d87bbeec69993278c0bc6ce522c2397bd8a08149bcaa6202080409/merged major:0 minor:494 fsType:overlay blockSize:0} overlay_0-499:{mountpoint:/var/lib/containers/storage/overlay/0d3a5cf282376b44e9e214da63d40b23d86ea126eabc3bb5b05c8347fa92f0ff/merged major:0 minor:499 fsType:overlay blockSize:0} overlay_0-503:{mountpoint:/var/lib/containers/storage/overlay/cf2eb0219af8de97adb5f64a475a1c21bc756cda67dc16851e581dd041e1b024/merged major:0 minor:503 fsType:overlay blockSize:0} overlay_0-525:{mountpoint:/var/lib/containers/storage/overlay/ebc2b31a33c114fcaa48fe4c9303cbe2666448f0d63322f84f515156e5d2f602/merged major:0 minor:525 fsType:overlay blockSize:0} overlay_0-537:{mountpoint:/var/lib/containers/storage/overlay/cb1fd50eefeb4f689bfaf6c520e876a21d0ac1c6ae0ee3d9faed03e0c625de1b/merged major:0 minor:537 fsType:overlay blockSize:0} overlay_0-539:{mountpoint:/var/lib/containers/storage/overlay/fd420e4c23f64ee48543aeeb8c2dabf82ddc1d5b783db575ada05c29b279fb9f/merged major:0 minor:539 fsType:overlay blockSize:0} overlay_0-56:{mountpoint:/var/lib/containers/storage/overlay/fed59e5e2e1d08f5cc66749cf128a6d4c259192360a08e8cbe2d3056875ca93e/merged major:0 minor:56 fsType:overlay blockSize:0} overlay_0-573:{mountpoint:/var/lib/containers/storage/overlay/1b3be0bd2386ee8e2e6f7746e0508ae6a11fca944f88cd55e62a664ce104b651/merged major:0 minor:573 fsType:overlay blockSize:0} overlay_0-575:{mountpoint:/var/lib/containers/storage/overlay/819b1057068e90cb68a2cadcf395370acd5311205d8c79520fe7c682002b2119/merged major:0 minor:575 fsType:overlay blockSize:0} overlay_0-577:{mountpoint:/var/lib/containers/storage/overlay/293de79e9797637a927058c778b7f2f18c776a0d6b9d5ecf2c19e18233f562d9/merged major:0 minor:577 fsType:overlay blockSize:0} overlay_0-58:{mountpoint:/var/lib/containers/storage/overlay/49303d75d19e142369300c0800647a409fa8cfbadc6c7086a0a93692b51f67b1/merged major:0 minor:58 fsType:overlay blockSize:0} 
overlay_0-585:{mountpoint:/var/lib/containers/storage/overlay/c19e75b1f3f17b77f2e061517a44958f1aa5ff5761f2d2c6bd8c729c94df976a/merged major:0 minor:585 fsType:overlay blockSize:0} overlay_0-588:{mountpoint:/var/lib/containers/storage/overlay/ac3cccd0229458829955190d28fc647ffe0f3bf48c98e4f7a7abc1ed95e9f7bc/merged major:0 minor:588 fsType:overlay blockSize:0} overlay_0-590:{mountpoint:/var/lib/containers/storage/overlay/96836a83ba59df444c6a68620b1b2eecfa8bb35f98a9fbbc72b8a92800fc6775/merged major:0 minor:590 fsType:overlay blockSize:0} overlay_0-594:{mountpoint:/var/lib/containers/storage/overlay/3e62b44c0d2690d7cc16b615ca475edf36d8a12bd20090b004cedf50aa8becef/merged major:0 minor:594 fsType:overlay blockSize:0} overlay_0-598:{mountpoint:/var/lib/containers/storage/overlay/b30f68f97eb2ecbcd20285ce65ac95fc2b1f5a9c035a9ec2e61a94e46e22bb6c/merged major:0 minor:598 fsType:overlay blockSize:0} overlay_0-600:{mountpoint:/var/lib/containers/storage/overlay/ec68e71159dbcccf3104c59b7a3b235669fead39454dbcee9561ae5d7ca610ab/merged major:0 minor:600 fsType:overlay blockSize:0} overlay_0-604:{mountpoint:/var/lib/containers/storage/overlay/4a068be8cc20f0954e64718547dbd988ac39e4ad19c73ad570941bb60ee93dbf/merged major:0 minor:604 fsType:overlay blockSize:0} overlay_0-606:{mountpoint:/var/lib/containers/storage/overlay/999e56c60c7a1da7136294ec87627ac72a0ff5d0f462b9b24987f358d89d0930/merged major:0 minor:606 fsType:overlay blockSize:0} overlay_0-608:{mountpoint:/var/lib/containers/storage/overlay/5a144daaa98746dcd7590f7305ea50b2f2845a4ce67f201ff3831e0a6a62f1f0/merged major:0 minor:608 fsType:overlay blockSize:0} overlay_0-61:{mountpoint:/var/lib/containers/storage/overlay/363e6797892dc77469a4e685603a5b051dd20705444e424e909a37bfc0a766cc/merged major:0 minor:61 fsType:overlay blockSize:0} overlay_0-612:{mountpoint:/var/lib/containers/storage/overlay/ea0f8d07580f4eefe059f2a10b17d66c07680b11d93df46cd3af7402e82d03dd/merged major:0 minor:612 fsType:overlay blockSize:0} 
overlay_0-63:{mountpoint:/var/lib/containers/storage/overlay/a9ca7982d4c6ed785ef924adf7e0cc1328094569e0ccb22082cbc17ea4d9799c/merged major:0 minor:63 fsType:overlay blockSize:0} overlay_0-632:{mountpoint:/var/lib/containers/storage/overlay/c72248915e418eb6eb7e0d5de084c5299cb0ed5c0ea84feaa45693a1e4867f43/merged major:0 minor:632 fsType:overlay blockSize:0} overlay_0-636:{mountpoint:/var/lib/containers/storage/overlay/ce556a305e4d7b48ad7e7c0ccadc1a2b3834ced42504f19980f1affaea997708/merged major:0 minor:636 fsType:overlay blockSize:0} overlay_0-651:{mountpoint:/var/lib/containers/storage/overlay/635a49ca13e6fa2467ca04d7c2fa2aedb2d69ef9b5301048ed8884f1bb90eaad/merged major:0 minor:651 fsType:overlay blockSize:0} overlay_0-652:{mountpoint:/var/lib/containers/storage/overlay/b5853937c8f66817e22dfed3d0b93137c1d54ecff776f970576452cdaa107647/merged major:0 minor:652 fsType:overlay blockSize:0} overlay_0-653:{mountpoint:/var/lib/containers/storage/overlay/5c5d7a8bfa6f95c2ac276c4092a6119e771664ed3c84d707ad55814a45bb2595/merged major:0 minor:653 fsType:overlay blockSize:0} overlay_0-655:{mountpoint:/var/lib/containers/storage/overlay/cf64afa8c99cfd4812d617bb1c55d740a4f9f772339596d62bab43d7ccba1c70/merged major:0 minor:655 fsType:overlay blockSize:0} overlay_0-670:{mountpoint:/var/lib/containers/storage/overlay/a78e939ff4a4ddfe44f82e9b7ecb05f23ce8f1119feaa0f84a915b60682dcf5b/merged major:0 minor:670 fsType:overlay blockSize:0} overlay_0-687:{mountpoint:/var/lib/containers/storage/overlay/2795c445ebcac59d653f9735d869a9de7e9587af4700cf7c0bc8aa13d41a7da4/merged major:0 minor:687 fsType:overlay blockSize:0} overlay_0-688:{mountpoint:/var/lib/containers/storage/overlay/946616ec6db19564ce5fbe53bac66d65289c1f12162ceb92701249322cb09cae/merged major:0 minor:688 fsType:overlay blockSize:0} overlay_0-689:{mountpoint:/var/lib/containers/storage/overlay/140e05ebab4c9e212c75b6e116c11683dce088f6e1bce517f3a09639c34e6fdd/merged major:0 minor:689 fsType:overlay blockSize:0} 
overlay_0-69:{mountpoint:/var/lib/containers/storage/overlay/50005279639afa4532faecb7c74b59ef2d1e302d47843e4895a998a7ed415af3/merged major:0 minor:69 fsType:overlay blockSize:0} overlay_0-696:{mountpoint:/var/lib/containers/storage/overlay/e52b68c2afcb8eb48f53f598c636f5b9eab0470625ec327f8fa1a7ea51ad2972/merged major:0 minor:696 fsType:overlay blockSize:0} overlay_0-697:{mountpoint:/var/lib/containers/storage/overlay/deda4b04047ed0ee4d56ca824bb9a0e4e374f58513216a242e38045c050aef31/merged major:0 minor:697 fsType:overlay blockSize:0} overlay_0-706:{mountpoint:/var/lib/containers/storage/overlay/8211242dfc213ba270d21e4a96b89cfa51e0e89ad8f96f3dcad980a843865022/merged major:0 minor:706 fsType:overlay blockSize:0} overlay_0-710:{mountpoint:/var/lib/containers/storage/overlay/31f0827fa1aff59d15f6c5baa792864bbcf9db2092d22018d4b47c26bc038df5/merged major:0 minor:710 fsType:overlay blockSize:0} overlay_0-719:{mountpoint:/var/lib/containers/storage/overlay/157e4868e1d46280ec6a65fe5f3f6cb97d2bf587639d8986c8b2d1883113c0fd/merged major:0 minor:719 fsType:overlay blockSize:0} overlay_0-723:{mountpoint:/var/lib/containers/storage/overlay/cfeef6ba2773ce73fc48a229fdc76fa2e557d28953c78ec1f0b78d454796738c/merged major:0 minor:723 fsType:overlay blockSize:0} overlay_0-73:{mountpoint:/var/lib/containers/storage/overlay/7c1487711ff84005b6a3ee17d1830d2a4c802beeab99988fea54c83cb0b275d3/merged major:0 minor:73 fsType:overlay blockSize:0} overlay_0-733:{mountpoint:/var/lib/containers/storage/overlay/b47e3207579fa3942edf61ad5ddecaae3e050625765c18d91e32f24fef7a1348/merged major:0 minor:733 fsType:overlay blockSize:0} overlay_0-737:{mountpoint:/var/lib/containers/storage/overlay/25c801baa2e52e5f1968bb3ce7054a69461c7da8fac90683154ad4618188fbbe/merged major:0 minor:737 fsType:overlay blockSize:0} overlay_0-75:{mountpoint:/var/lib/containers/storage/overlay/65a910b23613733913869ab3c12b7f9ec38b2ada3e9abd3c7816b8670fcd0792/merged major:0 minor:75 fsType:overlay blockSize:0} 
overlay_0-751:{mountpoint:/var/lib/containers/storage/overlay/1779b715deb60596080eed1f82697cd8563f6bbc38d9fe176bdd967d59fe525c/merged major:0 minor:751 fsType:overlay blockSize:0} overlay_0-752:{mountpoint:/var/lib/containers/storage/overlay/795a4d6bb877c7b5b360725c9270e996f700783d4680041554a88162eb28dbec/merged major:0 minor:752 fsType:overlay blockSize:0} overlay_0-755:{mountpoint:/var/lib/containers/storage/overlay/156b0f726d1524d21b005594fd6183c091911f5b1579603a7c5b9d47f9954478/merged major:0 minor:755 fsType:overlay blockSize:0} overlay_0-760:{mountpoint:/var/lib/containers/storage/overlay/bd1622b6d040100662bb8ea904e807421f6e7cd4336381063d87dc3a7daa64af/merged major:0 minor:760 fsType:overlay blockSize:0} overlay_0-768:{mountpoint:/var/lib/containers/storage/overlay/239ead19f81ecb42907894e96bf83b5eb10ca73f6d6c421ccebe650923315da3/merged major:0 minor:768 fsType:overlay blockSize:0} overlay_0-77:{mountpoint:/var/lib/containers/storage/overlay/1b3d233d91f0a8be8f1cbd3e7e6547ae4883f39dc5269e124a696a0fb9c5c751/merged major:0 minor:77 fsType:overlay blockSize:0} overlay_0-780:{mountpoint:/var/lib/containers/storage/overlay/ffafc8a26b246a1877c45b8b227fffba2983cfab31a33d47c138afcef480f938/merged major:0 minor:780 fsType:overlay blockSize:0} overlay_0-782:{mountpoint:/var/lib/containers/storage/overlay/a771ef9810a52d4daa14dcd04ec1f7b2682bf22690ce06722afda376d562fb54/merged major:0 minor:782 fsType:overlay blockSize:0} overlay_0-79:{mountpoint:/var/lib/containers/storage/overlay/57ebec5d82423129a5e829cb258fda1878ac96f0abd1efc9933a465860703b85/merged major:0 minor:79 fsType:overlay blockSize:0} overlay_0-790:{mountpoint:/var/lib/containers/storage/overlay/d8a67402b0ababbd4597b310848f1a7d06b29b8493e22c88957d8515aa2f9701/merged major:0 minor:790 fsType:overlay blockSize:0} overlay_0-793:{mountpoint:/var/lib/containers/storage/overlay/14a4308f292ba5938a240989a57a7c1a747af980fdd2d4df6a83b14d95a5fc6e/merged major:0 minor:793 fsType:overlay blockSize:0} 
overlay_0-799:{mountpoint:/var/lib/containers/storage/overlay/9f6b623a67cf1408fe2e24cef1968bef8d0599d12fe00fcce7d3f43ff9935245/merged major:0 minor:799 fsType:overlay blockSize:0} overlay_0-805:{mountpoint:/var/lib/containers/storage/overlay/3ca5e255e0436e494dacd90660607d95cf6118d781f23dc9099a8afd5dbc01b0/merged major:0 minor:805 fsType:overlay blockSize:0} overlay_0-81:{mountpoint:/var/lib/containers/storage/overlay/9083bef9c2198a93fb39b31c7c5dffb6e655e5a76321c11f6f9d1e9885a0f7b0/merged major:0 minor:81 fsType:overlay blockSize:0} overlay_0-840:{mountpoint:/var/lib/containers/storage/overlay/2e36683b0233b81d188c10c4b01660ddd5dde7d68c7622ff21e0de8c10535fef/merged major:0 minor:840 fsType:overlay blockSize:0} overlay_0-842:{mountpoint:/var/lib/containers/storage/overlay/353d7af417275d0c99415767b72990539cdee8b1d3cf7a192d89b371ef28cd27/merged major:0 minor:842 fsType:overlay blockSize:0} overlay_0-844:{mountpoint:/var/lib/containers/storage/overlay/b57ae4db5e8b76c5aace2ae2ecefad31c36e274e06cfed809116934f82ca5049/merged major:0 minor:844 fsType:overlay blockSize:0} overlay_0-846:{mountpoint:/var/lib/containers/storage/overlay/ff50e943ee6f0ed09af45b5a5c1434f96ddef26c36835068288168350d4bf7a9/merged major:0 minor:846 fsType:overlay blockSize:0} overlay_0-847:{mountpoint:/var/lib/containers/storage/overlay/585cc2866bbadad944b210b03e17883a06f64343786c0a59f77e711b0e2c68bc/merged major:0 minor:847 fsType:overlay blockSize:0} overlay_0-853:{mountpoint:/var/lib/containers/storage/overlay/50abd1210178eeff90b5f19c00f22da02d079d47decf399d0cb25e625b6906f7/merged major:0 minor:853 fsType:overlay blockSize:0} overlay_0-855:{mountpoint:/var/lib/containers/storage/overlay/56150ead15b5d0e5a9c024f6261cfe316ca74efb3338a23462fe26f7ca8feaff/merged major:0 minor:855 fsType:overlay blockSize:0} overlay_0-858:{mountpoint:/var/lib/containers/storage/overlay/9c9bef7bfe3c64a25beb98117372504a41d303f6396310b7e17849c72b3a3fec/merged major:0 minor:858 fsType:overlay blockSize:0} 
overlay_0-860:{mountpoint:/var/lib/containers/storage/overlay/d2112aa213cc09d960b6f37f16ab146ce61f3de6b85e3a84920ba240445015ae/merged major:0 minor:860 fsType:overlay blockSize:0} overlay_0-862:{mountpoint:/var/lib/containers/storage/overlay/67f3ab2df1d893c0dbb4c2f7471c6231b7c943367e190a70fcd46908c55968b9/merged major:0 minor:862 fsType:overlay blockSize:0} overlay_0-874:{mountpoint:/var/lib/containers/storage/overlay/1d31ae8c26c6c7c6045dfe5fb89bd86d02f3f9c3320d857811f03f78713381cb/merged major:0 minor:874 fsType:overlay blockSize:0} overlay_0-876:{mountpoint:/var/lib/containers/storage/overlay/8b0b41edeb15da5cf04619c0a2dfac27ba2503e3a54e560aca349b7e877d5b3c/merged major:0 minor:876 fsType:overlay blockSize:0} overlay_0-88:{mountpoint:/var/lib/containers/storage/overlay/69b510039cecd76b7bfbb47aa6be67dcafac6313d07a234d136026ea81b8c181/merged major:0 minor:88 fsType:overlay blockSize:0} overlay_0-89:{mountpoint:/var/lib/containers/storage/overlay/1686c9fd9509245efc300cb41d242b461de67c5dc1fc07f916c607010951faa5/merged major:0 minor:89 fsType:overlay blockSize:0} overlay_0-895:{mountpoint:/var/lib/containers/storage/overlay/c3fa1066a865c45c72ed4f44604f95eb0bfab42d3d4dfdeba5b2a77fc5c61c90/merged major:0 minor:895 fsType:overlay blockSize:0} overlay_0-899:{mountpoint:/var/lib/containers/storage/overlay/1c822718bc6b3144e363a5361849c21c69c9565a3517de9491e68f42715452c9/merged major:0 minor:899 fsType:overlay blockSize:0} overlay_0-905:{mountpoint:/var/lib/containers/storage/overlay/ce82ebde1075fdcdd501f077812b55f7f6bbf3b5c2424081e1a26ac002090872/merged major:0 minor:905 fsType:overlay blockSize:0} overlay_0-908:{mountpoint:/var/lib/containers/storage/overlay/76cd94bc8b3eae4cd153331631e90afe57f43300884dc496793c5c02ffbfd9d3/merged major:0 minor:908 fsType:overlay blockSize:0} overlay_0-911:{mountpoint:/var/lib/containers/storage/overlay/fd62e19edded40881c2c2c56b08e1dd4155a3e1a575bb768c30ab7070ba193dd/merged major:0 
minor:911 fsType:overlay blockSize:0} overlay_0-916:{mountpoint:/var/lib/containers/storage/overlay/f1b2963087169044d41cdb51d266a0ec93257fd5fc0223203b092a99cd89ce18/merged major:0 minor:916 fsType:overlay blockSize:0} overlay_0-918:{mountpoint:/var/lib/containers/storage/overlay/5f4c48c7e9fcac37a13ab5075a220557460ff5fbc67b89c1f4eb091dd1d9d077/merged major:0 minor:918 fsType:overlay blockSize:0} overlay_0-933:{mountpoint:/var/lib/containers/storage/overlay/f7ef2af9afdafdf8b4fea64f33e2c2746766a022f968c24dc0dd0b8b5aac3d29/merged major:0 minor:933 fsType:overlay blockSize:0} overlay_0-939:{mountpoint:/var/lib/containers/storage/overlay/4d4c494b1bb976a35c1e43924a9751a32a46beac1d26279d01f5fffae74416fb/merged major:0 minor:939 fsType:overlay blockSize:0} overlay_0-94:{mountpoint:/var/lib/containers/storage/overlay/737c6f1c0b1f9937a403d12452c5797423242287732267fc6913edd58d087c92/merged major:0 minor:94 fsType:overlay blockSize:0} overlay_0-948:{mountpoint:/var/lib/containers/storage/overlay/afb040f3f43aca36efff45cf984a04839104b0982214f873bee8e2bb7295530b/merged major:0 minor:948 fsType:overlay blockSize:0} overlay_0-955:{mountpoint:/var/lib/containers/storage/overlay/761af496dd1fec0f7b23f186aefa852011701cf3d5d1e111c6ddef551cd5ff4f/merged major:0 minor:955 fsType:overlay blockSize:0} overlay_0-957:{mountpoint:/var/lib/containers/storage/overlay/6fde0c56ef6291054a070bc6e17575d5fe7d9b9a713102a40e250846e1232c04/merged major:0 minor:957 fsType:overlay blockSize:0} overlay_0-96:{mountpoint:/var/lib/containers/storage/overlay/cfb2257ccd374dd1e1a6caeceb2be9608968e7b3143c51b9c2a80f020a0799a6/merged major:0 minor:96 fsType:overlay blockSize:0} overlay_0-970:{mountpoint:/var/lib/containers/storage/overlay/6806c365aef8bcdc75879b230d17b7f63d4dd52237765eaf4dd8cf0347768be5/merged major:0 minor:970 fsType:overlay blockSize:0} overlay_0-978:{mountpoint:/var/lib/containers/storage/overlay/30775f0e418a44b9a33af33cc0c11e99924635cc078294c94a62e0b49aefb983/merged major:0 minor:978 
fsType:overlay blockSize:0} overlay_0-980:{mountpoint:/var/lib/containers/storage/overlay/b397f379b9e3243fcc89fae697b1926ac4ecc803fb7206d155c7e6cb93981ca9/merged major:0 minor:980 fsType:overlay blockSize:0} overlay_0-984:{mountpoint:/var/lib/containers/storage/overlay/f221cf4737797118aa4b819942463fd8ec9009e5e36d9d454e237109da848852/merged major:0 minor:984 fsType:overlay blockSize:0} overlay_0-99:{mountpoint:/var/lib/containers/storage/overlay/fd3f1e176cf4f9e01ec478d62fc2198ea9e3a18ca2729dc1bef175895c51ded9/merged major:0 minor:99 fsType:overlay blockSize:0} overlay_0-995:{mountpoint:/var/lib/containers/storage/overlay/2d9afe8b5e0e75276627672f335a2318ee64faebbd19f2d7ba9d07e1764e6014/merged major:0 minor:995 fsType:overlay blockSize:0} overlay_0-999:{mountpoint:/var/lib/containers/storage/overlay/0898fc58e9bb3249c338f50b23e1145753ef3cd52ee26eec6c09d0300e1f2561/merged major:0 minor:999 fsType:overlay blockSize:0}] Mar 20 08:49:50.008671 master-0 kubenswrapper[27820]: I0320 08:49:50.004499 27820 manager.go:217] Machine: {Timestamp:2026-03-20 08:49:50.003628827 +0000 UTC m=+0.098837991 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:68fa82f9afdb4f4db7851aefd1680b64 SystemUUID:68fa82f9-afdb-4f4d-b785-1aefd1680b64 BootID:8450f042-88d6-4841-ac46-8e16fb0e4c12 Filesystems:[{Device:overlay_0-336 DeviceMajor:0 DeviceMinor:336 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d80ff220fd3e8f28273c0ca55518a106e85c715a741683e145d0d50f2a0d250e/userdata/shm DeviceMajor:0 DeviceMinor:641 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-995 DeviceMajor:0 DeviceMinor:995 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/0ad95adc-2e0f-4e95-94e7-66e6d240a930/volumes/kubernetes.io~secret/prometheus-operator-tls DeviceMajor:0 DeviceMinor:1068 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-918 DeviceMajor:0 DeviceMinor:918 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-102 DeviceMajor:0 DeviceMinor:102 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/45bf1a9ecebdad7d1d939a42ed79f1d565faa93da259016b6c3e11a9010e1c03/userdata/shm DeviceMajor:0 DeviceMinor:128 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/08d9196b-b68f-421b-8754-bfbaa4020a97/volumes/kubernetes.io~secret/catalogserver-certs DeviceMajor:0 DeviceMinor:619 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/0e79950f-50a5-46ec-b836-7a35dcce2851/volumes/kubernetes.io~secret/package-server-manager-serving-cert DeviceMajor:0 DeviceMinor:549 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-652 DeviceMajor:0 DeviceMinor:652 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:overlay_0-258 DeviceMajor:0 DeviceMinor:258 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ebb4000c1fd7b5e5958a1f721b8b2c7b7ad72ac397418d062d7c94f2eacacc8d/userdata/shm DeviceMajor:0 DeviceMinor:339 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1074 DeviceMajor:0 DeviceMinor:1074 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1094 DeviceMajor:0 DeviceMinor:1094 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/702713f2f96146013bc9672b7b029fe7154bd722d3f9153e565a46fd2b9a50ba/userdata/shm DeviceMajor:0 DeviceMinor:444 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-461 DeviceMajor:0 DeviceMinor:461 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8c5a039db74fb9e788a5aa01defc8a1f9fd1088c2644177e24de4994f3a27cd3/userdata/shm DeviceMajor:0 DeviceMinor:807 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-911 DeviceMajor:0 DeviceMinor:911 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ccabd735cd283aaf872e4d4c6439fc21d25d047aca8d8580112cec5049c44ca7/userdata/shm DeviceMajor:0 DeviceMinor:898 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-782 DeviceMajor:0 DeviceMinor:782 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-110 DeviceMajor:0 DeviceMinor:110 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/74bebf0b-6727-4959-8239-a9389e630524/volumes/kubernetes.io~projected/kube-api-access-f92mb DeviceMajor:0 DeviceMinor:273 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/f202273a-b111-46ce-b404-7e481d2c7ff9/volumes/kubernetes.io~secret/cert DeviceMajor:0 DeviceMinor:547 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-600 DeviceMajor:0 DeviceMinor:600 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-169 DeviceMajor:0 DeviceMinor:169 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/09a5682c-4f13-4b8c-8179-3e6dfa8f98db/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:209 
Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/22f85e98-eb36-46b2-ab5d-7c21e060cba5/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:437 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/5a96373b7ec998e4c12966e11a5d5e48263b669f4268036f6aff8f1f1199dfa5/userdata/shm DeviceMajor:0 DeviceMinor:829 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e752098827604ca63ef6b84cdd36804c65e5654f7ec3055912844eb8b6ef68db/userdata/shm DeviceMajor:0 DeviceMinor:838 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-334 DeviceMajor:0 DeviceMinor:334 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a88b1c81-02b5-4c85-9660-5f84c900a946/volumes/kubernetes.io~projected/kube-api-access-5zf6h DeviceMajor:0 DeviceMinor:1201 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-255 DeviceMajor:0 DeviceMinor:255 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-346 DeviceMajor:0 DeviceMinor:346 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4f6c819a-5074-4d29-84c8-e187528ad757/volumes/kubernetes.io~projected/kube-api-access-mm9l9 DeviceMajor:0 DeviceMinor:622 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:219 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1522904bcce5d0ac5aef96a7d518d8795f633c0cc736ad3114aa64de5474f52b/userdata/shm DeviceMajor:0 DeviceMinor:235 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1206 DeviceMajor:0 DeviceMinor:1206 Capacity:214143315968 Type:vfs Inodes:104594880 
HasInodes:true} {Device:overlay_0-723 DeviceMajor:0 DeviceMinor:723 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-805 DeviceMajor:0 DeviceMinor:805 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/41ac891d-b41d-43c4-be46-35f39671477a/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:492 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-793 DeviceMajor:0 DeviceMinor:793 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-846 DeviceMajor:0 DeviceMinor:846 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/61ab4d32-c732-4be5-aa85-a2e1dd21cb60/volumes/kubernetes.io~projected/kube-api-access-lzprw DeviceMajor:0 DeviceMinor:265 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/ca56e37d-80ea-432b-a6d9-f4e904a40e10/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:415 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-697 DeviceMajor:0 DeviceMinor:697 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-77 DeviceMajor:0 DeviceMinor:77 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/45b3c788-eb83-448a-bc60-90b8ace28382/volumes/kubernetes.io~projected/kube-api-access-7pcbj DeviceMajor:0 DeviceMinor:1187 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/b097596e-79e1-44d1-be8a-96340042a041/volumes/kubernetes.io~projected/kube-api-access-dx99f DeviceMajor:0 DeviceMinor:246 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/389639e7370bc064e8396447b56eef169b57f40bc06761ec99b4a5fb5deb56a5/userdata/shm DeviceMajor:0 DeviceMinor:447 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/var/lib/kubelet/pods/04466971-127b-403e-af45-dad97b6e0c87/volumes/kubernetes.io~projected/kube-api-access-wkh2f DeviceMajor:0 DeviceMinor:1151 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-89 DeviceMajor:0 DeviceMinor:89 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1024 DeviceMajor:0 DeviceMinor:1024 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1117 DeviceMajor:0 DeviceMinor:1117 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-75 DeviceMajor:0 DeviceMinor:75 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4aa19d8b0c30c05ccc496b8ab2be76d947099982ef9343125f8b5117bc386c97/userdata/shm DeviceMajor:0 DeviceMinor:557 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/581a8be2-d16c-4fd8-b051-214bd60a2a91/volumes/kubernetes.io~projected/kube-api-access-ssmph DeviceMajor:0 DeviceMinor:728 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a9a866857afbf6e04b88e6394f6ac26a86a5cc6b5f41292fe9d43cc355b22810/userdata/shm DeviceMajor:0 DeviceMinor:774 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/ca56e37d-80ea-432b-a6d9-f4e904a40e10/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:418 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-733 DeviceMajor:0 DeviceMinor:733 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/123f1ecb-cc03-462b-b76f-7251bf69d3d6/volumes/kubernetes.io~projected/kube-api-access-dtt44 DeviceMajor:0 DeviceMinor:1090 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/var/lib/kubelet/pods/240ba61a-e439-4f94-b9b3-7903b9b1bc05/volumes/kubernetes.io~projected/kube-api-access-92pwh DeviceMajor:0 DeviceMinor:773 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/57189f7c-5987-457d-a299-0a6b9bcb3e24/volumes/kubernetes.io~projected/kube-api-access-5r8zt DeviceMajor:0 DeviceMinor:230 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/65157a9b-3df7-4cc1-a85a-a5dfa59921ad/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:241 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/80ddf0a4-e853-4de0-b540-81144dfdd31d/volumes/kubernetes.io~secret/machine-api-operator-tls DeviceMajor:0 DeviceMinor:785 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/ca6e644f-c53b-41dd-a16f-9fb9997533dd/volumes/kubernetes.io~projected/kube-api-access-nf5kc DeviceMajor:0 DeviceMinor:308 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-494 DeviceMajor:0 DeviceMinor:494 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-81 DeviceMajor:0 DeviceMinor:81 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-453 DeviceMajor:0 DeviceMinor:453 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1115 DeviceMajor:0 DeviceMinor:1115 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/71ca96e8-5108-455c-bb3c-17977d38e912/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:218 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/0f725c4a-234c-44e9-95f2-73f31d2b0fd3/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:311 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/2eb15c3da7104afd61e8e0a9cecb48e57f16366430abff29d1fcba72d53fd3a2/userdata/shm DeviceMajor:0 DeviceMinor:130 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/b6610936-e14a-4532-955c-ea1ee4222259/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:824 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1154 DeviceMajor:0 DeviceMinor:1154 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b7d9c365d304102d31836e754ae3ccd0da492c6691ee23225b141aea9b82a5d5/userdata/shm DeviceMajor:0 DeviceMinor:1021 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/86c6cd594ea2c7db973b52489f7bf76530d2045045df7dd60fb29d21f2a61ca6/userdata/shm DeviceMajor:0 DeviceMinor:1191 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-905 DeviceMajor:0 DeviceMinor:905 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/aab851b1602b7dcc6e5620b34b9265b9ec9a6fe42b3748c9be972ac30f7ef4fd/userdata/shm DeviceMajor:0 DeviceMinor:564 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-590 DeviceMajor:0 DeviceMinor:590 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-689 DeviceMajor:0 DeviceMinor:689 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-790 DeviceMajor:0 DeviceMinor:790 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6163bd4b-dc83-4e83-8590-5ac4753bda1c/volumes/kubernetes.io~projected/kube-api-access-zbzl9 DeviceMajor:0 DeviceMinor:897 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1125 DeviceMajor:0 DeviceMinor:1125 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/57189f7c-5987-457d-a299-0a6b9bcb3e24/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:244 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/ca56e37d-80ea-432b-a6d9-f4e904a40e10/volumes/kubernetes.io~projected/kube-api-access-jxqp4 DeviceMajor:0 DeviceMinor:419 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-653 DeviceMajor:0 DeviceMinor:653 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1040 DeviceMajor:0 DeviceMinor:1040 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/9540823dea8e0108833218a65d98423f8d996d846bbeaa47cddd4e7ba48fd916/userdata/shm DeviceMajor:0 DeviceMinor:284 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/ff2dfe9d-2834-43cb-b093-0831b2b87131/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:436 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/74bebf0b-6727-4959-8239-a9389e630524/volumes/kubernetes.io~secret/webhook-certs DeviceMajor:0 DeviceMinor:548 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-632 DeviceMajor:0 DeviceMinor:632 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-710 DeviceMajor:0 DeviceMinor:710 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a86af6a2-55a9-4c4e-8caf-1f51fedb23f5/volumes/kubernetes.io~projected/kube-api-access-x82xz DeviceMajor:0 DeviceMinor:796 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-455 DeviceMajor:0 DeviceMinor:455 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4a62432d7ca6978a89473ee0ca3560d8d6e151e4b44cc680fcbcde36344cda3f/userdata/shm DeviceMajor:0 DeviceMinor:1018 
Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/20ff930f-ec0d-40ed-a879-1546691f685d/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:223 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-268 DeviceMajor:0 DeviceMinor:268 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a86af6a2-55a9-4c4e-8caf-1f51fedb23f5/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls DeviceMajor:0 DeviceMinor:779 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-842 DeviceMajor:0 DeviceMinor:842 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-204 DeviceMajor:0 DeviceMinor:204 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-499 DeviceMajor:0 DeviceMinor:499 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777/volumes/kubernetes.io~projected/kube-api-access-w5wnd DeviceMajor:0 DeviceMinor:536 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-606 DeviceMajor:0 DeviceMinor:606 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e89571b2-098c-495b-9b53-c4ebd95296ab/volumes/kubernetes.io~projected/kube-api-access-pw6sv DeviceMajor:0 DeviceMinor:1013 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-970 DeviceMajor:0 DeviceMinor:970 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4c9b8a0c-1ead-49b5-8b05-68e6f25ddefc/volumes/kubernetes.io~secret/cert DeviceMajor:0 DeviceMinor:1014 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b9cc3cdb71ca86a1d6eb5065d5ba830d901adeb7f41acd8f39de6f44ff6001ce/userdata/shm DeviceMajor:0 DeviceMinor:381 Capacity:67108864 Type:vfs Inodes:4108170 
HasInodes:true} {Device:overlay_0-737 DeviceMajor:0 DeviceMinor:737 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-108 DeviceMajor:0 DeviceMinor:108 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072/volumes/kubernetes.io~projected/kube-api-access-hnk9k DeviceMajor:0 DeviceMinor:240 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/890a6c24-1dbb-4331-952b-5712ac00788e/volumes/kubernetes.io~projected/kube-api-access-7bxn6 DeviceMajor:0 DeviceMinor:349 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1113 DeviceMajor:0 DeviceMinor:1113 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/14986cdcb6c65fcca4be3c338e4a013796b08052ed9fdf5beaaa06246a8fc6be/userdata/shm DeviceMajor:0 DeviceMinor:1015 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1076 DeviceMajor:0 DeviceMinor:1076 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-410 DeviceMajor:0 DeviceMinor:410 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0e79950f-50a5-46ec-b836-7a35dcce2851/volumes/kubernetes.io~projected/kube-api-access-rdsv9 DeviceMajor:0 DeviceMinor:261 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-304 DeviceMajor:0 DeviceMinor:304 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/60a1d6091214ba0d82b66a2af63314a1cb99c1cda6a15d65a6539891ce5e3510/userdata/shm DeviceMajor:0 DeviceMinor:567 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/41253bde-5d09-4ff0-8e7c-4a21fe2b7106/volumes/kubernetes.io~projected/kube-api-access-dbtnq DeviceMajor:0 DeviceMinor:556 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/var/lib/kubelet/pods/64d09f81-5fb6-462a-a736-5649779a6b1a/volumes/kubernetes.io~projected/kube-api-access-7w8xs DeviceMajor:0 DeviceMinor:627 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-957 DeviceMajor:0 DeviceMinor:957 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-298 DeviceMajor:0 DeviceMinor:298 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/240ba61a-e439-4f94-b9b3-7903b9b1bc05/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:734 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7551d0384a0ca5d55a0e01a66e0811b519b2e2c926c179ce2206a11d57d556c3/userdata/shm DeviceMajor:0 DeviceMinor:233 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/9ce482dc-d0ac-40bc-9058-a1cfdc81575e/volumes/kubernetes.io~projected/kube-api-access-9j527 DeviceMajor:0 DeviceMinor:249 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1c0c62dc18b9dfbe34d230533e11381c4068e1290418832f6c146c6c5c6872ee/userdata/shm DeviceMajor:0 DeviceMinor:1058 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-594 DeviceMajor:0 DeviceMinor:594 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/bb7b640f-22be-41a9-8ab2-e7ae817e2eb0/volumes/kubernetes.io~projected/ca-certs DeviceMajor:0 DeviceMinor:623 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-327 DeviceMajor:0 DeviceMinor:327 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f202273a-b111-46ce-b404-7e481d2c7ff9/volumes/kubernetes.io~projected/kube-api-access-56bt6 DeviceMajor:0 DeviceMinor:242 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/var/lib/kubelet/pods/6d62448d-55f1-4bdc-85aa-09e7bdf766cc/volumes/kubernetes.io~projected/kube-api-access-n9mbs DeviceMajor:0 DeviceMinor:827 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-899 DeviceMajor:0 DeviceMinor:899 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-134 DeviceMajor:0 DeviceMinor:134 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-289 DeviceMajor:0 DeviceMinor:289 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1038 DeviceMajor:0 DeviceMinor:1038 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a79bf8fb-19fb-4881-b9e3-b5b21fde0e1d/volumes/kubernetes.io~secret/certs DeviceMajor:0 DeviceMinor:1048 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-980 DeviceMajor:0 DeviceMinor:980 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/44bc88d8-9e01-4521-a704-85d9ca095baa/volumes/kubernetes.io~projected/kube-api-access-hdqzn DeviceMajor:0 DeviceMinor:1089 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-158 DeviceMajor:0 DeviceMinor:158 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-858 DeviceMajor:0 DeviceMinor:858 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-895 DeviceMajor:0 DeviceMinor:895 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-598 DeviceMajor:0 DeviceMinor:598 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/31a46ba310ff197c87c66f84e5bd99a13a3ff1f8cbacfdf28d2bf427d9553306/userdata/shm DeviceMajor:0 DeviceMinor:142 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/7b1754d73309bcf271978edbc6de885b4c5c9259799d13505300a0b3d8fb40d5/userdata/shm DeviceMajor:0 DeviceMinor:579 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6a6a187d-5b25-4d63-939e-c04e07369371/volumes/kubernetes.io~secret/encryption-config DeviceMajor:0 DeviceMinor:480 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/b43744de-5bc3-4f1d-91a4-c54e2a3a7ffc/volumes/kubernetes.io~projected/kube-api-access-4vm9c DeviceMajor:0 DeviceMinor:638 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-279 DeviceMajor:0 DeviceMinor:279 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-351 DeviceMajor:0 DeviceMinor:351 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/44bc88d8-9e01-4521-a704-85d9ca095baa/volumes/kubernetes.io~secret/kube-state-metrics-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1082 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1139 DeviceMajor:0 DeviceMinor:1139 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-140 DeviceMajor:0 DeviceMinor:140 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-840 DeviceMajor:0 DeviceMinor:840 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-409 DeviceMajor:0 DeviceMinor:409 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/65157a9b-3df7-4cc1-a85a-a5dfa59921ad/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:222 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/b36d4f6b43dcaa09ca3c55b7c20167210b34481854d09dfefb8adca147e001f9/userdata/shm DeviceMajor:0 DeviceMinor:836 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-368 DeviceMajor:0 DeviceMinor:368 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1121 DeviceMajor:0 DeviceMinor:1121 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1204 DeviceMajor:0 DeviceMinor:1204 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-136 DeviceMajor:0 DeviceMinor:136 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c1a1f09a0076728a7605f14aa2f5e1e4e67f07959fea6d30401da7eae836cc1d/userdata/shm DeviceMajor:0 DeviceMinor:263 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/123f1ecb-cc03-462b-b76f-7251bf69d3d6/volumes/kubernetes.io~secret/node-exporter-tls DeviceMajor:0 DeviceMinor:1097 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7c5fe5a51a0646232d6aeb7457e06eaa7bb1c6097a67919150bb37fc9d450327/userdata/shm DeviceMajor:0 DeviceMinor:266 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/46c99f0233d1af208b38b52f2ff5b680b12b4851bb3db1577a37ab4de1879e97/userdata/shm DeviceMajor:0 DeviceMinor:1152 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a7302efa9940a01d2a7da47b5029499cf2581b6adb8641d99af963b893a64957/userdata/shm DeviceMajor:0 DeviceMinor:42 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/210dd7f0-d1c0-407a-b89b-f11ef605e5df/volumes/kubernetes.io~projected/kube-api-access-w4sfm DeviceMajor:0 DeviceMinor:125 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/var/lib/kubelet/pods/e9425526-9f51-4302-a19d-a8107f56c582/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert DeviceMajor:0 DeviceMinor:225 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/6d26f719-43b9-4c1c-9a54-ff800177db68/volumes/kubernetes.io~projected/kube-api-access-v86j8 DeviceMajor:0 DeviceMinor:239 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/bb7b640f-22be-41a9-8ab2-e7ae817e2eb0/volumes/kubernetes.io~projected/kube-api-access-l4w7k DeviceMajor:0 DeviceMinor:624 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/6d62448d-55f1-4bdc-85aa-09e7bdf766cc/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:813 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/0f725c4a-234c-44e9-95f2-73f31d2b0fd3/volumes/kubernetes.io~projected/kube-api-access-r22fm DeviceMajor:0 DeviceMinor:312 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-393 DeviceMajor:0 DeviceMinor:393 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-282 DeviceMajor:0 DeviceMinor:282 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0b7224d61042a39a60c82074ae340c4880414bef01c57e7834a8075a7d391421/userdata/shm DeviceMajor:0 DeviceMinor:1070 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1193 DeviceMajor:0 DeviceMinor:1193 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-96 DeviceMajor:0 DeviceMinor:96 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f50b5162b61414bd7ea44a7ec549d8b7fce7a639d564096b29f4d95c071c3604/userdata/shm DeviceMajor:0 DeviceMinor:100 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/39979795a082384fa347e48c6bcdc4249850e6dc951d407d07457e2b43d36f11/userdata/shm DeviceMajor:0 DeviceMinor:420 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-585 DeviceMajor:0 DeviceMinor:585 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-164 DeviceMajor:0 DeviceMinor:164 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ca56e37d-80ea-432b-a6d9-f4e904a40e10/volumes/kubernetes.io~secret/encryption-config DeviceMajor:0 DeviceMinor:417 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/26ead2d551cb5e798df939fca56e343afb667dfeb7405f006d41371d076ea7ef/userdata/shm DeviceMajor:0 DeviceMinor:54 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1035 DeviceMajor:0 DeviceMinor:1035 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-63 DeviceMajor:0 DeviceMinor:63 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-186 DeviceMajor:0 DeviceMinor:186 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6a6a187d-5b25-4d63-939e-c04e07369371/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:488 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/68a469f8af4eca3cd7046b1dcc688320cbfedeec29ee252f144fe6c1f8fce66a/userdata/shm DeviceMajor:0 DeviceMinor:1106 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-539 DeviceMajor:0 DeviceMinor:539 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1ae4d0d7-67e6-4e0c-9265-8e48ac2d4cbf/volumes/kubernetes.io~projected/kube-api-access-fvxjl DeviceMajor:0 DeviceMinor:572 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-378 DeviceMajor:0 
DeviceMinor:378 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b6610936-e14a-4532-955c-ea1ee4222259/volumes/kubernetes.io~projected/kube-api-access-v8plf DeviceMajor:0 DeviceMinor:828 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/2ef691ec-d1f0-4c97-97e4-4aa7a6c0a86c/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:221 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/f202273a-b111-46ce-b404-7e481d2c7ff9/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls DeviceMajor:0 DeviceMinor:438 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-364 DeviceMajor:0 DeviceMinor:364 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/cf10038472bbf516505fe96b60deacd7fa47b423ffbd5ce932f981e42d79741e/userdata/shm DeviceMajor:0 DeviceMinor:449 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-459 DeviceMajor:0 DeviceMinor:459 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a48f9dbca67b195cdbe5106389856adfd54422aed83fc92bc09057a87eaa2faf/userdata/shm DeviceMajor:0 DeviceMinor:559 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-325 DeviceMajor:0 DeviceMinor:325 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/23b5f0e312ee437adb179ea398b2301b1690487e9d814d24ef554192ded477e8/userdata/shm DeviceMajor:0 DeviceMinor:288 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f8f3e1fa6ad1dbd5474f44502cbcf37e1e64719e20d78c379498d77edb6fab10/userdata/shm DeviceMajor:0 DeviceMinor:552 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1098 DeviceMajor:0 DeviceMinor:1098 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-286 DeviceMajor:0 DeviceMinor:286 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-573 DeviceMajor:0 DeviceMinor:573 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-79 DeviceMajor:0 DeviceMinor:79 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7949621e-4da6-4e43-a1f3-2ef303bf6aa6/volumes/kubernetes.io~projected/kube-api-access-j5hsj DeviceMajor:0 DeviceMinor:92 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/fec3170d-3f3e-42f5-b20a-da53721c0dac/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:217 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/6d26f719-43b9-4c1c-9a54-ff800177db68/volumes/kubernetes.io~secret/node-tuning-operator-tls DeviceMajor:0 DeviceMinor:442 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6b78ee1b02c98b4ad9c3b944fdd43e9881371557e0d7b10564d5be8bd02396af/userdata/shm DeviceMajor:0 DeviceMinor:649 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f1cd5ceb84540f7c9e7a009d076e0390ec979230bb207211f3a50905c2ec9f83/userdata/shm DeviceMajor:0 DeviceMinor:105 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-706 DeviceMajor:0 DeviceMinor:706 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/23003a2f-2053-47cc-8133-23eb886d4da0/volumes/kubernetes.io~secret/marketplace-operator-metrics DeviceMajor:0 DeviceMinor:546 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c6de2a3b0d9d7c8a3099b56864fbf63ad49e41112ad7b49c1d03c4402aae817a/userdata/shm DeviceMajor:0 DeviceMinor:560 Capacity:67108864 Type:vfs Inodes:4108170 
HasInodes:true} {Device:/var/lib/kubelet/pods/0ad95adc-2e0f-4e95-94e7-66e6d240a930/volumes/kubernetes.io~secret/prometheus-operator-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1064 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/00350ac7-b40a-4459-b94c-a37d7b613645/volumes/kubernetes.io~projected/kube-api-access-b67hn DeviceMajor:0 DeviceMinor:123 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e9929238f90c11cab18d39ce438681158ac972414d58be4a31cbc595b70dfab3/userdata/shm DeviceMajor:0 DeviceMinor:639 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1156 DeviceMajor:0 DeviceMinor:1156 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-984 DeviceMajor:0 DeviceMinor:984 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-401 DeviceMajor:0 DeviceMinor:401 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7ab32efc-7cc5-4e36-9c1c-05efb19914e2/volumes/kubernetes.io~projected/kube-api-access-55l9j DeviceMajor:0 DeviceMinor:254 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777/volumes/kubernetes.io~empty-dir/tmp DeviceMajor:0 DeviceMinor:532 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c47fa190606cd38023fc533f65cb7825afa7c8fefd6bf8e60afbd6d31f3e48e7/userdata/shm DeviceMajor:0 DeviceMinor:119 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/fec3170d-3f3e-42f5-b20a-da53721c0dac/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:227 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-155 DeviceMajor:0 DeviceMinor:155 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-306 
DeviceMajor:0 DeviceMinor:306 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/00350ac7-b40a-4459-b94c-a37d7b613645/volumes/kubernetes.io~secret/metrics-certs DeviceMajor:0 DeviceMinor:551 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-948 DeviceMajor:0 DeviceMinor:948 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-395 DeviceMajor:0 DeviceMinor:395 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/var/lib/kubelet/pods/14fe2256-ce3d-4c70-9e6b-0e3c7d4d96dc/volumes/kubernetes.io~projected/kube-api-access-hlgd7 DeviceMajor:0 DeviceMinor:833 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/e89571b2-098c-495b-9b53-c4ebd95296ab/volumes/kubernetes.io~secret/default-certificate DeviceMajor:0 DeviceMinor:1010 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/91fac3ac168ae944870f9f36626feeac950c7dd66eb021a2c366427ace9d7f09/userdata/shm DeviceMajor:0 DeviceMinor:647 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/2faf85a2-29bb-4275-a12b-0ef1663a4f0d/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:224 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-752 DeviceMajor:0 DeviceMinor:752 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-799 DeviceMajor:0 DeviceMinor:799 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/2d125bc5-08ce-434a-bde7-0ba8fc0169ea/volumes/kubernetes.io~secret/cert DeviceMajor:0 DeviceMinor:784 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-855 DeviceMajor:0 DeviceMinor:855 Capacity:214143315968 Type:vfs Inodes:104594880 
HasInodes:true} {Device:overlay_0-88 DeviceMajor:0 DeviceMinor:88 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-955 DeviceMajor:0 DeviceMinor:955 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-46 DeviceMajor:0 DeviceMinor:46 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-199 DeviceMajor:0 DeviceMinor:199 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/88728a20ccc0653acaf97665b53dae69b14ad65649feac36dc7ea652a98e2296/userdata/shm DeviceMajor:0 DeviceMinor:510 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-760 DeviceMajor:0 DeviceMinor:760 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/581a8be2-d16c-4fd8-b051-214bd60a2a91/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert DeviceMajor:0 DeviceMinor:666 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/06bf3fa7-4a9c-4e7f-aa6b-4d4f614ea047/volumes/kubernetes.io~projected/kube-api-access-ncztx DeviceMajor:0 DeviceMinor:1012 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-69 DeviceMajor:0 DeviceMinor:69 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d26a4fce-8eed-44d0-96a3-40ffd0b336a6/volume-subpaths/run-systemd/ovnkube-controller/6 DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/var/lib/kubelet/pods/fec3170d-3f3e-42f5-b20a-da53721c0dac/volumes/kubernetes.io~projected/kube-api-access-tqmzh DeviceMajor:0 DeviceMinor:250 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/bca4cc7c-839d-4877-b0aa-c07607fea404/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:464 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:overlay_0-329 DeviceMajor:0 DeviceMinor:329 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1055 DeviceMajor:0 DeviceMinor:1055 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/2ef691ec-d1f0-4c97-97e4-4aa7a6c0a86c/volumes/kubernetes.io~projected/kube-api-access-rgl8m DeviceMajor:0 DeviceMinor:229 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/acbaba45-12d9-40b9-818c-4b091d7929b1/volumes/kubernetes.io~projected/kube-api-access-kcgqr DeviceMajor:0 DeviceMinor:238 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-999 DeviceMajor:0 DeviceMinor:999 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-469 DeviceMajor:0 DeviceMinor:469 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/23003a2f-2053-47cc-8133-23eb886d4da0/volumes/kubernetes.io~projected/kube-api-access-q7gdm DeviceMajor:0 DeviceMinor:257 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/3e688aec660d80e985fc8687f7a00a0c0c268a922d791a77e1fea2fefa9b1c28/userdata/shm DeviceMajor:0 DeviceMinor:350 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/1746482a-d1a3-4eac-8bc9-643b6af75163/volumes/kubernetes.io~projected/kube-api-access-2dkqm DeviceMajor:0 DeviceMinor:380 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-341 DeviceMajor:0 DeviceMinor:341 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-670 DeviceMajor:0 DeviceMinor:670 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b7b1e72d13c6e7c1a14867c5547562b82b9b40ac636f0328d795dcff8a14b2b8/userdata/shm DeviceMajor:0 DeviceMinor:776 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/b1a6bfe0069db4370471806f444b8cbb38ac33f0aab60a3239aafba8901aaf7e/userdata/shm DeviceMajor:0 DeviceMinor:553 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-335 DeviceMajor:0 DeviceMinor:335 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e89571b2-098c-495b-9b53-c4ebd95296ab/volumes/kubernetes.io~secret/metrics-certs DeviceMajor:0 DeviceMinor:1005 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/bce60995e913b204c4470a4a4b36d406c096a66e95b110179e1a1c0fbcc39e0a/userdata/shm DeviceMajor:0 DeviceMinor:808 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-939 DeviceMajor:0 DeviceMinor:939 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/44bc88d8-9e01-4521-a704-85d9ca095baa/volumes/kubernetes.io~secret/kube-state-metrics-tls DeviceMajor:0 DeviceMinor:1096 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-588 DeviceMajor:0 DeviceMinor:588 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1123 DeviceMajor:0 DeviceMinor:1123 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-978 DeviceMajor:0 DeviceMinor:978 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-121 DeviceMajor:0 DeviceMinor:121 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-300 DeviceMajor:0 DeviceMinor:300 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ef964aa716088965516a6b12f87facd648776f7eece032982375b00853e3a703/userdata/shm DeviceMajor:0 DeviceMinor:501 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/var/lib/kubelet/pods/e9c0293a-5340-4ebe-bc8f-43e78ba9f280/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert DeviceMajor:0 DeviceMinor:707 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-755 DeviceMajor:0 DeviceMinor:755 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/3e9fa4fb66ba86c033a4b55b0ef6ca5cbcdcfa8e9fc2ffaaf2fd90f6913d2947/userdata/shm DeviceMajor:0 DeviceMinor:68 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/41253bde-5d09-4ff0-8e7c-4a21fe2b7106/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:578 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-688 DeviceMajor:0 DeviceMinor:688 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-719 DeviceMajor:0 DeviceMinor:719 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-457 DeviceMajor:0 DeviceMinor:457 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-525 DeviceMajor:0 DeviceMinor:525 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-908 DeviceMajor:0 DeviceMinor:908 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d5b579a3-74b5-4cd2-ae99-5c1d1480c2f0/volumes/kubernetes.io~projected/kube-api-access-tfgfz DeviceMajor:0 DeviceMinor:1091 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-391 DeviceMajor:0 DeviceMinor:391 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-604 DeviceMajor:0 DeviceMinor:604 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/14fe2256-ce3d-4c70-9e6b-0e3c7d4d96dc/volumes/kubernetes.io~secret/machine-approver-tls DeviceMajor:0 DeviceMinor:777 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/var/lib/kubelet/pods/6a6a187d-5b25-4d63-939e-c04e07369371/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:476 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-56 DeviceMajor:0 DeviceMinor:56 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1026 DeviceMajor:0 DeviceMinor:1026 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/71ca96e8-5108-455c-bb3c-17977d38e912/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:272 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/32d9278f90869a47d37ec354771e3c987fb65e24d65a9e7aa9b31e8b1fade86f/userdata/shm DeviceMajor:0 DeviceMinor:825 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-860 DeviceMajor:0 DeviceMinor:860 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/41ac891d-b41d-43c4-be46-35f39671477a/volumes/kubernetes.io~projected/kube-api-access-zpksq DeviceMajor:0 DeviceMinor:735 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/a88b1c81-02b5-4c85-9660-5f84c900a946/volumes/kubernetes.io~secret/webhook-certs DeviceMajor:0 DeviceMinor:1197 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-61 DeviceMajor:0 DeviceMinor:61 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9d653bfa-7168-49fa-a838-aedb33c7e60f/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:138 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/09a5682c-4f13-4b8c-8179-3e6dfa8f98db/volumes/kubernetes.io~projected/kube-api-access-8xv94 DeviceMajor:0 DeviceMinor:216 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/54f91a8b386ea81f3c1ff44f7cbcccad1987fab184d5bfad4c46374f7827fa5c/userdata/shm DeviceMajor:0 DeviceMinor:993 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-107 DeviceMajor:0 DeviceMinor:107 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d5b579a3-74b5-4cd2-ae99-5c1d1480c2f0/volumes/kubernetes.io~secret/openshift-state-metrics-tls DeviceMajor:0 DeviceMinor:1087 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/3065e4b4-4493-41ce-b9d2-89315475f74f/volumes/kubernetes.io~projected/kube-api-access-wpr8b DeviceMajor:0 DeviceMinor:236 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777/volumes/kubernetes.io~empty-dir/etc-tuned DeviceMajor:0 DeviceMinor:535 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-73 DeviceMajor:0 DeviceMinor:73 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-465 DeviceMajor:0 DeviceMinor:465 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9ce482dc-d0ac-40bc-9058-a1cfdc81575e/volumes/kubernetes.io~secret/srv-cert DeviceMajor:0 DeviceMinor:541 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-479 DeviceMajor:0 DeviceMinor:479 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-291 DeviceMajor:0 DeviceMinor:291 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-751 DeviceMajor:0 DeviceMinor:751 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/08d9196b-b68f-421b-8754-bfbaa4020a97/volumes/kubernetes.io~projected/kube-api-access-tvqv5 DeviceMajor:0 DeviceMinor:621 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/7ed933ad5ab2402e750d28bcdcc40b75fc2d12d35fd030d2dca7b16f6da20585/userdata/shm DeviceMajor:0 DeviceMinor:669 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/cc5631c5f457937102021d26dc57a94d8eb433d4f0008126fd2dc1af0f5f1218/userdata/shm DeviceMajor:0 DeviceMinor:1111 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/5707066a-bd66-41bc-8cea-cff1630ab5ee/volumes/kubernetes.io~projected/kube-api-access-2dkgv DeviceMajor:0 DeviceMinor:213 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/9635cdae-0983-4c97-b3ed-dc7a785b1bb6/volumes/kubernetes.io~projected/kube-api-access-zmssd DeviceMajor:0 DeviceMinor:405 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-862 DeviceMajor:0 DeviceMinor:862 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-149 DeviceMajor:0 DeviceMinor:149 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0cb6d987-4b59-4fd9-889a-3250c12a726c/volumes/kubernetes.io~projected/kube-api-access-v29ws DeviceMajor:0 DeviceMinor:788 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-853 DeviceMajor:0 DeviceMinor:853 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-357 DeviceMajor:0 DeviceMinor:357 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-118 DeviceMajor:0 DeviceMinor:118 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d26a4fce-8eed-44d0-96a3-40ffd0b336a6/volumes/kubernetes.io~projected/kube-api-access-s2j6m DeviceMajor:0 DeviceMinor:127 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/0118b40880c157c21da0a1b6b65535a3a28545387d34a792b84d5f5f7d802bb1/userdata/shm 
DeviceMajor:0 DeviceMinor:251 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-317 DeviceMajor:0 DeviceMinor:317 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-183 DeviceMajor:0 DeviceMinor:183 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8278eeebf68b018edbef1798293f552dd9859c6fa057a3f48528a25426e7abf3/userdata/shm DeviceMajor:0 DeviceMinor:309 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/66f60747a10071044a32fdd3eb286bdb47b644ac36047fe8a2be062c88967367/userdata/shm DeviceMajor:0 DeviceMinor:835 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/5707066a-bd66-41bc-8cea-cff1630ab5ee/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls DeviceMajor:0 DeviceMinor:550 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/08f22a0ccc0a77a9d6926aff6fb98f22a2c178ca54d526014d0e05d9f976123d/userdata/shm DeviceMajor:0 DeviceMinor:1202 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1030 DeviceMajor:0 DeviceMinor:1030 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1072 DeviceMajor:0 DeviceMinor:1072 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-383 DeviceMajor:0 DeviceMinor:383 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/bca4cc7c-839d-4877-b0aa-c07607fea404/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:463 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/f6a6e991-c861-48f5-bfde-78762a037343/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:988 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/var/lib/kubelet/pods/04466971-127b-403e-af45-dad97b6e0c87/volumes/kubernetes.io~secret/secret-metrics-client-certs DeviceMajor:0 DeviceMinor:1150 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-376 DeviceMajor:0 DeviceMinor:376 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e89571b2-098c-495b-9b53-c4ebd95296ab/volumes/kubernetes.io~secret/stats-auth DeviceMajor:0 DeviceMinor:1011 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4e3989004e344d411038c9d1f6a6052a86aa8920b399e1afd650c22f18779f11/userdata/shm DeviceMajor:0 DeviceMinor:1020 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1062 DeviceMajor:0 DeviceMinor:1062 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-94 DeviceMajor:0 DeviceMinor:94 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c29f56d4ea9bf3bce066e5fba5216f6d81c3f45eb82e43475a2e438e6dc2d99e/userdata/shm DeviceMajor:0 DeviceMinor:260 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-362 DeviceMajor:0 DeviceMinor:362 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6163bd4b-dc83-4e83-8590-5ac4753bda1c/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls DeviceMajor:0 DeviceMinor:834 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/56970553-2ac8-4cb5-a12a-b7c1e777c587/volumes/kubernetes.io~secret/samples-operator-tls DeviceMajor:0 DeviceMinor:725 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/a79bf8fb-19fb-4881-b9e3-b5b21fde0e1d/volumes/kubernetes.io~projected/kube-api-access-btwhr DeviceMajor:0 DeviceMinor:1057 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-99 DeviceMajor:0 
DeviceMinor:99 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1060 DeviceMajor:0 DeviceMinor:1060 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ff2dfe9d-2834-43cb-b093-0831b2b87131/volumes/kubernetes.io~projected/kube-api-access-zsj2w DeviceMajor:0 DeviceMinor:245 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-608 DeviceMajor:0 DeviceMinor:608 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/5c5ae9bfcc3ce85bdfe3cccc194f20c35db6cc7998e4967e566b59f8729c9691/userdata/shm DeviceMajor:0 DeviceMinor:490 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c36c31fbbcf87c5d54cc8e014278bdb215440e9d5e4a9526984baeadd5fbfa6f/userdata/shm DeviceMajor:0 DeviceMinor:668 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-331 DeviceMajor:0 DeviceMinor:331 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-179 DeviceMajor:0 DeviceMinor:179 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-421 DeviceMajor:0 DeviceMinor:421 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4c9b8a0c-1ead-49b5-8b05-68e6f25ddefc/volumes/kubernetes.io~projected/kube-api-access-mmk45 DeviceMajor:0 DeviceMinor:1017 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-161 DeviceMajor:0 DeviceMinor:161 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a79bf8fb-19fb-4881-b9e3-b5b21fde0e1d/volumes/kubernetes.io~secret/node-bootstrap-token DeviceMajor:0 DeviceMinor:1049 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/407f7a172ca7923af3036a3a5081e3f6bc925e32d3851562fb93dfcb79785b17/userdata/shm DeviceMajor:0 
DeviceMinor:60 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/3065e4b4-4493-41ce-b9d2-89315475f74f/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:220 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/6a6a187d-5b25-4d63-939e-c04e07369371/volumes/kubernetes.io~projected/kube-api-access-br4bc DeviceMajor:0 DeviceMinor:489 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-471 DeviceMajor:0 DeviceMinor:471 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-467 DeviceMajor:0 DeviceMinor:467 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/22f85e98-eb36-46b2-ab5d-7c21e060cba5/volumes/kubernetes.io~projected/kube-api-access-8qqcw DeviceMajor:0 DeviceMinor:215 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/20ff930f-ec0d-40ed-a879-1546691f685d/volumes/kubernetes.io~projected/kube-api-access-d5v7l DeviceMajor:0 DeviceMinor:253 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-687 DeviceMajor:0 DeviceMinor:687 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1d602414649c8268857260746c9b07c7eebb871e3592e5e80020d1637e9816cc/userdata/shm DeviceMajor:0 DeviceMinor:374 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/e9c0293a-5340-4ebe-bc8f-43e78ba9f280/volumes/kubernetes.io~projected/kube-api-access-ns97v DeviceMajor:0 DeviceMinor:729 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/7ab32efc-7cc5-4e36-9c1c-05efb19914e2/volumes/kubernetes.io~secret/srv-cert DeviceMajor:0 DeviceMinor:545 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/var/lib/kubelet/pods/f6a6e991-c861-48f5-bfde-78762a037343/volumes/kubernetes.io~projected/kube-api-access-rf9kc DeviceMajor:0 DeviceMinor:992 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-537 DeviceMajor:0 DeviceMinor:537 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9817d1ec-3d7c-49fb-8e41-26f5727ef9e8/volumes/kubernetes.io~projected/kube-api-access-swxwt DeviceMajor:0 DeviceMinor:91 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-174 DeviceMajor:0 DeviceMinor:174 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0cb6d987-4b59-4fd9-889a-3250c12a726c/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:789 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4767ac5e1fdc3320e004401bc470473fa3834d94268bcd37051a5ed0f54f6980/userdata/shm DeviceMajor:0 DeviceMinor:296 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-487 DeviceMajor:0 DeviceMinor:487 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-575 DeviceMajor:0 DeviceMinor:575 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/61ab4d32-c732-4be5-aa85-a2e1dd21cb60/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:226 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f13b0447f1cf8ebd279a6530a199c8c8c26e292eacc831f21854583254577b3a/userdata/shm DeviceMajor:0 DeviceMinor:276 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-294 DeviceMajor:0 DeviceMinor:294 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1028 DeviceMajor:0 DeviceMinor:1028 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/9d5e5cd531f78ff97bd0331258baf0fd5a066b5864af8128f7ac14fe1eeaebc5/userdata/shm DeviceMajor:0 DeviceMinor:41 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/210dd7f0-d1c0-407a-b89b-f11ef605e5df/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert DeviceMajor:0 DeviceMinor:124 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/57189f7c-5987-457d-a299-0a6b9bcb3e24/volumes/kubernetes.io~secret/image-registry-operator-tls DeviceMajor:0 DeviceMinor:441 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-651 DeviceMajor:0 DeviceMinor:651 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-655 DeviceMajor:0 DeviceMinor:655 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/56970553-2ac8-4cb5-a12a-b7c1e777c587/volumes/kubernetes.io~projected/kube-api-access-zrbnx DeviceMajor:0 DeviceMinor:732 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/0cb6d987-4b59-4fd9-889a-3250c12a726c/volumes/kubernetes.io~secret/apiservice-cert DeviceMajor:0 DeviceMinor:787 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/22ff82cf-0d7d-4955-9b7c-97757acbc021/volumes/kubernetes.io~projected/kube-api-access-sglvd DeviceMajor:0 DeviceMinor:104 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-160 DeviceMajor:0 DeviceMinor:160 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/2faf85a2-29bb-4275-a12b-0ef1663a4f0d/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:228 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-874 DeviceMajor:0 DeviceMinor:874 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-696 DeviceMajor:0 DeviceMinor:696 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6d26f719-43b9-4c1c-9a54-ff800177db68/volumes/kubernetes.io~secret/apiservice-cert DeviceMajor:0 DeviceMinor:440 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a7182dd72430d58b49f5e018c12acac4da1770843a5e54cf2decb77fe298b875/userdata/shm DeviceMajor:0 DeviceMinor:591 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-577 DeviceMajor:0 DeviceMinor:577 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1119 DeviceMajor:0 DeviceMinor:1119 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-48 DeviceMajor:0 DeviceMinor:48 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-933 DeviceMajor:0 DeviceMinor:933 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1099 DeviceMajor:0 DeviceMinor:1099 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1208 DeviceMajor:0 DeviceMinor:1208 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/04466971-127b-403e-af45-dad97b6e0c87/volumes/kubernetes.io~secret/client-ca-bundle DeviceMajor:0 DeviceMinor:1149 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/80ddf0a4-e853-4de0-b540-81144dfdd31d/volumes/kubernetes.io~projected/kube-api-access-pgffp DeviceMajor:0 DeviceMinor:803 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/2d125bc5-08ce-434a-bde7-0ba8fc0169ea/volumes/kubernetes.io~projected/kube-api-access-hmb9v DeviceMajor:0 DeviceMinor:804 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-844 DeviceMajor:0 DeviceMinor:844 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/d5b579a3-74b5-4cd2-ae99-5c1d1480c2f0/volumes/kubernetes.io~secret/openshift-state-metrics-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1086 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/9817d1ec-3d7c-49fb-8e41-26f5727ef9e8/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:43 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-503 DeviceMajor:0 DeviceMinor:503 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6018dc62d387a9b77f99180b9b59d3182e437f628eb7fce91bb3764fe4982ba6/userdata/shm DeviceMajor:0 DeviceMinor:950 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/0ad95adc-2e0f-4e95-94e7-66e6d240a930/volumes/kubernetes.io~projected/kube-api-access-5lnpz DeviceMajor:0 DeviceMinor:1069 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-780 DeviceMajor:0 DeviceMinor:780 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a5a71eafba7fd094c1b9785d7c1fd9e98b46812d646ac6843a8a763f472e8750/userdata/shm DeviceMajor:0 DeviceMinor:278 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-302 DeviceMajor:0 DeviceMinor:302 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/5baf379ef595e5427aa5f7376ffa996583f39c05c81ca9fe28df973ed2c426be/userdata/shm DeviceMajor:0 DeviceMinor:811 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/123f1ecb-cc03-462b-b76f-7251bf69d3d6/volumes/kubernetes.io~secret/node-exporter-kube-rbac-proxy-config DeviceMajor:0 DeviceMinor:1088 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/e9425526-9f51-4302-a19d-a8107f56c582/volumes/kubernetes.io~projected/kube-api-access-z5kbh 
DeviceMajor:0 DeviceMinor:243 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-847 DeviceMajor:0 DeviceMinor:847 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/14ef046f-b284-457f-ad7a-b7958cb82dd5/volumes/kubernetes.io~secret/tls-certificates DeviceMajor:0 DeviceMinor:1009 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/9d653bfa-7168-49fa-a838-aedb33c7e60f/volumes/kubernetes.io~projected/kube-api-access-8jmlf DeviceMajor:0 DeviceMinor:139 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/d26a4fce-8eed-44d0-96a3-40ffd0b336a6/volumes/kubernetes.io~secret/ovn-node-metrics-cert DeviceMajor:0 DeviceMinor:126 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e19e3ca7f7f87202999ccf51b5e641a2b701234ac17e2a8733f102ed0960e44b/userdata/shm DeviceMajor:0 DeviceMinor:443 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-612 DeviceMajor:0 DeviceMinor:612 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b3076d6176cd94c8a21c722732d97de0437f9e83160ea4c57d3d59e61e4a74e3/userdata/shm DeviceMajor:0 DeviceMinor:231 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-274 DeviceMajor:0 DeviceMinor:274 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/08d9196b-b68f-421b-8754-bfbaa4020a97/volumes/kubernetes.io~projected/ca-certs DeviceMajor:0 DeviceMinor:620 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-147 DeviceMajor:0 DeviceMinor:147 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-389 DeviceMajor:0 DeviceMinor:389 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-636 DeviceMajor:0 DeviceMinor:636 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1746482a-d1a3-4eac-8bc9-643b6af75163/volumes/kubernetes.io~secret/signing-key DeviceMajor:0 DeviceMinor:379 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-768 DeviceMajor:0 DeviceMinor:768 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/22f85e98-eb36-46b2-ab5d-7c21e060cba5/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:214 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/36fd86042cdc5d322b686c2b108009cab15460fe5a8fde9f08be705f3ff47a25/userdata/shm DeviceMajor:0 DeviceMinor:247 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/33917a2945cfdd96fc5917acad69a7843047715ba145c81978cea2bef30f460e/userdata/shm DeviceMajor:0 DeviceMinor:555 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-876 DeviceMajor:0 DeviceMinor:876 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-916 DeviceMajor:0 DeviceMinor:916 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a2a3df6e-e327-4e97-b8f0-f2d6cdd1e5f9/volumes/kubernetes.io~projected/kube-api-access-rqgkl DeviceMajor:0 DeviceMinor:318 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/04466971-127b-403e-af45-dad97b6e0c87/volumes/kubernetes.io~secret/secret-metrics-server-tls DeviceMajor:0 DeviceMinor:1145 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-58 DeviceMajor:0 DeviceMinor:58 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/64ca7ad287a18077a9681b1e546ec20fe155067ef4ae153360b9f6ad5ecbcb02/userdata/shm DeviceMajor:0 DeviceMinor:1092 Capacity:67108864 Type:vfs Inodes:4108170 
HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:0118b40880c157c MacAddress:4a:44:a2:ab:4c:84 Speed:10000 Mtu:8900} {Name:08f22a0ccc0a77a MacAddress:3e:e1:f1:ba:d9:cc Speed:10000 Mtu:8900} {Name:0b7224d61042a39 MacAddress:d6:4a:c0:94:15:1d Speed:10000 Mtu:8900} {Name:14986cdcb6c65fc MacAddress:7e:94:8a:a1:86:7e Speed:10000 Mtu:8900} {Name:1522904bcce5d0a MacAddress:8a:c0:df:d7:ab:00 Speed:10000 Mtu:8900} {Name:23b5f0e312ee437 MacAddress:7a:df:88:79:d9:92 Speed:10000 Mtu:8900} {Name:32d9278f90869a4 MacAddress:36:cd:52:f7:0a:fd Speed:10000 Mtu:8900} {Name:33917a2945cfdd9 MacAddress:f6:69:f7:5a:f9:2c Speed:10000 Mtu:8900} {Name:36fd86042cdc5d3 MacAddress:8e:51:a2:a3:be:84 Speed:10000 Mtu:8900} {Name:389639e7370bc06 MacAddress:1e:f8:89:e1:e7:c4 Speed:10000 Mtu:8900} {Name:39979795a082384 MacAddress:46:2d:9f:1a:1a:83 Speed:10000 Mtu:8900} {Name:3e688aec660d80e MacAddress:9e:ed:1e:32:89:6c Speed:10000 Mtu:8900} {Name:46c99f0233d1af2 MacAddress:be:d8:ec:f2:51:ed Speed:10000 Mtu:8900} {Name:4767ac5e1fdc332 MacAddress:9a:0e:11:c3:86:d0 Speed:10000 Mtu:8900} {Name:4a62432d7ca6978 MacAddress:1e:1d:46:c8:ba:06 Speed:10000 Mtu:8900} {Name:4aa19d8b0c30c05 MacAddress:12:78:11:a2:63:ce Speed:10000 Mtu:8900} {Name:54f91a8b386ea81 MacAddress:ae:a2:58:43:d3:d1 Speed:10000 Mtu:8900} {Name:5a96373b7ec998e MacAddress:92:88:58:91:87:cf Speed:10000 Mtu:8900} {Name:5baf379ef595e54 MacAddress:d6:d9:03:6b:18:f0 Speed:10000 Mtu:8900} {Name:5c5ae9bfcc3ce85 MacAddress:fa:bd:28:62:69:00 Speed:10000 Mtu:8900} {Name:60a1d6091214ba0 MacAddress:52:d5:c7:59:c0:95 Speed:10000 Mtu:8900} {Name:64ca7ad287a1807 MacAddress:16:b2:2b:c2:c4:ef Speed:10000 Mtu:8900} 
{Name:66f60747a100710 MacAddress:2a:d3:82:c2:ac:75 Speed:10000 Mtu:8900} {Name:68a469f8af4eca3 MacAddress:ea:a5:1a:60:fc:16 Speed:10000 Mtu:8900} {Name:6b78ee1b02c98b4 MacAddress:76:9f:5f:19:c3:77 Speed:10000 Mtu:8900} {Name:702713f2f961460 MacAddress:52:1b:74:45:e6:62 Speed:10000 Mtu:8900} {Name:7551d0384a0ca5d MacAddress:3a:2c:d5:31:54:80 Speed:10000 Mtu:8900} {Name:8278eeebf68b018 MacAddress:46:65:2d:2e:7d:20 Speed:10000 Mtu:8900} {Name:88728a20ccc0653 MacAddress:0a:46:70:ab:ca:f5 Speed:10000 Mtu:8900} {Name:8c5a039db74fb9e MacAddress:26:a1:c8:58:20:1a Speed:10000 Mtu:8900} {Name:91fac3ac168ae94 MacAddress:8e:46:0f:c3:1d:5c Speed:10000 Mtu:8900} {Name:9540823dea8e010 MacAddress:46:d9:c7:ff:0a:d4 Speed:10000 Mtu:8900} {Name:a48f9dbca67b195 MacAddress:ca:4f:97:20:c9:4f Speed:10000 Mtu:8900} {Name:a5a71eafba7fd09 MacAddress:e6:2b:90:2d:6f:e5 Speed:10000 Mtu:8900} {Name:a7182dd72430d58 MacAddress:1a:9b:2a:dc:e2:06 Speed:10000 Mtu:8900} {Name:a9a866857afbf6e MacAddress:32:69:cb:3f:6d:e3 Speed:10000 Mtu:8900} {Name:aab851b1602b7dc MacAddress:5e:12:71:6a:c2:1f Speed:10000 Mtu:8900} {Name:b1a6bfe0069db43 MacAddress:fe:00:1e:79:24:f6 Speed:10000 Mtu:8900} {Name:b3076d6176cd94c MacAddress:66:4b:e3:44:ad:8f Speed:10000 Mtu:8900} {Name:b36d4f6b43dcaa0 MacAddress:ca:19:a2:a8:76:25 Speed:10000 Mtu:8900} {Name:b7b1e72d13c6e7c MacAddress:5a:fa:43:05:b6:47 Speed:10000 Mtu:8900} {Name:b7d9c365d304102 MacAddress:0e:fb:00:a2:71:b4 Speed:10000 Mtu:8900} {Name:b9cc3cdb71ca86a MacAddress:06:e8:fe:7f:d8:9c Speed:10000 Mtu:8900} {Name:bce60995e913b20 MacAddress:4a:9a:ca:49:e4:f9 Speed:10000 Mtu:8900} {Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:br-int MacAddress:9a:ca:04:00:30:20 Speed:0 Mtu:8900} {Name:c1a1f09a0076728 MacAddress:b2:52:2a:38:69:fb Speed:10000 Mtu:8900} {Name:c29f56d4ea9bf3b MacAddress:ba:4e:37:e4:af:a7 Speed:10000 Mtu:8900} {Name:c36c31fbbcf87c5 MacAddress:7a:fb:2a:36:b8:19 Speed:10000 Mtu:8900} {Name:c6de2a3b0d9d7c8 MacAddress:8a:5d:f0:a5:5d:13 
Speed:10000 Mtu:8900} {Name:cf10038472bbf51 MacAddress:82:98:0f:3e:6a:0e Speed:10000 Mtu:8900} {Name:d80ff220fd3e8f2 MacAddress:1e:81:e5:53:98:83 Speed:10000 Mtu:8900} {Name:e19e3ca7f7f8720 MacAddress:d6:1d:5d:3b:b5:bb Speed:10000 Mtu:8900} {Name:e752098827604ca MacAddress:5a:d7:a2:e7:80:f1 Speed:10000 Mtu:8900} {Name:e9929238f90c11c MacAddress:a2:75:5f:2c:58:e9 Speed:10000 Mtu:8900} {Name:ebb4000c1fd7b5e MacAddress:32:c8:fb:fd:82:26 Speed:10000 Mtu:8900} {Name:ef964aa71608896 MacAddress:ea:db:a6:00:f3:66 Speed:10000 Mtu:8900} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:64:4a:87 Speed:-1 Mtu:9000} {Name:f13b0447f1cf8eb MacAddress:c6:02:43:9c:b6:67 Speed:10000 Mtu:8900} {Name:f8f3e1fa6ad1dbd MacAddress:f6:75:5b:bb:26:88 Speed:10000 Mtu:8900} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:80:00:02 Speed:0 Mtu:8900} {Name:ovs-system MacAddress:1e:1b:15:bf:6f:99 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] 
Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Mar 20 08:49:50.009473 master-0 kubenswrapper[27820]: I0320 
08:49:50.008606 27820 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Mar 20 08:49:50.009473 master-0 kubenswrapper[27820]: I0320 08:49:50.008706 27820 manager.go:233] Version: {KernelVersion:5.14.0-427.113.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202603021444-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Mar 20 08:49:50.009473 master-0 kubenswrapper[27820]: I0320 08:49:50.008941 27820 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 20 08:49:50.009473 master-0 kubenswrapper[27820]: I0320 08:49:50.009139 27820 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 20 08:49:50.009473 master-0 kubenswrapper[27820]: I0320 08:49:50.009166 27820 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity
":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 20 08:49:50.009473 master-0 kubenswrapper[27820]: I0320 08:49:50.009400 27820 topology_manager.go:138] "Creating topology manager with none policy" Mar 20 08:49:50.009473 master-0 kubenswrapper[27820]: I0320 08:49:50.009413 27820 container_manager_linux.go:303] "Creating device plugin manager" Mar 20 08:49:50.009473 master-0 kubenswrapper[27820]: I0320 08:49:50.009424 27820 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 20 08:49:50.009473 master-0 kubenswrapper[27820]: I0320 08:49:50.009445 27820 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 20 08:49:50.009473 master-0 kubenswrapper[27820]: I0320 08:49:50.009480 27820 state_mem.go:36] "Initialized new in-memory state store" Mar 20 08:49:50.010870 master-0 kubenswrapper[27820]: I0320 08:49:50.009596 27820 server.go:1245] "Using root directory" path="/var/lib/kubelet" Mar 20 08:49:50.010870 master-0 kubenswrapper[27820]: I0320 08:49:50.009665 27820 kubelet.go:418] "Attempting to sync node with API server" Mar 20 08:49:50.010870 master-0 kubenswrapper[27820]: I0320 08:49:50.009681 27820 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 20 08:49:50.010870 master-0 kubenswrapper[27820]: I0320 08:49:50.009697 27820 file.go:69] "Watching path" 
path="/etc/kubernetes/manifests" Mar 20 08:49:50.010870 master-0 kubenswrapper[27820]: I0320 08:49:50.009712 27820 kubelet.go:324] "Adding apiserver pod source" Mar 20 08:49:50.010870 master-0 kubenswrapper[27820]: I0320 08:49:50.009725 27820 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 20 08:49:50.011579 master-0 kubenswrapper[27820]: I0320 08:49:50.010940 27820 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-8.rhaos4.18.gitd78977c.el9" apiVersion="v1" Mar 20 08:49:50.011625 master-0 kubenswrapper[27820]: I0320 08:49:50.011603 27820 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Mar 20 08:49:50.012075 master-0 kubenswrapper[27820]: I0320 08:49:50.012040 27820 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 20 08:49:50.012327 master-0 kubenswrapper[27820]: I0320 08:49:50.012299 27820 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Mar 20 08:49:50.012436 master-0 kubenswrapper[27820]: I0320 08:49:50.012337 27820 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Mar 20 08:49:50.012436 master-0 kubenswrapper[27820]: I0320 08:49:50.012352 27820 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Mar 20 08:49:50.012436 master-0 kubenswrapper[27820]: I0320 08:49:50.012366 27820 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Mar 20 08:49:50.012436 master-0 kubenswrapper[27820]: I0320 08:49:50.012411 27820 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Mar 20 08:49:50.012436 master-0 kubenswrapper[27820]: I0320 08:49:50.012430 27820 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Mar 20 08:49:50.012624 master-0 kubenswrapper[27820]: I0320 08:49:50.012444 27820 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/iscsi" Mar 20 08:49:50.012624 master-0 kubenswrapper[27820]: I0320 08:49:50.012458 27820 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Mar 20 08:49:50.012624 master-0 kubenswrapper[27820]: I0320 08:49:50.012471 27820 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Mar 20 08:49:50.012624 master-0 kubenswrapper[27820]: I0320 08:49:50.012482 27820 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Mar 20 08:49:50.012624 master-0 kubenswrapper[27820]: I0320 08:49:50.012498 27820 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Mar 20 08:49:50.012624 master-0 kubenswrapper[27820]: I0320 08:49:50.012520 27820 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Mar 20 08:49:50.012624 master-0 kubenswrapper[27820]: I0320 08:49:50.012581 27820 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Mar 20 08:49:50.013816 master-0 kubenswrapper[27820]: I0320 08:49:50.013786 27820 server.go:1280] "Started kubelet" Mar 20 08:49:50.015824 master-0 kubenswrapper[27820]: I0320 08:49:50.015718 27820 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 20 08:49:50.016235 master-0 kubenswrapper[27820]: I0320 08:49:50.015842 27820 server_v1.go:47] "podresources" method="list" useActivePods=true Mar 20 08:49:50.016648 master-0 kubenswrapper[27820]: I0320 08:49:50.016615 27820 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 20 08:49:50.016740 master-0 kubenswrapper[27820]: I0320 08:49:50.016696 27820 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 20 08:49:50.024997 master-0 kubenswrapper[27820]: I0320 08:49:50.024866 27820 server.go:449] "Adding debug handlers to kubelet server" Mar 20 08:49:50.025119 master-0 systemd[1]: Started Kubernetes Kubelet. 
Mar 20 08:49:50.025885 master-0 kubenswrapper[27820]: I0320 08:49:50.025839 27820 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 20 08:49:50.029601 master-0 kubenswrapper[27820]: I0320 08:49:50.028755 27820 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 20 08:49:50.034610 master-0 kubenswrapper[27820]: I0320 08:49:50.034566 27820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Mar 20 08:49:50.034698 master-0 kubenswrapper[27820]: I0320 08:49:50.034676 27820 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 20 08:49:50.034762 master-0 kubenswrapper[27820]: I0320 08:49:50.034674 27820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-03-21 08:25:28 +0000 UTC, rotation deadline is 2026-03-21 03:46:34.970209567 +0000 UTC Mar 20 08:49:50.034798 master-0 kubenswrapper[27820]: I0320 08:49:50.034762 27820 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 18h56m44.935452754s for next certificate rotation Mar 20 08:49:50.034970 master-0 kubenswrapper[27820]: I0320 08:49:50.034948 27820 volume_manager.go:287] "The desired_state_of_world populator starts" Mar 20 08:49:50.035048 master-0 kubenswrapper[27820]: I0320 08:49:50.034972 27820 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 20 08:49:50.035403 master-0 kubenswrapper[27820]: I0320 08:49:50.035158 27820 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Mar 20 08:49:50.035883 master-0 kubenswrapper[27820]: I0320 08:49:50.035771 27820 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Mar 20 08:49:50.035883 master-0 kubenswrapper[27820]: I0320 08:49:50.035804 27820 
factory.go:55] Registering systemd factory Mar 20 08:49:50.035883 master-0 kubenswrapper[27820]: I0320 08:49:50.035815 27820 factory.go:221] Registration of the systemd container factory successfully Mar 20 08:49:50.037040 master-0 kubenswrapper[27820]: I0320 08:49:50.037009 27820 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 20 08:49:50.038658 master-0 kubenswrapper[27820]: E0320 08:49:50.038588 27820 kubelet.go:1495] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache" Mar 20 08:49:50.038789 master-0 kubenswrapper[27820]: I0320 08:49:50.038756 27820 factory.go:153] Registering CRI-O factory Mar 20 08:49:50.038848 master-0 kubenswrapper[27820]: I0320 08:49:50.038798 27820 factory.go:221] Registration of the crio container factory successfully Mar 20 08:49:50.038848 master-0 kubenswrapper[27820]: I0320 08:49:50.038840 27820 factory.go:103] Registering Raw factory Mar 20 08:49:50.038910 master-0 kubenswrapper[27820]: I0320 08:49:50.038865 27820 manager.go:1196] Started watching for new ooms in manager Mar 20 08:49:50.041069 master-0 kubenswrapper[27820]: I0320 08:49:50.041032 27820 manager.go:319] Starting recovery of all containers Mar 20 08:49:50.061173 master-0 kubenswrapper[27820]: I0320 08:49:50.061039 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="80ddf0a4-e853-4de0-b540-81144dfdd31d" volumeName="kubernetes.io/secret/80ddf0a4-e853-4de0-b540-81144dfdd31d-machine-api-operator-tls" seLinuxMountContext="" Mar 20 08:49:50.061173 master-0 kubenswrapper[27820]: I0320 08:49:50.061133 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0ad95adc-2e0f-4e95-94e7-66e6d240a930" volumeName="kubernetes.io/projected/0ad95adc-2e0f-4e95-94e7-66e6d240a930-kube-api-access-5lnpz" seLinuxMountContext="" Mar 20 
08:49:50.061173 master-0 kubenswrapper[27820]: I0320 08:49:50.061162 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ff930f-ec0d-40ed-a879-1546691f685d" volumeName="kubernetes.io/secret/20ff930f-ec0d-40ed-a879-1546691f685d-serving-cert" seLinuxMountContext="" Mar 20 08:49:50.061462 master-0 kubenswrapper[27820]: I0320 08:49:50.061186 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6d26f719-43b9-4c1c-9a54-ff800177db68" volumeName="kubernetes.io/secret/6d26f719-43b9-4c1c-9a54-ff800177db68-node-tuning-operator-tls" seLinuxMountContext="" Mar 20 08:49:50.061462 master-0 kubenswrapper[27820]: I0320 08:49:50.061208 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6d62448d-55f1-4bdc-85aa-09e7bdf766cc" volumeName="kubernetes.io/configmap/6d62448d-55f1-4bdc-85aa-09e7bdf766cc-trusted-ca-bundle" seLinuxMountContext="" Mar 20 08:49:50.061654 master-0 kubenswrapper[27820]: I0320 08:49:50.061569 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0ad95adc-2e0f-4e95-94e7-66e6d240a930" volumeName="kubernetes.io/secret/0ad95adc-2e0f-4e95-94e7-66e6d240a930-prometheus-operator-tls" seLinuxMountContext="" Mar 20 08:49:50.061654 master-0 kubenswrapper[27820]: I0320 08:49:50.061601 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d26a4fce-8eed-44d0-96a3-40ffd0b336a6" volumeName="kubernetes.io/configmap/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-env-overrides" seLinuxMountContext="" Mar 20 08:49:50.061654 master-0 kubenswrapper[27820]: I0320 08:49:50.061619 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="14ef046f-b284-457f-ad7a-b7958cb82dd5" volumeName="kubernetes.io/secret/14ef046f-b284-457f-ad7a-b7958cb82dd5-tls-certificates" seLinuxMountContext="" Mar 
20 08:49:50.061654 master-0 kubenswrapper[27820]: I0320 08:49:50.061639 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57189f7c-5987-457d-a299-0a6b9bcb3e24" volumeName="kubernetes.io/configmap/57189f7c-5987-457d-a299-0a6b9bcb3e24-trusted-ca" seLinuxMountContext="" Mar 20 08:49:50.061813 master-0 kubenswrapper[27820]: I0320 08:49:50.061658 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6163bd4b-dc83-4e83-8590-5ac4753bda1c" volumeName="kubernetes.io/configmap/6163bd4b-dc83-4e83-8590-5ac4753bda1c-images" seLinuxMountContext="" Mar 20 08:49:50.061813 master-0 kubenswrapper[27820]: I0320 08:49:50.061676 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="890a6c24-1dbb-4331-952b-5712ac00788e" volumeName="kubernetes.io/projected/890a6c24-1dbb-4331-952b-5712ac00788e-kube-api-access-7bxn6" seLinuxMountContext="" Mar 20 08:49:50.061813 master-0 kubenswrapper[27820]: I0320 08:49:50.061767 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="06bf3fa7-4a9c-4e7f-aa6b-4d4f614ea047" volumeName="kubernetes.io/projected/06bf3fa7-4a9c-4e7f-aa6b-4d4f614ea047-kube-api-access-ncztx" seLinuxMountContext="" Mar 20 08:49:50.061813 master-0 kubenswrapper[27820]: I0320 08:49:50.061786 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2faf85a2-29bb-4275-a12b-0ef1663a4f0d" volumeName="kubernetes.io/secret/2faf85a2-29bb-4275-a12b-0ef1663a4f0d-serving-cert" seLinuxMountContext="" Mar 20 08:49:50.061813 master-0 kubenswrapper[27820]: I0320 08:49:50.061810 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="97ad1db7-0bf9-4faf-9fa5-0f3df7dab777" volumeName="kubernetes.io/empty-dir/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777-tmp" seLinuxMountContext="" Mar 20 08:49:50.061990 
master-0 kubenswrapper[27820]: I0320 08:49:50.061827 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fec3170d-3f3e-42f5-b20a-da53721c0dac" volumeName="kubernetes.io/configmap/fec3170d-3f3e-42f5-b20a-da53721c0dac-etcd-ca" seLinuxMountContext="" Mar 20 08:49:50.061990 master-0 kubenswrapper[27820]: I0320 08:49:50.061850 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6163bd4b-dc83-4e83-8590-5ac4753bda1c" volumeName="kubernetes.io/secret/6163bd4b-dc83-4e83-8590-5ac4753bda1c-cloud-controller-manager-operator-tls" seLinuxMountContext="" Mar 20 08:49:50.061990 master-0 kubenswrapper[27820]: I0320 08:49:50.061867 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a6a187d-5b25-4d63-939e-c04e07369371" volumeName="kubernetes.io/configmap/6a6a187d-5b25-4d63-939e-c04e07369371-audit-policies" seLinuxMountContext="" Mar 20 08:49:50.061990 master-0 kubenswrapper[27820]: I0320 08:49:50.061884 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a86af6a2-55a9-4c4e-8caf-1f51fedb23f5" volumeName="kubernetes.io/projected/a86af6a2-55a9-4c4e-8caf-1f51fedb23f5-kube-api-access-x82xz" seLinuxMountContext="" Mar 20 08:49:50.061990 master-0 kubenswrapper[27820]: I0320 08:49:50.061903 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ca56e37d-80ea-432b-a6d9-f4e904a40e10" volumeName="kubernetes.io/configmap/ca56e37d-80ea-432b-a6d9-f4e904a40e10-trusted-ca-bundle" seLinuxMountContext="" Mar 20 08:49:50.061990 master-0 kubenswrapper[27820]: I0320 08:49:50.061921 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7ab32efc-7cc5-4e36-9c1c-05efb19914e2" volumeName="kubernetes.io/projected/7ab32efc-7cc5-4e36-9c1c-05efb19914e2-kube-api-access-55l9j" seLinuxMountContext="" 
Mar 20 08:49:50.061990 master-0 kubenswrapper[27820]: I0320 08:49:50.061938 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bca4cc7c-839d-4877-b0aa-c07607fea404" volumeName="kubernetes.io/secret/bca4cc7c-839d-4877-b0aa-c07607fea404-serving-cert" seLinuxMountContext="" Mar 20 08:49:50.061990 master-0 kubenswrapper[27820]: I0320 08:49:50.061992 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ca56e37d-80ea-432b-a6d9-f4e904a40e10" volumeName="kubernetes.io/projected/ca56e37d-80ea-432b-a6d9-f4e904a40e10-kube-api-access-jxqp4" seLinuxMountContext="" Mar 20 08:49:50.062355 master-0 kubenswrapper[27820]: I0320 08:49:50.062011 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e9425526-9f51-4302-a19d-a8107f56c582" volumeName="kubernetes.io/empty-dir/e9425526-9f51-4302-a19d-a8107f56c582-operand-assets" seLinuxMountContext="" Mar 20 08:49:50.062355 master-0 kubenswrapper[27820]: I0320 08:49:50.062027 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09a5682c-4f13-4b8c-8179-3e6dfa8f98db" volumeName="kubernetes.io/projected/09a5682c-4f13-4b8c-8179-3e6dfa8f98db-kube-api-access-8xv94" seLinuxMountContext="" Mar 20 08:49:50.062355 master-0 kubenswrapper[27820]: I0320 08:49:50.062043 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6163bd4b-dc83-4e83-8590-5ac4753bda1c" volumeName="kubernetes.io/configmap/6163bd4b-dc83-4e83-8590-5ac4753bda1c-auth-proxy-config" seLinuxMountContext="" Mar 20 08:49:50.062355 master-0 kubenswrapper[27820]: I0320 08:49:50.062092 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6163bd4b-dc83-4e83-8590-5ac4753bda1c" volumeName="kubernetes.io/projected/6163bd4b-dc83-4e83-8590-5ac4753bda1c-kube-api-access-zbzl9" 
seLinuxMountContext="" Mar 20 08:49:50.062355 master-0 kubenswrapper[27820]: I0320 08:49:50.062122 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6d26f719-43b9-4c1c-9a54-ff800177db68" volumeName="kubernetes.io/projected/6d26f719-43b9-4c1c-9a54-ff800177db68-kube-api-access-v86j8" seLinuxMountContext="" Mar 20 08:49:50.062355 master-0 kubenswrapper[27820]: I0320 08:49:50.062140 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="14fe2256-ce3d-4c70-9e6b-0e3c7d4d96dc" volumeName="kubernetes.io/secret/14fe2256-ce3d-4c70-9e6b-0e3c7d4d96dc-machine-approver-tls" seLinuxMountContext="" Mar 20 08:49:50.062355 master-0 kubenswrapper[27820]: I0320 08:49:50.062159 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="64d09f81-5fb6-462a-a736-5649779a6b1a" volumeName="kubernetes.io/projected/64d09f81-5fb6-462a-a736-5649779a6b1a-kube-api-access-7w8xs" seLinuxMountContext="" Mar 20 08:49:50.062355 master-0 kubenswrapper[27820]: I0320 08:49:50.062176 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f6a6e991-c861-48f5-bfde-78762a037343" volumeName="kubernetes.io/configmap/f6a6e991-c861-48f5-bfde-78762a037343-mcc-auth-proxy-config" seLinuxMountContext="" Mar 20 08:49:50.062355 master-0 kubenswrapper[27820]: I0320 08:49:50.062197 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ff2dfe9d-2834-43cb-b093-0831b2b87131" volumeName="kubernetes.io/secret/ff2dfe9d-2834-43cb-b093-0831b2b87131-metrics-tls" seLinuxMountContext="" Mar 20 08:49:50.062355 master-0 kubenswrapper[27820]: I0320 08:49:50.062215 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="04466971-127b-403e-af45-dad97b6e0c87" 
volumeName="kubernetes.io/secret/04466971-127b-403e-af45-dad97b6e0c87-secret-metrics-client-certs" seLinuxMountContext="" Mar 20 08:49:50.062355 master-0 kubenswrapper[27820]: I0320 08:49:50.062233 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ff930f-ec0d-40ed-a879-1546691f685d" volumeName="kubernetes.io/configmap/20ff930f-ec0d-40ed-a879-1546691f685d-config" seLinuxMountContext="" Mar 20 08:49:50.062355 master-0 kubenswrapper[27820]: I0320 08:49:50.062248 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22f85e98-eb36-46b2-ab5d-7c21e060cba5" volumeName="kubernetes.io/projected/22f85e98-eb36-46b2-ab5d-7c21e060cba5-bound-sa-token" seLinuxMountContext="" Mar 20 08:49:50.062355 master-0 kubenswrapper[27820]: I0320 08:49:50.062269 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09a5682c-4f13-4b8c-8179-3e6dfa8f98db" volumeName="kubernetes.io/secret/09a5682c-4f13-4b8c-8179-3e6dfa8f98db-serving-cert" seLinuxMountContext="" Mar 20 08:49:50.062355 master-0 kubenswrapper[27820]: I0320 08:49:50.062306 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0ad95adc-2e0f-4e95-94e7-66e6d240a930" volumeName="kubernetes.io/secret/0ad95adc-2e0f-4e95-94e7-66e6d240a930-prometheus-operator-kube-rbac-proxy-config" seLinuxMountContext="" Mar 20 08:49:50.062355 master-0 kubenswrapper[27820]: I0320 08:49:50.062324 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a6a187d-5b25-4d63-939e-c04e07369371" volumeName="kubernetes.io/configmap/6a6a187d-5b25-4d63-939e-c04e07369371-trusted-ca-bundle" seLinuxMountContext="" Mar 20 08:49:50.062355 master-0 kubenswrapper[27820]: I0320 08:49:50.062345 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="14fe2256-ce3d-4c70-9e6b-0e3c7d4d96dc" volumeName="kubernetes.io/configmap/14fe2256-ce3d-4c70-9e6b-0e3c7d4d96dc-config" seLinuxMountContext="" Mar 20 08:49:50.062355 master-0 kubenswrapper[27820]: I0320 08:49:50.062365 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71ca96e8-5108-455c-bb3c-17977d38e912" volumeName="kubernetes.io/projected/71ca96e8-5108-455c-bb3c-17977d38e912-kube-api-access" seLinuxMountContext="" Mar 20 08:49:50.062355 master-0 kubenswrapper[27820]: I0320 08:49:50.062384 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b097596e-79e1-44d1-be8a-96340042a041" volumeName="kubernetes.io/configmap/b097596e-79e1-44d1-be8a-96340042a041-iptables-alerter-script" seLinuxMountContext="" Mar 20 08:49:50.062907 master-0 kubenswrapper[27820]: I0320 08:49:50.062406 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d5b579a3-74b5-4cd2-ae99-5c1d1480c2f0" volumeName="kubernetes.io/secret/d5b579a3-74b5-4cd2-ae99-5c1d1480c2f0-openshift-state-metrics-kube-rbac-proxy-config" seLinuxMountContext="" Mar 20 08:49:50.062907 master-0 kubenswrapper[27820]: I0320 08:49:50.062427 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210dd7f0-d1c0-407a-b89b-f11ef605e5df" volumeName="kubernetes.io/configmap/210dd7f0-d1c0-407a-b89b-f11ef605e5df-env-overrides" seLinuxMountContext="" Mar 20 08:49:50.062907 master-0 kubenswrapper[27820]: I0320 08:49:50.062445 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="240ba61a-e439-4f94-b9b3-7903b9b1bc05" volumeName="kubernetes.io/configmap/240ba61a-e439-4f94-b9b3-7903b9b1bc05-client-ca" seLinuxMountContext="" Mar 20 08:49:50.062907 master-0 kubenswrapper[27820]: I0320 08:49:50.062461 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="41ac891d-b41d-43c4-be46-35f39671477a" volumeName="kubernetes.io/configmap/41ac891d-b41d-43c4-be46-35f39671477a-client-ca" seLinuxMountContext="" Mar 20 08:49:50.062907 master-0 kubenswrapper[27820]: I0320 08:49:50.062478 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="45b3c788-eb83-448a-bc60-90b8ace28382" volumeName="kubernetes.io/configmap/45b3c788-eb83-448a-bc60-90b8ace28382-cni-sysctl-allowlist" seLinuxMountContext="" Mar 20 08:49:50.062907 master-0 kubenswrapper[27820]: I0320 08:49:50.062496 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a86af6a2-55a9-4c4e-8caf-1f51fedb23f5" volumeName="kubernetes.io/secret/a86af6a2-55a9-4c4e-8caf-1f51fedb23f5-control-plane-machine-set-operator-tls" seLinuxMountContext="" Mar 20 08:49:50.062907 master-0 kubenswrapper[27820]: I0320 08:49:50.062510 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e89571b2-098c-495b-9b53-c4ebd95296ab" volumeName="kubernetes.io/secret/e89571b2-098c-495b-9b53-c4ebd95296ab-default-certificate" seLinuxMountContext="" Mar 20 08:49:50.062907 master-0 kubenswrapper[27820]: I0320 08:49:50.062529 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f202273a-b111-46ce-b404-7e481d2c7ff9" volumeName="kubernetes.io/projected/f202273a-b111-46ce-b404-7e481d2c7ff9-kube-api-access-56bt6" seLinuxMountContext="" Mar 20 08:49:50.062907 master-0 kubenswrapper[27820]: I0320 08:49:50.062544 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fec3170d-3f3e-42f5-b20a-da53721c0dac" volumeName="kubernetes.io/configmap/fec3170d-3f3e-42f5-b20a-da53721c0dac-etcd-service-ca" seLinuxMountContext="" Mar 20 08:49:50.062907 master-0 kubenswrapper[27820]: I0320 08:49:50.062562 27820 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="1746482a-d1a3-4eac-8bc9-643b6af75163" volumeName="kubernetes.io/secret/1746482a-d1a3-4eac-8bc9-643b6af75163-signing-key" seLinuxMountContext="" Mar 20 08:49:50.062907 master-0 kubenswrapper[27820]: I0320 08:49:50.062579 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2d125bc5-08ce-434a-bde7-0ba8fc0169ea" volumeName="kubernetes.io/secret/2d125bc5-08ce-434a-bde7-0ba8fc0169ea-cert" seLinuxMountContext="" Mar 20 08:49:50.062907 master-0 kubenswrapper[27820]: I0320 08:49:50.062596 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2faf85a2-29bb-4275-a12b-0ef1663a4f0d" volumeName="kubernetes.io/projected/2faf85a2-29bb-4275-a12b-0ef1663a4f0d-kube-api-access" seLinuxMountContext="" Mar 20 08:49:50.062907 master-0 kubenswrapper[27820]: I0320 08:49:50.062621 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44bc88d8-9e01-4521-a704-85d9ca095baa" volumeName="kubernetes.io/projected/44bc88d8-9e01-4521-a704-85d9ca095baa-kube-api-access-hdqzn" seLinuxMountContext="" Mar 20 08:49:50.062907 master-0 kubenswrapper[27820]: I0320 08:49:50.062639 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="65157a9b-3df7-4cc1-a85a-a5dfa59921ad" volumeName="kubernetes.io/configmap/65157a9b-3df7-4cc1-a85a-a5dfa59921ad-config" seLinuxMountContext="" Mar 20 08:49:50.062907 master-0 kubenswrapper[27820]: I0320 08:49:50.062659 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a88b1c81-02b5-4c85-9660-5f84c900a946" volumeName="kubernetes.io/projected/a88b1c81-02b5-4c85-9660-5f84c900a946-kube-api-access-5zf6h" seLinuxMountContext="" Mar 20 08:49:50.062907 master-0 kubenswrapper[27820]: I0320 08:49:50.062676 27820 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="fec3170d-3f3e-42f5-b20a-da53721c0dac" volumeName="kubernetes.io/configmap/fec3170d-3f3e-42f5-b20a-da53721c0dac-config" seLinuxMountContext="" Mar 20 08:49:50.062907 master-0 kubenswrapper[27820]: I0320 08:49:50.062694 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0cb6d987-4b59-4fd9-889a-3250c12a726c" volumeName="kubernetes.io/secret/0cb6d987-4b59-4fd9-889a-3250c12a726c-webhook-cert" seLinuxMountContext="" Mar 20 08:49:50.062907 master-0 kubenswrapper[27820]: I0320 08:49:50.062711 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22ff82cf-0d7d-4955-9b7c-97757acbc021" volumeName="kubernetes.io/configmap/22ff82cf-0d7d-4955-9b7c-97757acbc021-cni-sysctl-allowlist" seLinuxMountContext="" Mar 20 08:49:50.062907 master-0 kubenswrapper[27820]: I0320 08:49:50.062731 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57189f7c-5987-457d-a299-0a6b9bcb3e24" volumeName="kubernetes.io/projected/57189f7c-5987-457d-a299-0a6b9bcb3e24-kube-api-access-5r8zt" seLinuxMountContext="" Mar 20 08:49:50.062907 master-0 kubenswrapper[27820]: I0320 08:49:50.062748 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bb7b640f-22be-41a9-8ab2-e7ae817e2eb0" volumeName="kubernetes.io/projected/bb7b640f-22be-41a9-8ab2-e7ae817e2eb0-kube-api-access-l4w7k" seLinuxMountContext="" Mar 20 08:49:50.063699 master-0 kubenswrapper[27820]: I0320 08:49:50.062768 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="41253bde-5d09-4ff0-8e7c-4a21fe2b7106" volumeName="kubernetes.io/secret/41253bde-5d09-4ff0-8e7c-4a21fe2b7106-metrics-tls" seLinuxMountContext="" Mar 20 08:49:50.063699 master-0 kubenswrapper[27820]: I0320 08:49:50.063158 27820 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="5707066a-bd66-41bc-8cea-cff1630ab5ee" volumeName="kubernetes.io/configmap/5707066a-bd66-41bc-8cea-cff1630ab5ee-telemetry-config" seLinuxMountContext="" Mar 20 08:49:50.063699 master-0 kubenswrapper[27820]: I0320 08:49:50.063179 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b097596e-79e1-44d1-be8a-96340042a041" volumeName="kubernetes.io/projected/b097596e-79e1-44d1-be8a-96340042a041-kube-api-access-dx99f" seLinuxMountContext="" Mar 20 08:49:50.063699 master-0 kubenswrapper[27820]: I0320 08:49:50.063196 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f202273a-b111-46ce-b404-7e481d2c7ff9" volumeName="kubernetes.io/secret/f202273a-b111-46ce-b404-7e481d2c7ff9-cluster-baremetal-operator-tls" seLinuxMountContext="" Mar 20 08:49:50.063699 master-0 kubenswrapper[27820]: I0320 08:49:50.063218 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a6a187d-5b25-4d63-939e-c04e07369371" volumeName="kubernetes.io/projected/6a6a187d-5b25-4d63-939e-c04e07369371-kube-api-access-br4bc" seLinuxMountContext="" Mar 20 08:49:50.063699 master-0 kubenswrapper[27820]: I0320 08:49:50.063238 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6d26f719-43b9-4c1c-9a54-ff800177db68" volumeName="kubernetes.io/configmap/6d26f719-43b9-4c1c-9a54-ff800177db68-trusted-ca" seLinuxMountContext="" Mar 20 08:49:50.063699 master-0 kubenswrapper[27820]: I0320 08:49:50.063261 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d653bfa-7168-49fa-a838-aedb33c7e60f" volumeName="kubernetes.io/secret/9d653bfa-7168-49fa-a838-aedb33c7e60f-webhook-cert" seLinuxMountContext="" Mar 20 08:49:50.063699 master-0 kubenswrapper[27820]: I0320 08:49:50.063316 27820 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="a79bf8fb-19fb-4881-b9e3-b5b21fde0e1d" volumeName="kubernetes.io/secret/a79bf8fb-19fb-4881-b9e3-b5b21fde0e1d-certs" seLinuxMountContext="" Mar 20 08:49:50.063699 master-0 kubenswrapper[27820]: I0320 08:49:50.063425 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="04466971-127b-403e-af45-dad97b6e0c87" volumeName="kubernetes.io/secret/04466971-127b-403e-af45-dad97b6e0c87-client-ca-bundle" seLinuxMountContext="" Mar 20 08:49:50.063699 master-0 kubenswrapper[27820]: I0320 08:49:50.063444 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09a5682c-4f13-4b8c-8179-3e6dfa8f98db" volumeName="kubernetes.io/configmap/09a5682c-4f13-4b8c-8179-3e6dfa8f98db-config" seLinuxMountContext="" Mar 20 08:49:50.063699 master-0 kubenswrapper[27820]: I0320 08:49:50.063471 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="14fe2256-ce3d-4c70-9e6b-0e3c7d4d96dc" volumeName="kubernetes.io/configmap/14fe2256-ce3d-4c70-9e6b-0e3c7d4d96dc-auth-proxy-config" seLinuxMountContext="" Mar 20 08:49:50.063699 master-0 kubenswrapper[27820]: I0320 08:49:50.063491 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="80ddf0a4-e853-4de0-b540-81144dfdd31d" volumeName="kubernetes.io/configmap/80ddf0a4-e853-4de0-b540-81144dfdd31d-config" seLinuxMountContext="" Mar 20 08:49:50.063699 master-0 kubenswrapper[27820]: I0320 08:49:50.063506 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ca56e37d-80ea-432b-a6d9-f4e904a40e10" volumeName="kubernetes.io/configmap/ca56e37d-80ea-432b-a6d9-f4e904a40e10-config" seLinuxMountContext="" Mar 20 08:49:50.063699 master-0 kubenswrapper[27820]: I0320 08:49:50.063524 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="d5b579a3-74b5-4cd2-ae99-5c1d1480c2f0" volumeName="kubernetes.io/configmap/d5b579a3-74b5-4cd2-ae99-5c1d1480c2f0-metrics-client-ca" seLinuxMountContext="" Mar 20 08:49:50.063699 master-0 kubenswrapper[27820]: I0320 08:49:50.063543 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="123f1ecb-cc03-462b-b76f-7251bf69d3d6" volumeName="kubernetes.io/configmap/123f1ecb-cc03-462b-b76f-7251bf69d3d6-metrics-client-ca" seLinuxMountContext="" Mar 20 08:49:50.063699 master-0 kubenswrapper[27820]: I0320 08:49:50.063562 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22ff82cf-0d7d-4955-9b7c-97757acbc021" volumeName="kubernetes.io/projected/22ff82cf-0d7d-4955-9b7c-97757acbc021-kube-api-access-sglvd" seLinuxMountContext="" Mar 20 08:49:50.063699 master-0 kubenswrapper[27820]: I0320 08:49:50.063579 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="240ba61a-e439-4f94-b9b3-7903b9b1bc05" volumeName="kubernetes.io/configmap/240ba61a-e439-4f94-b9b3-7903b9b1bc05-config" seLinuxMountContext="" Mar 20 08:49:50.063699 master-0 kubenswrapper[27820]: I0320 08:49:50.063597 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71ca96e8-5108-455c-bb3c-17977d38e912" volumeName="kubernetes.io/secret/71ca96e8-5108-455c-bb3c-17977d38e912-serving-cert" seLinuxMountContext="" Mar 20 08:49:50.063699 master-0 kubenswrapper[27820]: I0320 08:49:50.063614 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44bc88d8-9e01-4521-a704-85d9ca095baa" volumeName="kubernetes.io/configmap/44bc88d8-9e01-4521-a704-85d9ca095baa-kube-state-metrics-custom-resource-state-configmap" seLinuxMountContext="" Mar 20 08:49:50.063699 master-0 kubenswrapper[27820]: I0320 08:49:50.063630 27820 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="44bc88d8-9e01-4521-a704-85d9ca095baa" volumeName="kubernetes.io/secret/44bc88d8-9e01-4521-a704-85d9ca095baa-kube-state-metrics-tls" seLinuxMountContext="" Mar 20 08:49:50.063699 master-0 kubenswrapper[27820]: I0320 08:49:50.063647 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ca56e37d-80ea-432b-a6d9-f4e904a40e10" volumeName="kubernetes.io/secret/ca56e37d-80ea-432b-a6d9-f4e904a40e10-etcd-client" seLinuxMountContext="" Mar 20 08:49:50.063699 master-0 kubenswrapper[27820]: I0320 08:49:50.063663 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="00350ac7-b40a-4459-b94c-a37d7b613645" volumeName="kubernetes.io/secret/00350ac7-b40a-4459-b94c-a37d7b613645-metrics-certs" seLinuxMountContext="" Mar 20 08:49:50.063699 master-0 kubenswrapper[27820]: I0320 08:49:50.063679 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0ad95adc-2e0f-4e95-94e7-66e6d240a930" volumeName="kubernetes.io/configmap/0ad95adc-2e0f-4e95-94e7-66e6d240a930-metrics-client-ca" seLinuxMountContext="" Mar 20 08:49:50.063699 master-0 kubenswrapper[27820]: I0320 08:49:50.063696 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="123f1ecb-cc03-462b-b76f-7251bf69d3d6" volumeName="kubernetes.io/projected/123f1ecb-cc03-462b-b76f-7251bf69d3d6-kube-api-access-dtt44" seLinuxMountContext="" Mar 20 08:49:50.063699 master-0 kubenswrapper[27820]: I0320 08:49:50.063715 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22ff82cf-0d7d-4955-9b7c-97757acbc021" volumeName="kubernetes.io/configmap/22ff82cf-0d7d-4955-9b7c-97757acbc021-whereabouts-flatfile-configmap" seLinuxMountContext="" Mar 20 08:49:50.063699 master-0 kubenswrapper[27820]: I0320 08:49:50.063732 27820 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="64d09f81-5fb6-462a-a736-5649779a6b1a" volumeName="kubernetes.io/empty-dir/64d09f81-5fb6-462a-a736-5649779a6b1a-utilities" seLinuxMountContext="" Mar 20 08:49:50.063699 master-0 kubenswrapper[27820]: I0320 08:49:50.063752 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0f725c4a-234c-44e9-95f2-73f31d2b0fd3" volumeName="kubernetes.io/projected/0f725c4a-234c-44e9-95f2-73f31d2b0fd3-kube-api-access-r22fm" seLinuxMountContext="" Mar 20 08:49:50.063699 master-0 kubenswrapper[27820]: I0320 08:49:50.063773 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="123f1ecb-cc03-462b-b76f-7251bf69d3d6" volumeName="kubernetes.io/secret/123f1ecb-cc03-462b-b76f-7251bf69d3d6-node-exporter-tls" seLinuxMountContext="" Mar 20 08:49:50.065045 master-0 kubenswrapper[27820]: I0320 08:49:50.063795 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3065e4b4-4493-41ce-b9d2-89315475f74f" volumeName="kubernetes.io/empty-dir/3065e4b4-4493-41ce-b9d2-89315475f74f-available-featuregates" seLinuxMountContext="" Mar 20 08:49:50.065045 master-0 kubenswrapper[27820]: I0320 08:49:50.063810 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44bc88d8-9e01-4521-a704-85d9ca095baa" volumeName="kubernetes.io/empty-dir/44bc88d8-9e01-4521-a704-85d9ca095baa-volume-directive-shadow" seLinuxMountContext="" Mar 20 08:49:50.065045 master-0 kubenswrapper[27820]: I0320 08:49:50.063829 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4c9b8a0c-1ead-49b5-8b05-68e6f25ddefc" volumeName="kubernetes.io/secret/4c9b8a0c-1ead-49b5-8b05-68e6f25ddefc-cert" seLinuxMountContext="" Mar 20 08:49:50.065045 master-0 kubenswrapper[27820]: I0320 08:49:50.063855 27820 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="7ab32efc-7cc5-4e36-9c1c-05efb19914e2" volumeName="kubernetes.io/secret/7ab32efc-7cc5-4e36-9c1c-05efb19914e2-srv-cert" seLinuxMountContext="" Mar 20 08:49:50.065045 master-0 kubenswrapper[27820]: I0320 08:49:50.063870 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072" volumeName="kubernetes.io/configmap/8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072-config" seLinuxMountContext="" Mar 20 08:49:50.065045 master-0 kubenswrapper[27820]: I0320 08:49:50.063885 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ca56e37d-80ea-432b-a6d9-f4e904a40e10" volumeName="kubernetes.io/configmap/ca56e37d-80ea-432b-a6d9-f4e904a40e10-image-import-ca" seLinuxMountContext="" Mar 20 08:49:50.065045 master-0 kubenswrapper[27820]: I0320 08:49:50.063899 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="240ba61a-e439-4f94-b9b3-7903b9b1bc05" volumeName="kubernetes.io/projected/240ba61a-e439-4f94-b9b3-7903b9b1bc05-kube-api-access-92pwh" seLinuxMountContext="" Mar 20 08:49:50.065045 master-0 kubenswrapper[27820]: I0320 08:49:50.063913 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2ef691ec-d1f0-4c97-97e4-4aa7a6c0a86c" volumeName="kubernetes.io/projected/2ef691ec-d1f0-4c97-97e4-4aa7a6c0a86c-kube-api-access-rgl8m" seLinuxMountContext="" Mar 20 08:49:50.065045 master-0 kubenswrapper[27820]: I0320 08:49:50.063928 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="41253bde-5d09-4ff0-8e7c-4a21fe2b7106" volumeName="kubernetes.io/projected/41253bde-5d09-4ff0-8e7c-4a21fe2b7106-kube-api-access-dbtnq" seLinuxMountContext="" Mar 20 08:49:50.065045 master-0 kubenswrapper[27820]: I0320 08:49:50.063947 27820 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="44bc88d8-9e01-4521-a704-85d9ca095baa" volumeName="kubernetes.io/secret/44bc88d8-9e01-4521-a704-85d9ca095baa-kube-state-metrics-kube-rbac-proxy-config" seLinuxMountContext="" Mar 20 08:49:50.065045 master-0 kubenswrapper[27820]: I0320 08:49:50.063962 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ca56e37d-80ea-432b-a6d9-f4e904a40e10" volumeName="kubernetes.io/secret/ca56e37d-80ea-432b-a6d9-f4e904a40e10-serving-cert" seLinuxMountContext="" Mar 20 08:49:50.065045 master-0 kubenswrapper[27820]: I0320 08:49:50.063978 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="123f1ecb-cc03-462b-b76f-7251bf69d3d6" volumeName="kubernetes.io/secret/123f1ecb-cc03-462b-b76f-7251bf69d3d6-node-exporter-kube-rbac-proxy-config" seLinuxMountContext="" Mar 20 08:49:50.065045 master-0 kubenswrapper[27820]: I0320 08:49:50.063998 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ca56e37d-80ea-432b-a6d9-f4e904a40e10" volumeName="kubernetes.io/configmap/ca56e37d-80ea-432b-a6d9-f4e904a40e10-etcd-serving-ca" seLinuxMountContext="" Mar 20 08:49:50.065045 master-0 kubenswrapper[27820]: I0320 08:49:50.064016 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e89571b2-098c-495b-9b53-c4ebd95296ab" volumeName="kubernetes.io/projected/e89571b2-098c-495b-9b53-c4ebd95296ab-kube-api-access-pw6sv" seLinuxMountContext="" Mar 20 08:49:50.065045 master-0 kubenswrapper[27820]: I0320 08:49:50.064035 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="00350ac7-b40a-4459-b94c-a37d7b613645" volumeName="kubernetes.io/projected/00350ac7-b40a-4459-b94c-a37d7b613645-kube-api-access-b67hn" seLinuxMountContext="" Mar 20 08:49:50.065045 master-0 kubenswrapper[27820]: I0320 08:49:50.064090 
27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="08d9196b-b68f-421b-8754-bfbaa4020a97" volumeName="kubernetes.io/projected/08d9196b-b68f-421b-8754-bfbaa4020a97-ca-certs" seLinuxMountContext="" Mar 20 08:49:50.065045 master-0 kubenswrapper[27820]: I0320 08:49:50.064129 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d26a4fce-8eed-44d0-96a3-40ffd0b336a6" volumeName="kubernetes.io/configmap/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-ovnkube-script-lib" seLinuxMountContext="" Mar 20 08:49:50.065045 master-0 kubenswrapper[27820]: I0320 08:49:50.064149 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e89571b2-098c-495b-9b53-c4ebd95296ab" volumeName="kubernetes.io/secret/e89571b2-098c-495b-9b53-c4ebd95296ab-metrics-certs" seLinuxMountContext="" Mar 20 08:49:50.065045 master-0 kubenswrapper[27820]: I0320 08:49:50.064170 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e9425526-9f51-4302-a19d-a8107f56c582" volumeName="kubernetes.io/secret/e9425526-9f51-4302-a19d-a8107f56c582-cluster-olm-operator-serving-cert" seLinuxMountContext="" Mar 20 08:49:50.065045 master-0 kubenswrapper[27820]: I0320 08:49:50.064193 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2d125bc5-08ce-434a-bde7-0ba8fc0169ea" volumeName="kubernetes.io/configmap/2d125bc5-08ce-434a-bde7-0ba8fc0169ea-auth-proxy-config" seLinuxMountContext="" Mar 20 08:49:50.065045 master-0 kubenswrapper[27820]: I0320 08:49:50.064213 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57189f7c-5987-457d-a299-0a6b9bcb3e24" volumeName="kubernetes.io/secret/57189f7c-5987-457d-a299-0a6b9bcb3e24-image-registry-operator-tls" seLinuxMountContext="" Mar 20 08:49:50.065045 master-0 kubenswrapper[27820]: I0320 
08:49:50.064231 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="65157a9b-3df7-4cc1-a85a-a5dfa59921ad" volumeName="kubernetes.io/secret/65157a9b-3df7-4cc1-a85a-a5dfa59921ad-serving-cert" seLinuxMountContext="" Mar 20 08:49:50.065045 master-0 kubenswrapper[27820]: I0320 08:49:50.064250 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b43744de-5bc3-4f1d-91a4-c54e2a3a7ffc" volumeName="kubernetes.io/projected/b43744de-5bc3-4f1d-91a4-c54e2a3a7ffc-kube-api-access-4vm9c" seLinuxMountContext="" Mar 20 08:49:50.065045 master-0 kubenswrapper[27820]: I0320 08:49:50.064304 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="08d9196b-b68f-421b-8754-bfbaa4020a97" volumeName="kubernetes.io/secret/08d9196b-b68f-421b-8754-bfbaa4020a97-catalogserver-certs" seLinuxMountContext="" Mar 20 08:49:50.065045 master-0 kubenswrapper[27820]: I0320 08:49:50.064328 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4f6c819a-5074-4d29-84c8-e187528ad757" volumeName="kubernetes.io/empty-dir/4f6c819a-5074-4d29-84c8-e187528ad757-utilities" seLinuxMountContext="" Mar 20 08:49:50.065045 master-0 kubenswrapper[27820]: I0320 08:49:50.064347 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b43744de-5bc3-4f1d-91a4-c54e2a3a7ffc" volumeName="kubernetes.io/empty-dir/b43744de-5bc3-4f1d-91a4-c54e2a3a7ffc-utilities" seLinuxMountContext="" Mar 20 08:49:50.065045 master-0 kubenswrapper[27820]: I0320 08:49:50.064367 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="04466971-127b-403e-af45-dad97b6e0c87" volumeName="kubernetes.io/empty-dir/04466971-127b-403e-af45-dad97b6e0c87-audit-log" seLinuxMountContext="" Mar 20 08:49:50.065045 master-0 kubenswrapper[27820]: I0320 08:49:50.064398 
27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="74bebf0b-6727-4959-8239-a9389e630524" volumeName="kubernetes.io/projected/74bebf0b-6727-4959-8239-a9389e630524-kube-api-access-f92mb" seLinuxMountContext="" Mar 20 08:49:50.065045 master-0 kubenswrapper[27820]: I0320 08:49:50.064420 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9817d1ec-3d7c-49fb-8e41-26f5727ef9e8" volumeName="kubernetes.io/projected/9817d1ec-3d7c-49fb-8e41-26f5727ef9e8-kube-api-access-swxwt" seLinuxMountContext="" Mar 20 08:49:50.065045 master-0 kubenswrapper[27820]: I0320 08:49:50.064439 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22f85e98-eb36-46b2-ab5d-7c21e060cba5" volumeName="kubernetes.io/configmap/22f85e98-eb36-46b2-ab5d-7c21e060cba5-trusted-ca" seLinuxMountContext="" Mar 20 08:49:50.065045 master-0 kubenswrapper[27820]: I0320 08:49:50.064457 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9635cdae-0983-4c97-b3ed-dc7a785b1bb6" volumeName="kubernetes.io/empty-dir/9635cdae-0983-4c97-b3ed-dc7a785b1bb6-utilities" seLinuxMountContext="" Mar 20 08:49:50.065045 master-0 kubenswrapper[27820]: I0320 08:49:50.064481 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d5b579a3-74b5-4cd2-ae99-5c1d1480c2f0" volumeName="kubernetes.io/secret/d5b579a3-74b5-4cd2-ae99-5c1d1480c2f0-openshift-state-metrics-tls" seLinuxMountContext="" Mar 20 08:49:50.065045 master-0 kubenswrapper[27820]: I0320 08:49:50.064504 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bca4cc7c-839d-4877-b0aa-c07607fea404" volumeName="kubernetes.io/configmap/bca4cc7c-839d-4877-b0aa-c07607fea404-service-ca" seLinuxMountContext="" Mar 20 08:49:50.065045 master-0 kubenswrapper[27820]: I0320 
08:49:50.064526 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f6a6e991-c861-48f5-bfde-78762a037343" volumeName="kubernetes.io/secret/f6a6e991-c861-48f5-bfde-78762a037343-proxy-tls" seLinuxMountContext="" Mar 20 08:49:50.065045 master-0 kubenswrapper[27820]: I0320 08:49:50.064548 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fec3170d-3f3e-42f5-b20a-da53721c0dac" volumeName="kubernetes.io/projected/fec3170d-3f3e-42f5-b20a-da53721c0dac-kube-api-access-tqmzh" seLinuxMountContext="" Mar 20 08:49:50.065045 master-0 kubenswrapper[27820]: I0320 08:49:50.064565 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ff2dfe9d-2834-43cb-b093-0831b2b87131" volumeName="kubernetes.io/projected/ff2dfe9d-2834-43cb-b093-0831b2b87131-kube-api-access-zsj2w" seLinuxMountContext="" Mar 20 08:49:50.065045 master-0 kubenswrapper[27820]: I0320 08:49:50.064584 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2faf85a2-29bb-4275-a12b-0ef1663a4f0d" volumeName="kubernetes.io/configmap/2faf85a2-29bb-4275-a12b-0ef1663a4f0d-config" seLinuxMountContext="" Mar 20 08:49:50.065045 master-0 kubenswrapper[27820]: I0320 08:49:50.064610 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="41ac891d-b41d-43c4-be46-35f39671477a" volumeName="kubernetes.io/secret/41ac891d-b41d-43c4-be46-35f39671477a-serving-cert" seLinuxMountContext="" Mar 20 08:49:50.065045 master-0 kubenswrapper[27820]: I0320 08:49:50.064630 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5707066a-bd66-41bc-8cea-cff1630ab5ee" volumeName="kubernetes.io/secret/5707066a-bd66-41bc-8cea-cff1630ab5ee-cluster-monitoring-operator-tls" seLinuxMountContext="" Mar 20 08:49:50.065045 master-0 kubenswrapper[27820]: I0320 
08:49:50.064775 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6d62448d-55f1-4bdc-85aa-09e7bdf766cc" volumeName="kubernetes.io/projected/6d62448d-55f1-4bdc-85aa-09e7bdf766cc-kube-api-access-n9mbs" seLinuxMountContext="" Mar 20 08:49:50.065045 master-0 kubenswrapper[27820]: I0320 08:49:50.064796 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="45b3c788-eb83-448a-bc60-90b8ace28382" volumeName="kubernetes.io/empty-dir/45b3c788-eb83-448a-bc60-90b8ace28382-ready" seLinuxMountContext="" Mar 20 08:49:50.065045 master-0 kubenswrapper[27820]: I0320 08:49:50.064817 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="65157a9b-3df7-4cc1-a85a-a5dfa59921ad" volumeName="kubernetes.io/projected/65157a9b-3df7-4cc1-a85a-a5dfa59921ad-kube-api-access" seLinuxMountContext="" Mar 20 08:49:50.065045 master-0 kubenswrapper[27820]: I0320 08:49:50.064834 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="64d09f81-5fb6-462a-a736-5649779a6b1a" volumeName="kubernetes.io/empty-dir/64d09f81-5fb6-462a-a736-5649779a6b1a-catalog-content" seLinuxMountContext="" Mar 20 08:49:50.065045 master-0 kubenswrapper[27820]: I0320 08:49:50.064854 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="74bebf0b-6727-4959-8239-a9389e630524" volumeName="kubernetes.io/secret/74bebf0b-6727-4959-8239-a9389e630524-webhook-certs" seLinuxMountContext="" Mar 20 08:49:50.065045 master-0 kubenswrapper[27820]: I0320 08:49:50.064871 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072" volumeName="kubernetes.io/configmap/8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072-trusted-ca-bundle" seLinuxMountContext="" Mar 20 08:49:50.065045 master-0 kubenswrapper[27820]: I0320 
08:49:50.064889 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="23003a2f-2053-47cc-8133-23eb886d4da0" volumeName="kubernetes.io/configmap/23003a2f-2053-47cc-8133-23eb886d4da0-marketplace-trusted-ca" seLinuxMountContext="" Mar 20 08:49:50.065045 master-0 kubenswrapper[27820]: I0320 08:49:50.064916 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6610936-e14a-4532-955c-ea1ee4222259" volumeName="kubernetes.io/secret/b6610936-e14a-4532-955c-ea1ee4222259-proxy-tls" seLinuxMountContext="" Mar 20 08:49:50.065045 master-0 kubenswrapper[27820]: I0320 08:49:50.064938 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210dd7f0-d1c0-407a-b89b-f11ef605e5df" volumeName="kubernetes.io/secret/210dd7f0-d1c0-407a-b89b-f11ef605e5df-ovn-control-plane-metrics-cert" seLinuxMountContext="" Mar 20 08:49:50.066760 master-0 kubenswrapper[27820]: I0320 08:49:50.065290 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2ef691ec-d1f0-4c97-97e4-4aa7a6c0a86c" volumeName="kubernetes.io/secret/2ef691ec-d1f0-4c97-97e4-4aa7a6c0a86c-serving-cert" seLinuxMountContext="" Mar 20 08:49:50.066760 master-0 kubenswrapper[27820]: I0320 08:49:50.065326 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5707066a-bd66-41bc-8cea-cff1630ab5ee" volumeName="kubernetes.io/projected/5707066a-bd66-41bc-8cea-cff1630ab5ee-kube-api-access-2dkgv" seLinuxMountContext="" Mar 20 08:49:50.066760 master-0 kubenswrapper[27820]: I0320 08:49:50.065353 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="97ad1db7-0bf9-4faf-9fa5-0f3df7dab777" volumeName="kubernetes.io/empty-dir/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777-etc-tuned" seLinuxMountContext="" Mar 20 08:49:50.066760 master-0 kubenswrapper[27820]: 
I0320 08:49:50.065375 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="08d9196b-b68f-421b-8754-bfbaa4020a97" volumeName="kubernetes.io/empty-dir/08d9196b-b68f-421b-8754-bfbaa4020a97-cache" seLinuxMountContext="" Mar 20 08:49:50.066760 master-0 kubenswrapper[27820]: I0320 08:49:50.065399 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0e79950f-50a5-46ec-b836-7a35dcce2851" volumeName="kubernetes.io/projected/0e79950f-50a5-46ec-b836-7a35dcce2851-kube-api-access-rdsv9" seLinuxMountContext="" Mar 20 08:49:50.066760 master-0 kubenswrapper[27820]: I0320 08:49:50.065416 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9635cdae-0983-4c97-b3ed-dc7a785b1bb6" volumeName="kubernetes.io/empty-dir/9635cdae-0983-4c97-b3ed-dc7a785b1bb6-catalog-content" seLinuxMountContext="" Mar 20 08:49:50.066760 master-0 kubenswrapper[27820]: I0320 08:49:50.065505 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="240ba61a-e439-4f94-b9b3-7903b9b1bc05" volumeName="kubernetes.io/secret/240ba61a-e439-4f94-b9b3-7903b9b1bc05-serving-cert" seLinuxMountContext="" Mar 20 08:49:50.066760 master-0 kubenswrapper[27820]: I0320 08:49:50.065527 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e9c0293a-5340-4ebe-bc8f-43e78ba9f280" volumeName="kubernetes.io/projected/e9c0293a-5340-4ebe-bc8f-43e78ba9f280-kube-api-access-ns97v" seLinuxMountContext="" Mar 20 08:49:50.066760 master-0 kubenswrapper[27820]: I0320 08:49:50.065548 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6d62448d-55f1-4bdc-85aa-09e7bdf766cc" volumeName="kubernetes.io/configmap/6d62448d-55f1-4bdc-85aa-09e7bdf766cc-service-ca-bundle" seLinuxMountContext="" Mar 20 08:49:50.066760 master-0 kubenswrapper[27820]: 
I0320 08:49:50.065568 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fec3170d-3f3e-42f5-b20a-da53721c0dac" volumeName="kubernetes.io/secret/fec3170d-3f3e-42f5-b20a-da53721c0dac-serving-cert" seLinuxMountContext="" Mar 20 08:49:50.066760 master-0 kubenswrapper[27820]: I0320 08:49:50.065586 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="04466971-127b-403e-af45-dad97b6e0c87" volumeName="kubernetes.io/configmap/04466971-127b-403e-af45-dad97b6e0c87-configmap-kubelet-serving-ca-bundle" seLinuxMountContext="" Mar 20 08:49:50.066760 master-0 kubenswrapper[27820]: I0320 08:49:50.065603 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="45b3c788-eb83-448a-bc60-90b8ace28382" volumeName="kubernetes.io/projected/45b3c788-eb83-448a-bc60-90b8ace28382-kube-api-access-7pcbj" seLinuxMountContext="" Mar 20 08:49:50.066760 master-0 kubenswrapper[27820]: I0320 08:49:50.065625 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4f6c819a-5074-4d29-84c8-e187528ad757" volumeName="kubernetes.io/projected/4f6c819a-5074-4d29-84c8-e187528ad757-kube-api-access-mm9l9" seLinuxMountContext="" Mar 20 08:49:50.066760 master-0 kubenswrapper[27820]: I0320 08:49:50.065645 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="581a8be2-d16c-4fd8-b051-214bd60a2a91" volumeName="kubernetes.io/configmap/581a8be2-d16c-4fd8-b051-214bd60a2a91-cco-trusted-ca" seLinuxMountContext="" Mar 20 08:49:50.066760 master-0 kubenswrapper[27820]: I0320 08:49:50.065663 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22f85e98-eb36-46b2-ab5d-7c21e060cba5" volumeName="kubernetes.io/projected/22f85e98-eb36-46b2-ab5d-7c21e060cba5-kube-api-access-8qqcw" seLinuxMountContext="" Mar 20 08:49:50.066760 
master-0 kubenswrapper[27820]: I0320 08:49:50.065681 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3065e4b4-4493-41ce-b9d2-89315475f74f" volumeName="kubernetes.io/secret/3065e4b4-4493-41ce-b9d2-89315475f74f-serving-cert" seLinuxMountContext="" Mar 20 08:49:50.066760 master-0 kubenswrapper[27820]: I0320 08:49:50.065699 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e89571b2-098c-495b-9b53-c4ebd95296ab" volumeName="kubernetes.io/configmap/e89571b2-098c-495b-9b53-c4ebd95296ab-service-ca-bundle" seLinuxMountContext="" Mar 20 08:49:50.066760 master-0 kubenswrapper[27820]: I0320 08:49:50.065723 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f202273a-b111-46ce-b404-7e481d2c7ff9" volumeName="kubernetes.io/configmap/f202273a-b111-46ce-b404-7e481d2c7ff9-config" seLinuxMountContext="" Mar 20 08:49:50.066760 master-0 kubenswrapper[27820]: I0320 08:49:50.065747 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1746482a-d1a3-4eac-8bc9-643b6af75163" volumeName="kubernetes.io/projected/1746482a-d1a3-4eac-8bc9-643b6af75163-kube-api-access-2dkqm" seLinuxMountContext="" Mar 20 08:49:50.066760 master-0 kubenswrapper[27820]: I0320 08:49:50.065765 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a79bf8fb-19fb-4881-b9e3-b5b21fde0e1d" volumeName="kubernetes.io/secret/a79bf8fb-19fb-4881-b9e3-b5b21fde0e1d-node-bootstrap-token" seLinuxMountContext="" Mar 20 08:49:50.066760 master-0 kubenswrapper[27820]: I0320 08:49:50.065784 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ca56e37d-80ea-432b-a6d9-f4e904a40e10" volumeName="kubernetes.io/secret/ca56e37d-80ea-432b-a6d9-f4e904a40e10-encryption-config" seLinuxMountContext="" Mar 20 08:49:50.066760 
master-0 kubenswrapper[27820]: I0320 08:49:50.065803 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d26a4fce-8eed-44d0-96a3-40ffd0b336a6" volumeName="kubernetes.io/configmap/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-ovnkube-config" seLinuxMountContext="" Mar 20 08:49:50.066760 master-0 kubenswrapper[27820]: I0320 08:49:50.065822 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bca4cc7c-839d-4877-b0aa-c07607fea404" volumeName="kubernetes.io/projected/bca4cc7c-839d-4877-b0aa-c07607fea404-kube-api-access" seLinuxMountContext="" Mar 20 08:49:50.066760 master-0 kubenswrapper[27820]: I0320 08:49:50.065838 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1746482a-d1a3-4eac-8bc9-643b6af75163" volumeName="kubernetes.io/configmap/1746482a-d1a3-4eac-8bc9-643b6af75163-signing-cabundle" seLinuxMountContext="" Mar 20 08:49:50.066760 master-0 kubenswrapper[27820]: I0320 08:49:50.065857 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2ef691ec-d1f0-4c97-97e4-4aa7a6c0a86c" volumeName="kubernetes.io/configmap/2ef691ec-d1f0-4c97-97e4-4aa7a6c0a86c-config" seLinuxMountContext="" Mar 20 08:49:50.066760 master-0 kubenswrapper[27820]: I0320 08:49:50.065875 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7949621e-4da6-4e43-a1f3-2ef303bf6aa6" volumeName="kubernetes.io/projected/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-kube-api-access-j5hsj" seLinuxMountContext="" Mar 20 08:49:50.066760 master-0 kubenswrapper[27820]: I0320 08:49:50.065894 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6610936-e14a-4532-955c-ea1ee4222259" volumeName="kubernetes.io/configmap/b6610936-e14a-4532-955c-ea1ee4222259-auth-proxy-config" seLinuxMountContext="" Mar 20 08:49:50.066760 
master-0 kubenswrapper[27820]: I0320 08:49:50.065914 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0cb6d987-4b59-4fd9-889a-3250c12a726c" volumeName="kubernetes.io/empty-dir/0cb6d987-4b59-4fd9-889a-3250c12a726c-tmpfs" seLinuxMountContext="" Mar 20 08:49:50.066760 master-0 kubenswrapper[27820]: I0320 08:49:50.065933 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="41ac891d-b41d-43c4-be46-35f39671477a" volumeName="kubernetes.io/configmap/41ac891d-b41d-43c4-be46-35f39671477a-config" seLinuxMountContext="" Mar 20 08:49:50.066760 master-0 kubenswrapper[27820]: I0320 08:49:50.065949 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9635cdae-0983-4c97-b3ed-dc7a785b1bb6" volumeName="kubernetes.io/projected/9635cdae-0983-4c97-b3ed-dc7a785b1bb6-kube-api-access-zmssd" seLinuxMountContext="" Mar 20 08:49:50.066760 master-0 kubenswrapper[27820]: I0320 08:49:50.065970 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1ae4d0d7-67e6-4e0c-9265-8e48ac2d4cbf" volumeName="kubernetes.io/projected/1ae4d0d7-67e6-4e0c-9265-8e48ac2d4cbf-kube-api-access-fvxjl" seLinuxMountContext="" Mar 20 08:49:50.066760 master-0 kubenswrapper[27820]: I0320 08:49:50.065990 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71ca96e8-5108-455c-bb3c-17977d38e912" volumeName="kubernetes.io/configmap/71ca96e8-5108-455c-bb3c-17977d38e912-config" seLinuxMountContext="" Mar 20 08:49:50.066760 master-0 kubenswrapper[27820]: I0320 08:49:50.066007 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bb7b640f-22be-41a9-8ab2-e7ae817e2eb0" volumeName="kubernetes.io/empty-dir/bb7b640f-22be-41a9-8ab2-e7ae817e2eb0-cache" seLinuxMountContext="" Mar 20 08:49:50.066760 master-0 
kubenswrapper[27820]: I0320 08:49:50.066023 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f6a6e991-c861-48f5-bfde-78762a037343" volumeName="kubernetes.io/projected/f6a6e991-c861-48f5-bfde-78762a037343-kube-api-access-rf9kc" seLinuxMountContext="" Mar 20 08:49:50.066760 master-0 kubenswrapper[27820]: I0320 08:49:50.066037 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ff930f-ec0d-40ed-a879-1546691f685d" volumeName="kubernetes.io/projected/20ff930f-ec0d-40ed-a879-1546691f685d-kube-api-access-d5v7l" seLinuxMountContext="" Mar 20 08:49:50.066760 master-0 kubenswrapper[27820]: I0320 08:49:50.066052 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210dd7f0-d1c0-407a-b89b-f11ef605e5df" volumeName="kubernetes.io/configmap/210dd7f0-d1c0-407a-b89b-f11ef605e5df-ovnkube-config" seLinuxMountContext="" Mar 20 08:49:50.066760 master-0 kubenswrapper[27820]: I0320 08:49:50.066066 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4f6c819a-5074-4d29-84c8-e187528ad757" volumeName="kubernetes.io/empty-dir/4f6c819a-5074-4d29-84c8-e187528ad757-catalog-content" seLinuxMountContext="" Mar 20 08:49:50.066760 master-0 kubenswrapper[27820]: I0320 08:49:50.066080 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a6a187d-5b25-4d63-939e-c04e07369371" volumeName="kubernetes.io/configmap/6a6a187d-5b25-4d63-939e-c04e07369371-etcd-serving-ca" seLinuxMountContext="" Mar 20 08:49:50.066760 master-0 kubenswrapper[27820]: I0320 08:49:50.066092 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="56970553-2ac8-4cb5-a12a-b7c1e777c587" volumeName="kubernetes.io/projected/56970553-2ac8-4cb5-a12a-b7c1e777c587-kube-api-access-zrbnx" seLinuxMountContext="" Mar 20 
08:49:50.066760 master-0 kubenswrapper[27820]: I0320 08:49:50.066109 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072" volumeName="kubernetes.io/secret/8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072-serving-cert" seLinuxMountContext="" Mar 20 08:49:50.066760 master-0 kubenswrapper[27820]: I0320 08:49:50.066126 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d5b579a3-74b5-4cd2-ae99-5c1d1480c2f0" volumeName="kubernetes.io/projected/d5b579a3-74b5-4cd2-ae99-5c1d1480c2f0-kube-api-access-tfgfz" seLinuxMountContext="" Mar 20 08:49:50.066760 master-0 kubenswrapper[27820]: I0320 08:49:50.066145 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e89571b2-098c-495b-9b53-c4ebd95296ab" volumeName="kubernetes.io/secret/e89571b2-098c-495b-9b53-c4ebd95296ab-stats-auth" seLinuxMountContext="" Mar 20 08:49:50.066760 master-0 kubenswrapper[27820]: I0320 08:49:50.066161 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0e79950f-50a5-46ec-b836-7a35dcce2851" volumeName="kubernetes.io/secret/0e79950f-50a5-46ec-b836-7a35dcce2851-package-server-manager-serving-cert" seLinuxMountContext="" Mar 20 08:49:50.066760 master-0 kubenswrapper[27820]: I0320 08:49:50.066177 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3065e4b4-4493-41ce-b9d2-89315475f74f" volumeName="kubernetes.io/projected/3065e4b4-4493-41ce-b9d2-89315475f74f-kube-api-access-wpr8b" seLinuxMountContext="" Mar 20 08:49:50.066760 master-0 kubenswrapper[27820]: I0320 08:49:50.066195 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="581a8be2-d16c-4fd8-b051-214bd60a2a91" 
volumeName="kubernetes.io/secret/581a8be2-d16c-4fd8-b051-214bd60a2a91-cloud-credential-operator-serving-cert" seLinuxMountContext="" Mar 20 08:49:50.066760 master-0 kubenswrapper[27820]: I0320 08:49:50.066214 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bb7b640f-22be-41a9-8ab2-e7ae817e2eb0" volumeName="kubernetes.io/projected/bb7b640f-22be-41a9-8ab2-e7ae817e2eb0-ca-certs" seLinuxMountContext="" Mar 20 08:49:50.066760 master-0 kubenswrapper[27820]: I0320 08:49:50.066232 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d653bfa-7168-49fa-a838-aedb33c7e60f" volumeName="kubernetes.io/configmap/9d653bfa-7168-49fa-a838-aedb33c7e60f-ovnkube-identity-cm" seLinuxMountContext="" Mar 20 08:49:50.066760 master-0 kubenswrapper[27820]: I0320 08:49:50.066254 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e9c0293a-5340-4ebe-bc8f-43e78ba9f280" volumeName="kubernetes.io/secret/e9c0293a-5340-4ebe-bc8f-43e78ba9f280-cluster-storage-operator-serving-cert" seLinuxMountContext="" Mar 20 08:49:50.066760 master-0 kubenswrapper[27820]: I0320 08:49:50.066346 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6d26f719-43b9-4c1c-9a54-ff800177db68" volumeName="kubernetes.io/secret/6d26f719-43b9-4c1c-9a54-ff800177db68-apiservice-cert" seLinuxMountContext="" Mar 20 08:49:50.066760 master-0 kubenswrapper[27820]: I0320 08:49:50.066363 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a2a3df6e-e327-4e97-b8f0-f2d6cdd1e5f9" volumeName="kubernetes.io/projected/a2a3df6e-e327-4e97-b8f0-f2d6cdd1e5f9-kube-api-access-rqgkl" seLinuxMountContext="" Mar 20 08:49:50.066760 master-0 kubenswrapper[27820]: I0320 08:49:50.066380 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="acbaba45-12d9-40b9-818c-4b091d7929b1" volumeName="kubernetes.io/projected/acbaba45-12d9-40b9-818c-4b091d7929b1-kube-api-access-kcgqr" seLinuxMountContext="" Mar 20 08:49:50.066760 master-0 kubenswrapper[27820]: I0320 08:49:50.066396 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0cb6d987-4b59-4fd9-889a-3250c12a726c" volumeName="kubernetes.io/secret/0cb6d987-4b59-4fd9-889a-3250c12a726c-apiservice-cert" seLinuxMountContext="" Mar 20 08:49:50.066760 master-0 kubenswrapper[27820]: I0320 08:49:50.066409 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="14fe2256-ce3d-4c70-9e6b-0e3c7d4d96dc" volumeName="kubernetes.io/projected/14fe2256-ce3d-4c70-9e6b-0e3c7d4d96dc-kube-api-access-hlgd7" seLinuxMountContext="" Mar 20 08:49:50.066760 master-0 kubenswrapper[27820]: I0320 08:49:50.066423 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44bc88d8-9e01-4521-a704-85d9ca095baa" volumeName="kubernetes.io/configmap/44bc88d8-9e01-4521-a704-85d9ca095baa-metrics-client-ca" seLinuxMountContext="" Mar 20 08:49:50.066760 master-0 kubenswrapper[27820]: I0320 08:49:50.066436 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4c9b8a0c-1ead-49b5-8b05-68e6f25ddefc" volumeName="kubernetes.io/projected/4c9b8a0c-1ead-49b5-8b05-68e6f25ddefc-kube-api-access-mmk45" seLinuxMountContext="" Mar 20 08:49:50.066760 master-0 kubenswrapper[27820]: I0320 08:49:50.066449 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0f725c4a-234c-44e9-95f2-73f31d2b0fd3" volumeName="kubernetes.io/secret/0f725c4a-234c-44e9-95f2-73f31d2b0fd3-proxy-tls" seLinuxMountContext="" Mar 20 08:49:50.066760 master-0 kubenswrapper[27820]: I0320 08:49:50.066463 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="41253bde-5d09-4ff0-8e7c-4a21fe2b7106" volumeName="kubernetes.io/configmap/41253bde-5d09-4ff0-8e7c-4a21fe2b7106-config-volume" seLinuxMountContext="" Mar 20 08:49:50.066760 master-0 kubenswrapper[27820]: I0320 08:49:50.066475 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a88b1c81-02b5-4c85-9660-5f84c900a946" volumeName="kubernetes.io/secret/a88b1c81-02b5-4c85-9660-5f84c900a946-webhook-certs" seLinuxMountContext="" Mar 20 08:49:50.066760 master-0 kubenswrapper[27820]: I0320 08:49:50.066491 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6d62448d-55f1-4bdc-85aa-09e7bdf766cc" volumeName="kubernetes.io/secret/6d62448d-55f1-4bdc-85aa-09e7bdf766cc-serving-cert" seLinuxMountContext="" Mar 20 08:49:50.066760 master-0 kubenswrapper[27820]: I0320 08:49:50.066510 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9ce482dc-d0ac-40bc-9058-a1cfdc81575e" volumeName="kubernetes.io/projected/9ce482dc-d0ac-40bc-9058-a1cfdc81575e-kube-api-access-9j527" seLinuxMountContext="" Mar 20 08:49:50.066760 master-0 kubenswrapper[27820]: I0320 08:49:50.066526 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9817d1ec-3d7c-49fb-8e41-26f5727ef9e8" volumeName="kubernetes.io/secret/9817d1ec-3d7c-49fb-8e41-26f5727ef9e8-metrics-tls" seLinuxMountContext="" Mar 20 08:49:50.066760 master-0 kubenswrapper[27820]: I0320 08:49:50.066539 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ca6e644f-c53b-41dd-a16f-9fb9997533dd" volumeName="kubernetes.io/projected/ca6e644f-c53b-41dd-a16f-9fb9997533dd-kube-api-access-nf5kc" seLinuxMountContext="" Mar 20 08:49:50.066760 master-0 kubenswrapper[27820]: I0320 08:49:50.066553 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="210dd7f0-d1c0-407a-b89b-f11ef605e5df" volumeName="kubernetes.io/projected/210dd7f0-d1c0-407a-b89b-f11ef605e5df-kube-api-access-w4sfm" seLinuxMountContext="" Mar 20 08:49:50.066760 master-0 kubenswrapper[27820]: I0320 08:49:50.066567 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="41ac891d-b41d-43c4-be46-35f39671477a" volumeName="kubernetes.io/projected/41ac891d-b41d-43c4-be46-35f39671477a-kube-api-access-zpksq" seLinuxMountContext="" Mar 20 08:49:50.066760 master-0 kubenswrapper[27820]: I0320 08:49:50.066593 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a6a187d-5b25-4d63-939e-c04e07369371" volumeName="kubernetes.io/secret/6a6a187d-5b25-4d63-939e-c04e07369371-etcd-client" seLinuxMountContext="" Mar 20 08:49:50.066760 master-0 kubenswrapper[27820]: I0320 08:49:50.066611 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7949621e-4da6-4e43-a1f3-2ef303bf6aa6" volumeName="kubernetes.io/configmap/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-multus-daemon-config" seLinuxMountContext="" Mar 20 08:49:50.066760 master-0 kubenswrapper[27820]: I0320 08:49:50.066629 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0f725c4a-234c-44e9-95f2-73f31d2b0fd3" volumeName="kubernetes.io/configmap/0f725c4a-234c-44e9-95f2-73f31d2b0fd3-mcd-auth-proxy-config" seLinuxMountContext="" Mar 20 08:49:50.066760 master-0 kubenswrapper[27820]: I0320 08:49:50.066643 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a6a187d-5b25-4d63-939e-c04e07369371" volumeName="kubernetes.io/secret/6a6a187d-5b25-4d63-939e-c04e07369371-encryption-config" seLinuxMountContext="" Mar 20 08:49:50.066760 master-0 kubenswrapper[27820]: I0320 08:49:50.066669 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="f202273a-b111-46ce-b404-7e481d2c7ff9" volumeName="kubernetes.io/configmap/f202273a-b111-46ce-b404-7e481d2c7ff9-images" seLinuxMountContext="" Mar 20 08:49:50.066760 master-0 kubenswrapper[27820]: I0320 08:49:50.066684 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="61ab4d32-c732-4be5-aa85-a2e1dd21cb60" volumeName="kubernetes.io/secret/61ab4d32-c732-4be5-aa85-a2e1dd21cb60-serving-cert" seLinuxMountContext="" Mar 20 08:49:50.066760 master-0 kubenswrapper[27820]: I0320 08:49:50.066699 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072" volumeName="kubernetes.io/configmap/8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072-service-ca-bundle" seLinuxMountContext="" Mar 20 08:49:50.066760 master-0 kubenswrapper[27820]: I0320 08:49:50.066715 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d26a4fce-8eed-44d0-96a3-40ffd0b336a6" volumeName="kubernetes.io/projected/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-kube-api-access-s2j6m" seLinuxMountContext="" Mar 20 08:49:50.066760 master-0 kubenswrapper[27820]: I0320 08:49:50.066740 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f202273a-b111-46ce-b404-7e481d2c7ff9" volumeName="kubernetes.io/secret/f202273a-b111-46ce-b404-7e481d2c7ff9-cert" seLinuxMountContext="" Mar 20 08:49:50.066760 master-0 kubenswrapper[27820]: I0320 08:49:50.066756 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7949621e-4da6-4e43-a1f3-2ef303bf6aa6" volumeName="kubernetes.io/configmap/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-cni-binary-copy" seLinuxMountContext="" Mar 20 08:49:50.066760 master-0 kubenswrapper[27820]: I0320 08:49:50.066775 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="9ce482dc-d0ac-40bc-9058-a1cfdc81575e" volumeName="kubernetes.io/secret/9ce482dc-d0ac-40bc-9058-a1cfdc81575e-srv-cert" seLinuxMountContext="" Mar 20 08:49:50.066760 master-0 kubenswrapper[27820]: I0320 08:49:50.066792 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="04466971-127b-403e-af45-dad97b6e0c87" volumeName="kubernetes.io/configmap/04466971-127b-403e-af45-dad97b6e0c87-metrics-server-audit-profiles" seLinuxMountContext="" Mar 20 08:49:50.066760 master-0 kubenswrapper[27820]: I0320 08:49:50.066808 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="23003a2f-2053-47cc-8133-23eb886d4da0" volumeName="kubernetes.io/secret/23003a2f-2053-47cc-8133-23eb886d4da0-marketplace-operator-metrics" seLinuxMountContext="" Mar 20 08:49:50.069511 master-0 kubenswrapper[27820]: I0320 08:49:50.067070 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="581a8be2-d16c-4fd8-b051-214bd60a2a91" volumeName="kubernetes.io/projected/581a8be2-d16c-4fd8-b051-214bd60a2a91-kube-api-access-ssmph" seLinuxMountContext="" Mar 20 08:49:50.069511 master-0 kubenswrapper[27820]: I0320 08:49:50.067093 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6d62448d-55f1-4bdc-85aa-09e7bdf766cc" volumeName="kubernetes.io/empty-dir/6d62448d-55f1-4bdc-85aa-09e7bdf766cc-snapshots" seLinuxMountContext="" Mar 20 08:49:50.069511 master-0 kubenswrapper[27820]: I0320 08:49:50.067108 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d653bfa-7168-49fa-a838-aedb33c7e60f" volumeName="kubernetes.io/configmap/9d653bfa-7168-49fa-a838-aedb33c7e60f-env-overrides" seLinuxMountContext="" Mar 20 08:49:50.069511 master-0 kubenswrapper[27820]: I0320 08:49:50.067125 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="80ddf0a4-e853-4de0-b540-81144dfdd31d" volumeName="kubernetes.io/projected/80ddf0a4-e853-4de0-b540-81144dfdd31d-kube-api-access-pgffp" seLinuxMountContext="" Mar 20 08:49:50.069511 master-0 kubenswrapper[27820]: I0320 08:49:50.067140 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="08d9196b-b68f-421b-8754-bfbaa4020a97" volumeName="kubernetes.io/projected/08d9196b-b68f-421b-8754-bfbaa4020a97-kube-api-access-tvqv5" seLinuxMountContext="" Mar 20 08:49:50.069511 master-0 kubenswrapper[27820]: I0320 08:49:50.067156 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="41ac891d-b41d-43c4-be46-35f39671477a" volumeName="kubernetes.io/configmap/41ac891d-b41d-43c4-be46-35f39671477a-proxy-ca-bundles" seLinuxMountContext="" Mar 20 08:49:50.069511 master-0 kubenswrapper[27820]: I0320 08:49:50.067170 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d653bfa-7168-49fa-a838-aedb33c7e60f" volumeName="kubernetes.io/projected/9d653bfa-7168-49fa-a838-aedb33c7e60f-kube-api-access-8jmlf" seLinuxMountContext="" Mar 20 08:49:50.069511 master-0 kubenswrapper[27820]: I0320 08:49:50.067184 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e9425526-9f51-4302-a19d-a8107f56c582" volumeName="kubernetes.io/projected/e9425526-9f51-4302-a19d-a8107f56c582-kube-api-access-z5kbh" seLinuxMountContext="" Mar 20 08:49:50.069511 master-0 kubenswrapper[27820]: I0320 08:49:50.067198 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0cb6d987-4b59-4fd9-889a-3250c12a726c" volumeName="kubernetes.io/projected/0cb6d987-4b59-4fd9-889a-3250c12a726c-kube-api-access-v29ws" seLinuxMountContext="" Mar 20 08:49:50.069511 master-0 kubenswrapper[27820]: I0320 08:49:50.067214 27820 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="22f85e98-eb36-46b2-ab5d-7c21e060cba5" volumeName="kubernetes.io/secret/22f85e98-eb36-46b2-ab5d-7c21e060cba5-metrics-tls" seLinuxMountContext="" Mar 20 08:49:50.069511 master-0 kubenswrapper[27820]: I0320 08:49:50.067228 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="56970553-2ac8-4cb5-a12a-b7c1e777c587" volumeName="kubernetes.io/secret/56970553-2ac8-4cb5-a12a-b7c1e777c587-samples-operator-tls" seLinuxMountContext="" Mar 20 08:49:50.069511 master-0 kubenswrapper[27820]: I0320 08:49:50.067242 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b43744de-5bc3-4f1d-91a4-c54e2a3a7ffc" volumeName="kubernetes.io/empty-dir/b43744de-5bc3-4f1d-91a4-c54e2a3a7ffc-catalog-content" seLinuxMountContext="" Mar 20 08:49:50.069511 master-0 kubenswrapper[27820]: I0320 08:49:50.067257 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a6a187d-5b25-4d63-939e-c04e07369371" volumeName="kubernetes.io/secret/6a6a187d-5b25-4d63-939e-c04e07369371-serving-cert" seLinuxMountContext="" Mar 20 08:49:50.069511 master-0 kubenswrapper[27820]: I0320 08:49:50.067354 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="80ddf0a4-e853-4de0-b540-81144dfdd31d" volumeName="kubernetes.io/configmap/80ddf0a4-e853-4de0-b540-81144dfdd31d-images" seLinuxMountContext="" Mar 20 08:49:50.069511 master-0 kubenswrapper[27820]: I0320 08:49:50.067471 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6610936-e14a-4532-955c-ea1ee4222259" volumeName="kubernetes.io/projected/b6610936-e14a-4532-955c-ea1ee4222259-kube-api-access-v8plf" seLinuxMountContext="" Mar 20 08:49:50.069511 master-0 kubenswrapper[27820]: I0320 08:49:50.067497 27820 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="61ab4d32-c732-4be5-aa85-a2e1dd21cb60" volumeName="kubernetes.io/projected/61ab4d32-c732-4be5-aa85-a2e1dd21cb60-kube-api-access-lzprw" seLinuxMountContext="" Mar 20 08:49:50.069511 master-0 kubenswrapper[27820]: I0320 08:49:50.067516 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a79bf8fb-19fb-4881-b9e3-b5b21fde0e1d" volumeName="kubernetes.io/projected/a79bf8fb-19fb-4881-b9e3-b5b21fde0e1d-kube-api-access-btwhr" seLinuxMountContext="" Mar 20 08:49:50.069511 master-0 kubenswrapper[27820]: I0320 08:49:50.067531 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="04466971-127b-403e-af45-dad97b6e0c87" volumeName="kubernetes.io/secret/04466971-127b-403e-af45-dad97b6e0c87-secret-metrics-server-tls" seLinuxMountContext="" Mar 20 08:49:50.069511 master-0 kubenswrapper[27820]: I0320 08:49:50.067545 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2d125bc5-08ce-434a-bde7-0ba8fc0169ea" volumeName="kubernetes.io/projected/2d125bc5-08ce-434a-bde7-0ba8fc0169ea-kube-api-access-hmb9v" seLinuxMountContext="" Mar 20 08:49:50.069511 master-0 kubenswrapper[27820]: I0320 08:49:50.067559 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57189f7c-5987-457d-a299-0a6b9bcb3e24" volumeName="kubernetes.io/projected/57189f7c-5987-457d-a299-0a6b9bcb3e24-bound-sa-token" seLinuxMountContext="" Mar 20 08:49:50.069511 master-0 kubenswrapper[27820]: I0320 08:49:50.067573 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="61ab4d32-c732-4be5-aa85-a2e1dd21cb60" volumeName="kubernetes.io/configmap/61ab4d32-c732-4be5-aa85-a2e1dd21cb60-config" seLinuxMountContext="" Mar 20 08:49:50.069511 master-0 kubenswrapper[27820]: I0320 08:49:50.067588 27820 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="d26a4fce-8eed-44d0-96a3-40ffd0b336a6" volumeName="kubernetes.io/secret/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-ovn-node-metrics-cert" seLinuxMountContext="" Mar 20 08:49:50.069511 master-0 kubenswrapper[27820]: I0320 08:49:50.067604 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="04466971-127b-403e-af45-dad97b6e0c87" volumeName="kubernetes.io/projected/04466971-127b-403e-af45-dad97b6e0c87-kube-api-access-wkh2f" seLinuxMountContext="" Mar 20 08:49:50.069511 master-0 kubenswrapper[27820]: I0320 08:49:50.067617 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="123f1ecb-cc03-462b-b76f-7251bf69d3d6" volumeName="kubernetes.io/empty-dir/123f1ecb-cc03-462b-b76f-7251bf69d3d6-node-exporter-textfile" seLinuxMountContext="" Mar 20 08:49:50.069511 master-0 kubenswrapper[27820]: I0320 08:49:50.067630 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072" volumeName="kubernetes.io/projected/8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072-kube-api-access-hnk9k" seLinuxMountContext="" Mar 20 08:49:50.069511 master-0 kubenswrapper[27820]: I0320 08:49:50.067646 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ca56e37d-80ea-432b-a6d9-f4e904a40e10" volumeName="kubernetes.io/configmap/ca56e37d-80ea-432b-a6d9-f4e904a40e10-audit" seLinuxMountContext="" Mar 20 08:49:50.069511 master-0 kubenswrapper[27820]: I0320 08:49:50.067660 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="23003a2f-2053-47cc-8133-23eb886d4da0" volumeName="kubernetes.io/projected/23003a2f-2053-47cc-8133-23eb886d4da0-kube-api-access-q7gdm" seLinuxMountContext="" Mar 20 08:49:50.069511 master-0 kubenswrapper[27820]: I0320 08:49:50.067673 27820 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="fec3170d-3f3e-42f5-b20a-da53721c0dac" volumeName="kubernetes.io/secret/fec3170d-3f3e-42f5-b20a-da53721c0dac-etcd-client" seLinuxMountContext="" Mar 20 08:49:50.069511 master-0 kubenswrapper[27820]: I0320 08:49:50.067687 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22ff82cf-0d7d-4955-9b7c-97757acbc021" volumeName="kubernetes.io/configmap/22ff82cf-0d7d-4955-9b7c-97757acbc021-cni-binary-copy" seLinuxMountContext="" Mar 20 08:49:50.069511 master-0 kubenswrapper[27820]: I0320 08:49:50.067702 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="97ad1db7-0bf9-4faf-9fa5-0f3df7dab777" volumeName="kubernetes.io/projected/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777-kube-api-access-w5wnd" seLinuxMountContext="" Mar 20 08:49:50.069511 master-0 kubenswrapper[27820]: I0320 08:49:50.067716 27820 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6610936-e14a-4532-955c-ea1ee4222259" volumeName="kubernetes.io/configmap/b6610936-e14a-4532-955c-ea1ee4222259-images" seLinuxMountContext="" Mar 20 08:49:50.069511 master-0 kubenswrapper[27820]: I0320 08:49:50.067921 27820 reconstruct.go:97] "Volume reconstruction finished" Mar 20 08:49:50.069511 master-0 kubenswrapper[27820]: I0320 08:49:50.067931 27820 reconciler.go:26] "Reconciler: start to sync state" Mar 20 08:49:50.073230 master-0 kubenswrapper[27820]: I0320 08:49:50.071710 27820 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 20 08:49:50.073230 master-0 kubenswrapper[27820]: I0320 08:49:50.071814 27820 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Mar 20 08:49:50.073707 master-0 kubenswrapper[27820]: I0320 08:49:50.073654 27820 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 20 08:49:50.074205 master-0 kubenswrapper[27820]: I0320 08:49:50.073711 27820 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 20 08:49:50.074205 master-0 kubenswrapper[27820]: I0320 08:49:50.073745 27820 kubelet.go:2335] "Starting kubelet main sync loop" Mar 20 08:49:50.074205 master-0 kubenswrapper[27820]: E0320 08:49:50.074018 27820 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 20 08:49:50.077463 master-0 kubenswrapper[27820]: I0320 08:49:50.077094 27820 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 20 08:49:50.090013 master-0 kubenswrapper[27820]: I0320 08:49:50.089909 27820 generic.go:334] "Generic (PLEG): container finished" podID="6a6a187d-5b25-4d63-939e-c04e07369371" containerID="8e319f4d734fd58b9a56147a1a2739f2e0bba0c55aaa97507d13bc7de8bfc3f1" exitCode=0 Mar 20 08:49:50.092547 master-0 kubenswrapper[27820]: I0320 08:49:50.092404 27820 generic.go:334] "Generic (PLEG): container finished" podID="26923e70-56a5-4020-8b55-510879ec6fd4" containerID="4efa2d7ff0f9f10f26d4d217feeb2ea6ecccefb675bc71c18faa7c5fe6db33c6" exitCode=0 Mar 20 08:49:50.096938 master-0 kubenswrapper[27820]: I0320 08:49:50.096898 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-dknxr_22f85e98-eb36-46b2-ab5d-7c21e060cba5/ingress-operator/4.log" Mar 20 08:49:50.097445 master-0 kubenswrapper[27820]: I0320 08:49:50.097374 27820 generic.go:334] "Generic (PLEG): container finished" podID="22f85e98-eb36-46b2-ab5d-7c21e060cba5" containerID="3aae9bbf36ed2a17618b0c45a78846786d8690aa98bce9f3e1b7b0243ecc47f2" exitCode=1 Mar 20 08:49:50.101125 master-0 kubenswrapper[27820]: I0320 08:49:50.101098 27820 generic.go:334] "Generic (PLEG): container finished" podID="71ca96e8-5108-455c-bb3c-17977d38e912" 
containerID="f61b725a79fff556468b0126e41778d167b8a31ec8526a9c664ab434b3c33c45" exitCode=0 Mar 20 08:49:50.103982 master-0 kubenswrapper[27820]: I0320 08:49:50.103935 27820 generic.go:334] "Generic (PLEG): container finished" podID="210dd7f0-d1c0-407a-b89b-f11ef605e5df" containerID="80eb123c688aa3fa3410485be400247180c54ec6ea64ffab5e44c11edb58320f" exitCode=0 Mar 20 08:49:50.112977 master-0 kubenswrapper[27820]: I0320 08:49:50.112951 27820 generic.go:334] "Generic (PLEG): container finished" podID="521086da-d513-4475-8db5-098ab9838df1" containerID="35c674a122271104b677e9d9fd6224e868e82108125b554a6b281e82916a6b0b" exitCode=0 Mar 20 08:49:50.115495 master-0 kubenswrapper[27820]: I0320 08:49:50.115460 27820 generic.go:334] "Generic (PLEG): container finished" podID="9635cdae-0983-4c97-b3ed-dc7a785b1bb6" containerID="c390ada5286d8adbcd2f8c4da2b3fb1c764bd2a56eb30ce5a1fc2fc1a428f30e" exitCode=0 Mar 20 08:49:50.115595 master-0 kubenswrapper[27820]: I0320 08:49:50.115497 27820 generic.go:334] "Generic (PLEG): container finished" podID="9635cdae-0983-4c97-b3ed-dc7a785b1bb6" containerID="c9a695d4652da7db7f3ebcef0da143cf28a9dbbbb25aee4013a1e44bb00f1e39" exitCode=0 Mar 20 08:49:50.118337 master-0 kubenswrapper[27820]: I0320 08:49:50.118310 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-866dc4744-626qm_2d125bc5-08ce-434a-bde7-0ba8fc0169ea/cluster-autoscaler-operator/0.log" Mar 20 08:49:50.118958 master-0 kubenswrapper[27820]: I0320 08:49:50.118931 27820 generic.go:334] "Generic (PLEG): container finished" podID="2d125bc5-08ce-434a-bde7-0ba8fc0169ea" containerID="7c71ba6860012685e763d6be0a28f9f4eedf51541e431293b43883fadda65c94" exitCode=255 Mar 20 08:49:50.122074 master-0 kubenswrapper[27820]: I0320 08:49:50.122044 27820 generic.go:334] "Generic (PLEG): container finished" podID="8413125cf444e5c95f023c5dd9c6151e" containerID="977167918f7e6bd33389cf095bf0a1f6441c8367a8bb9ad4ad8439f4003209b0" exitCode=0 Mar 20 
08:49:50.128434 master-0 kubenswrapper[27820]: I0320 08:49:50.128395 27820 generic.go:334] "Generic (PLEG): container finished" podID="4f6c819a-5074-4d29-84c8-e187528ad757" containerID="cf84a262e3cc737c426a3ee34816aa6cd8e8defa929f970e838849ec973bd55a" exitCode=0 Mar 20 08:49:50.128434 master-0 kubenswrapper[27820]: I0320 08:49:50.128429 27820 generic.go:334] "Generic (PLEG): container finished" podID="4f6c819a-5074-4d29-84c8-e187528ad757" containerID="b058c3dbb12dfe93f678a1cd234084a98f5f906462ebd3bf89f71382d647769f" exitCode=0 Mar 20 08:49:50.142286 master-0 kubenswrapper[27820]: I0320 08:49:50.142241 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-gng67_a2a3df6e-e327-4e97-b8f0-f2d6cdd1e5f9/snapshot-controller/3.log" Mar 20 08:49:50.142371 master-0 kubenswrapper[27820]: I0320 08:49:50.142322 27820 generic.go:334] "Generic (PLEG): container finished" podID="a2a3df6e-e327-4e97-b8f0-f2d6cdd1e5f9" containerID="43bec40b593829fc4ae8b2676c3d74b6d0bc176c4e642877e74797d8bc72bb1e" exitCode=1 Mar 20 08:49:50.149797 master-0 kubenswrapper[27820]: I0320 08:49:50.149755 27820 generic.go:334] "Generic (PLEG): container finished" podID="d26a4fce-8eed-44d0-96a3-40ffd0b336a6" containerID="c61822f24caad65a896a136b258da1c07b65503ea37e7992a32f53bc007f40ea" exitCode=0 Mar 20 08:49:50.151272 master-0 kubenswrapper[27820]: I0320 08:49:50.151240 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-b25f2_f202273a-b111-46ce-b404-7e481d2c7ff9/cluster-baremetal-operator/2.log" Mar 20 08:49:50.151501 master-0 kubenswrapper[27820]: I0320 08:49:50.151481 27820 generic.go:334] "Generic (PLEG): container finished" podID="f202273a-b111-46ce-b404-7e481d2c7ff9" containerID="052c5ab7353e85c711ba5bfca92fff712af9b1bed63f53526dee82d528399bb3" exitCode=1 Mar 20 08:49:50.154287 master-0 kubenswrapper[27820]: I0320 08:49:50.154233 27820 generic.go:334] 
"Generic (PLEG): container finished" podID="20ff930f-ec0d-40ed-a879-1546691f685d" containerID="d20a1459d97b6e06a1f2acdb938648d68b1fc12871ed4ca115c971b404c404f0" exitCode=0 Mar 20 08:49:50.157670 master-0 kubenswrapper[27820]: I0320 08:49:50.157652 27820 generic.go:334] "Generic (PLEG): container finished" podID="65157a9b-3df7-4cc1-a85a-a5dfa59921ad" containerID="59a8653cd7835805f3353ca3030def7794cc3d5df739fff211964fc11ce38845" exitCode=0 Mar 20 08:49:50.161567 master-0 kubenswrapper[27820]: I0320 08:49:50.161537 27820 generic.go:334] "Generic (PLEG): container finished" podID="23003a2f-2053-47cc-8133-23eb886d4da0" containerID="a76cae891ac1ac170b3b4bc00acda8e3f7397c5dae09b35ed265abb8477e72cb" exitCode=0 Mar 20 08:49:50.163184 master-0 kubenswrapper[27820]: I0320 08:49:50.162827 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_cce21ae1-63de-49be-a027-084a101e650b/installer/0.log" Mar 20 08:49:50.163184 master-0 kubenswrapper[27820]: I0320 08:49:50.162866 27820 generic.go:334] "Generic (PLEG): container finished" podID="cce21ae1-63de-49be-a027-084a101e650b" containerID="08b76c47992e775acd809c6af275e2c7e9a0096419764ac5862de8d43565af46" exitCode=1 Mar 20 08:49:50.165135 master-0 kubenswrapper[27820]: I0320 08:49:50.165115 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-3-master-0_fae0c983-2cb4-4749-97ff-a718a9fb6563/installer/0.log" Mar 20 08:49:50.165184 master-0 kubenswrapper[27820]: I0320 08:49:50.165146 27820 generic.go:334] "Generic (PLEG): container finished" podID="fae0c983-2cb4-4749-97ff-a718a9fb6563" containerID="8db9b6351ac69b67c8e87136c1df3fa9a0513a97038d7ea0f58a226f57e933df" exitCode=1 Mar 20 08:49:50.166656 master-0 kubenswrapper[27820]: I0320 08:49:50.166639 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_84b1b51a-cbfa-42de-9fb8-315e9cb76b58/installer/0.log" Mar 20 
08:49:50.166734 master-0 kubenswrapper[27820]: I0320 08:49:50.166667 27820 generic.go:334] "Generic (PLEG): container finished" podID="84b1b51a-cbfa-42de-9fb8-315e9cb76b58" containerID="9195f1dfc14cd53890895128ba6b2082162a13670d2ec403d7a28c0918592666" exitCode=1 Mar 20 08:49:50.169636 master-0 kubenswrapper[27820]: I0320 08:49:50.169603 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-57777556ff-rnnfz_bb7b640f-22be-41a9-8ab2-e7ae817e2eb0/manager/1.log" Mar 20 08:49:50.169934 master-0 kubenswrapper[27820]: I0320 08:49:50.169899 27820 generic.go:334] "Generic (PLEG): container finished" podID="bb7b640f-22be-41a9-8ab2-e7ae817e2eb0" containerID="fdbecd46c29424d901b7160c849f7507fe2bca8ead0c17c0f2a34bfa2349bd5b" exitCode=1 Mar 20 08:49:50.171657 master-0 kubenswrapper[27820]: I0320 08:49:50.171624 27820 generic.go:334] "Generic (PLEG): container finished" podID="ca56e37d-80ea-432b-a6d9-f4e904a40e10" containerID="3d7b06fc76103946132a85d04845bb83f54fb34b66bfd2a1c6aa9a2bee7fdecc" exitCode=0 Mar 20 08:49:50.173769 master-0 kubenswrapper[27820]: I0320 08:49:50.173747 27820 generic.go:334] "Generic (PLEG): container finished" podID="c83737980b9ee109184b1d78e942cf36" containerID="44e6488658001ec197750deb888ad4cc53ef741359268344dae6149df1e9b900" exitCode=0 Mar 20 08:49:50.174093 master-0 kubenswrapper[27820]: E0320 08:49:50.174071 27820 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 20 08:49:50.176686 master-0 kubenswrapper[27820]: I0320 08:49:50.176643 27820 generic.go:334] "Generic (PLEG): container finished" podID="74bebf0b-6727-4959-8239-a9389e630524" containerID="c75547816c7beb0588174159cdcc45e5aaa905924c1e2a6b0d4ab73f71bb71c9" exitCode=0 Mar 20 08:49:50.180236 master-0 kubenswrapper[27820]: I0320 08:49:50.180217 27820 generic.go:334] "Generic (PLEG): container finished" podID="f6a6e991-c861-48f5-bfde-78762a037343" 
containerID="ae2ac69f50f92b147c6e0e54b3efd3a8fd07b958b39ed5539b1432cc17005897" exitCode=0 Mar 20 08:49:50.184441 master-0 kubenswrapper[27820]: I0320 08:49:50.184400 27820 generic.go:334] "Generic (PLEG): container finished" podID="64d09f81-5fb6-462a-a736-5649779a6b1a" containerID="1e83bbe7ff1cdd771e7b861105c79c9f038ba7c1e62e6423e1143134dfc130c3" exitCode=0 Mar 20 08:49:50.184529 master-0 kubenswrapper[27820]: I0320 08:49:50.184438 27820 generic.go:334] "Generic (PLEG): container finished" podID="64d09f81-5fb6-462a-a736-5649779a6b1a" containerID="91b43fcbdd5ca279c1c93dfa907a3ddb56ecb16c22e3a3346f458ab45ff2c368" exitCode=0 Mar 20 08:49:50.185969 master-0 kubenswrapper[27820]: I0320 08:49:50.185944 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-2-master-0_5cdd5ac8-4c2e-4680-b697-0e5d94136fe4/installer/0.log" Mar 20 08:49:50.186012 master-0 kubenswrapper[27820]: I0320 08:49:50.185982 27820 generic.go:334] "Generic (PLEG): container finished" podID="5cdd5ac8-4c2e-4680-b697-0e5d94136fe4" containerID="6431ba0942f1d93ec67e79edabc01c308dcb065395ccf7185622d3bd7f0075b2" exitCode=1 Mar 20 08:49:50.195336 master-0 kubenswrapper[27820]: I0320 08:49:50.195305 27820 generic.go:334] "Generic (PLEG): container finished" podID="2a25b643-c08d-462f-80f4-8a4feb1e26e8" containerID="9c4160ccfce4a1ed7d4a8b39bc1968845b7b8a2ab8792b3e93cfa7765e5fa689" exitCode=0 Mar 20 08:49:50.201440 master-0 kubenswrapper[27820]: I0320 08:49:50.201398 27820 generic.go:334] "Generic (PLEG): container finished" podID="75cef5aa-93e6-4b8b-9ab1-06809e85883a" containerID="dc68fd475ff9f6055eceb076d1b60266600d047f4d29a9bd68c9771cc87efbc5" exitCode=0 Mar 20 08:49:50.205021 master-0 kubenswrapper[27820]: I0320 08:49:50.205003 27820 generic.go:334] "Generic (PLEG): container finished" podID="fec3170d-3f3e-42f5-b20a-da53721c0dac" containerID="36c39e8f2f6bf69b1f66f4972d7671c5d3fca0023fda940a2b8538766e8e200d" exitCode=0 Mar 20 08:49:50.207153 master-0 
kubenswrapper[27820]: I0320 08:49:50.207126 27820 generic.go:334] "Generic (PLEG): container finished" podID="e9c0293a-5340-4ebe-bc8f-43e78ba9f280" containerID="a7ec1ed13e0a355d823b781b053862cbbd8f7a00b211a40b600daee7dc545186" exitCode=0 Mar 20 08:49:50.212159 master-0 kubenswrapper[27820]: I0320 08:49:50.212119 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-6864dc98f7-tf2gj_08d9196b-b68f-421b-8754-bfbaa4020a97/manager/1.log" Mar 20 08:49:50.212520 master-0 kubenswrapper[27820]: I0320 08:49:50.212487 27820 generic.go:334] "Generic (PLEG): container finished" podID="08d9196b-b68f-421b-8754-bfbaa4020a97" containerID="ce2fcc1081bfcbeb7f4d07807c1a93a611637f696cdc2c93642a97a10714d449" exitCode=1 Mar 20 08:49:50.217047 master-0 kubenswrapper[27820]: I0320 08:49:50.217024 27820 generic.go:334] "Generic (PLEG): container finished" podID="22ff82cf-0d7d-4955-9b7c-97757acbc021" containerID="66935bc88a172084ce89ee3474a8817878b895f87e27bbd9f994bbea54a28d58" exitCode=0 Mar 20 08:49:50.217047 master-0 kubenswrapper[27820]: I0320 08:49:50.217042 27820 generic.go:334] "Generic (PLEG): container finished" podID="22ff82cf-0d7d-4955-9b7c-97757acbc021" containerID="1ad464d19cae2361db03cbce68a3a46d3a3a7e57495ff1c59b795128f430f3c3" exitCode=0 Mar 20 08:49:50.217137 master-0 kubenswrapper[27820]: I0320 08:49:50.217050 27820 generic.go:334] "Generic (PLEG): container finished" podID="22ff82cf-0d7d-4955-9b7c-97757acbc021" containerID="633e246d0eb69524c4e825553d8b2a17d7166e97b618f96a41148d7625aa5ed0" exitCode=0 Mar 20 08:49:50.217137 master-0 kubenswrapper[27820]: I0320 08:49:50.217059 27820 generic.go:334] "Generic (PLEG): container finished" podID="22ff82cf-0d7d-4955-9b7c-97757acbc021" containerID="49a024c7c79250dd61c634f6e633e0edd247a3c463686f54208b638a2fd19ebb" exitCode=0 Mar 20 08:49:50.217137 master-0 kubenswrapper[27820]: I0320 08:49:50.217065 27820 generic.go:334] "Generic (PLEG): container finished" 
podID="22ff82cf-0d7d-4955-9b7c-97757acbc021" containerID="e286a3213c5346d10ff0d6cbc953c4d1baa37806e4134a08a01aa0b21b03e73b" exitCode=0 Mar 20 08:49:50.217137 master-0 kubenswrapper[27820]: I0320 08:49:50.217074 27820 generic.go:334] "Generic (PLEG): container finished" podID="22ff82cf-0d7d-4955-9b7c-97757acbc021" containerID="40ff7a57f1be617cf7f13a7b182aa09a2d94c4736efa61da1185a107268ed08d" exitCode=0 Mar 20 08:49:50.218511 master-0 kubenswrapper[27820]: I0320 08:49:50.218471 27820 generic.go:334] "Generic (PLEG): container finished" podID="9775cc27-53b9-4d21-a98b-84b39ada32ee" containerID="8b5711cce3fb17d8c5298b374ea763f137a6631ab7f8f0ff687f48b345639df0" exitCode=0 Mar 20 08:49:50.221651 master-0 kubenswrapper[27820]: I0320 08:49:50.221612 27820 generic.go:334] "Generic (PLEG): container finished" podID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerID="11cabf5eb98c82a93ca52ddd09d57840189a4d526c1e14f354cfdcf56ad250fe" exitCode=0 Mar 20 08:49:50.226964 master-0 kubenswrapper[27820]: I0320 08:49:50.226915 27820 generic.go:334] "Generic (PLEG): container finished" podID="3ea52b89-46f9-4685-aecd-162ba92baaf5" containerID="5e7daf3466466f866a8a609c3357214ad22e67b72e11f87494389948c897e7d2" exitCode=0 Mar 20 08:49:50.229823 master-0 kubenswrapper[27820]: I0320 08:49:50.229799 27820 generic.go:334] "Generic (PLEG): container finished" podID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerID="019791132b7e834bce74e8041e55c6a38294bf9b8b3e87df50daf0d5b306fb4e" exitCode=0 Mar 20 08:49:50.232556 master-0 kubenswrapper[27820]: I0320 08:49:50.232534 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-5c6485487f-897zl_14fe2256-ce3d-4c70-9e6b-0e3c7d4d96dc/machine-approver-controller/0.log" Mar 20 08:49:50.232877 master-0 kubenswrapper[27820]: I0320 08:49:50.232846 27820 generic.go:334] "Generic (PLEG): container finished" podID="14fe2256-ce3d-4c70-9e6b-0e3c7d4d96dc" 
containerID="45ed4d229b5e4ffda3ad9ee3a6c6c79dd79e664c69394337cb1c4fc4b2036f31" exitCode=255 Mar 20 08:49:50.234443 master-0 kubenswrapper[27820]: I0320 08:49:50.234423 27820 generic.go:334] "Generic (PLEG): container finished" podID="acbaba45-12d9-40b9-818c-4b091d7929b1" containerID="6a9d899a8eb10974cc6c4342f48d72d6fc952b94defbc645eeeae9b0a3d84f6a" exitCode=0 Mar 20 08:49:50.237746 master-0 kubenswrapper[27820]: I0320 08:49:50.237725 27820 generic.go:334] "Generic (PLEG): container finished" podID="b43744de-5bc3-4f1d-91a4-c54e2a3a7ffc" containerID="0f65346a38596f758067a95721b4b8d598991f6450f547c5688592057337ba23" exitCode=0 Mar 20 08:49:50.237746 master-0 kubenswrapper[27820]: I0320 08:49:50.237745 27820 generic.go:334] "Generic (PLEG): container finished" podID="b43744de-5bc3-4f1d-91a4-c54e2a3a7ffc" containerID="31eea8f8908cce83a9e43c16d0440c72175117897d7cc72e9c66a228fb48965a" exitCode=0 Mar 20 08:49:50.239352 master-0 kubenswrapper[27820]: I0320 08:49:50.239325 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8c94f4649-w8c24_61ab4d32-c732-4be5-aa85-a2e1dd21cb60/openshift-controller-manager-operator/1.log" Mar 20 08:49:50.239395 master-0 kubenswrapper[27820]: I0320 08:49:50.239361 27820 generic.go:334] "Generic (PLEG): container finished" podID="61ab4d32-c732-4be5-aa85-a2e1dd21cb60" containerID="574f438252b4f47fa3b61032cc6a4a935112d82ebdef8b14155e36ebb82ca9af" exitCode=255 Mar 20 08:49:50.241701 master-0 kubenswrapper[27820]: I0320 08:49:50.241672 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-vk98n_6163bd4b-dc83-4e83-8590-5ac4753bda1c/config-sync-controllers/0.log" Mar 20 08:49:50.242216 master-0 kubenswrapper[27820]: I0320 08:49:50.242186 27820 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-vk98n_6163bd4b-dc83-4e83-8590-5ac4753bda1c/cluster-cloud-controller-manager/0.log" Mar 20 08:49:50.242294 master-0 kubenswrapper[27820]: I0320 08:49:50.242239 27820 generic.go:334] "Generic (PLEG): container finished" podID="6163bd4b-dc83-4e83-8590-5ac4753bda1c" containerID="8318704eaa08899e772deabe42128ea1b882f7234facbd87ca64f6d3f0952a1a" exitCode=1 Mar 20 08:49:50.242294 master-0 kubenswrapper[27820]: I0320 08:49:50.242260 27820 generic.go:334] "Generic (PLEG): container finished" podID="6163bd4b-dc83-4e83-8590-5ac4753bda1c" containerID="52016baf23be09eb560f695ee764aa3c366d61ff1792a482aac5922ed083323d" exitCode=1 Mar 20 08:49:50.244585 master-0 kubenswrapper[27820]: I0320 08:49:50.244551 27820 generic.go:334] "Generic (PLEG): container finished" podID="09a5682c-4f13-4b8c-8179-3e6dfa8f98db" containerID="38ba09231d63afd93a0205a5845a80e4d47fa8290768d886cf1c7ea448f682d8" exitCode=0 Mar 20 08:49:50.246143 master-0 kubenswrapper[27820]: I0320 08:49:50.246112 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-4-master-0_92600726-933f-41eb-a329-1fcc68dc95c1/installer/0.log" Mar 20 08:49:50.246186 master-0 kubenswrapper[27820]: I0320 08:49:50.246158 27820 generic.go:334] "Generic (PLEG): container finished" podID="92600726-933f-41eb-a329-1fcc68dc95c1" containerID="37909af3090055d773495c88ec18992da7d8fea5935c4a6afb5893aaa0a777f4" exitCode=1 Mar 20 08:49:50.248085 master-0 kubenswrapper[27820]: I0320 08:49:50.248067 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7bd846bfc4-x4w25_9817d1ec-3d7c-49fb-8e41-26f5727ef9e8/network-operator/0.log" Mar 20 08:49:50.248127 master-0 kubenswrapper[27820]: I0320 08:49:50.248097 27820 generic.go:334] "Generic (PLEG): container finished" podID="9817d1ec-3d7c-49fb-8e41-26f5727ef9e8" 
containerID="51d5ff19316ba50d65a137d07edaf8d44d3c66d7ea87669b610c77e6e7a5026d" exitCode=255 Mar 20 08:49:50.252552 master-0 kubenswrapper[27820]: I0320 08:49:50.252345 27820 generic.go:334] "Generic (PLEG): container finished" podID="094204df314fe45bd5af12ca1b4622bb" containerID="3102a41a904a505c496c9e6ff056d38d7935cf53ed7153f14bbd8b5057d5541a" exitCode=0 Mar 20 08:49:50.252552 master-0 kubenswrapper[27820]: I0320 08:49:50.252372 27820 generic.go:334] "Generic (PLEG): container finished" podID="094204df314fe45bd5af12ca1b4622bb" containerID="d88e93757522ae39b5517291f3c06f1dd6bd6427800d2bd825b8a5c55305f18d" exitCode=0 Mar 20 08:49:50.252552 master-0 kubenswrapper[27820]: I0320 08:49:50.252382 27820 generic.go:334] "Generic (PLEG): container finished" podID="094204df314fe45bd5af12ca1b4622bb" containerID="89681a264aa64084b2aa38ba642cb89ce6a4bb719fa716689bf3853f8249b887" exitCode=0 Mar 20 08:49:50.257297 master-0 kubenswrapper[27820]: I0320 08:49:50.257266 27820 generic.go:334] "Generic (PLEG): container finished" podID="2ef691ec-d1f0-4c97-97e4-4aa7a6c0a86c" containerID="eb83d7b52ee34a208a7d7d8320582445204a3a3c9a564d3c4ad584270b43c58c" exitCode=0 Mar 20 08:49:50.260530 master-0 kubenswrapper[27820]: I0320 08:49:50.260472 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-95bf4f4d-25jrp_3065e4b4-4493-41ce-b9d2-89315475f74f/openshift-config-operator/2.log" Mar 20 08:49:50.260913 master-0 kubenswrapper[27820]: I0320 08:49:50.260883 27820 generic.go:334] "Generic (PLEG): container finished" podID="3065e4b4-4493-41ce-b9d2-89315475f74f" containerID="1b69d08e43e09461f0726c1193441ee601de85f0b5b8a1e604d076708c64775f" exitCode=255 Mar 20 08:49:50.260913 master-0 kubenswrapper[27820]: I0320 08:49:50.260906 27820 generic.go:334] "Generic (PLEG): container finished" podID="3065e4b4-4493-41ce-b9d2-89315475f74f" containerID="79d1a04f16780a30204d3fb5aa6261f513e7c954544e8ecbd91d389cc77dbe03" exitCode=0 Mar 20 
08:49:50.262314 master-0 kubenswrapper[27820]: I0320 08:49:50.262259 27820 generic.go:334] "Generic (PLEG): container finished" podID="1746482a-d1a3-4eac-8bc9-643b6af75163" containerID="a1b4eceb0f2328786d0d5d45adc257b068090b4a532ca9b2a6eb0db19b8abba4" exitCode=0 Mar 20 08:49:50.265414 master-0 kubenswrapper[27820]: I0320 08:49:50.265357 27820 generic.go:334] "Generic (PLEG): container finished" podID="8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072" containerID="cbb3f75129c9d64cd795c59facd72277d5aa4e6c03360f86cd3b579cb2e915c3" exitCode=0 Mar 20 08:49:50.270641 master-0 kubenswrapper[27820]: I0320 08:49:50.270611 27820 generic.go:334] "Generic (PLEG): container finished" podID="123f1ecb-cc03-462b-b76f-7251bf69d3d6" containerID="96058a0b48f5954e1e280e02b2139f100552b410ebee73d3b0fd6e4aa44bd764" exitCode=0 Mar 20 08:49:50.275495 master-0 kubenswrapper[27820]: I0320 08:49:50.275452 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-dq29v_9d653bfa-7168-49fa-a838-aedb33c7e60f/approver/1.log" Mar 20 08:49:50.275888 master-0 kubenswrapper[27820]: I0320 08:49:50.275843 27820 generic.go:334] "Generic (PLEG): container finished" podID="9d653bfa-7168-49fa-a838-aedb33c7e60f" containerID="4306eaa225527d3607228fe5a76b2f9df384e1155f171d8c00c7646ffafef9a4" exitCode=1 Mar 20 08:49:50.286456 master-0 kubenswrapper[27820]: I0320 08:49:50.286404 27820 generic.go:334] "Generic (PLEG): container finished" podID="169353ee-c927-4483-8976-b9ca08b0a6d1" containerID="c35a5738f2f9a6fb340b75e09b70d5c9961a967d646e1417a2634fd74ebeb167" exitCode=0 Mar 20 08:49:50.288257 master-0 kubenswrapper[27820]: I0320 08:49:50.288214 27820 generic.go:334] "Generic (PLEG): container finished" podID="2faf85a2-29bb-4275-a12b-0ef1663a4f0d" containerID="9b538e53e002b24081578246c7d675b101b228304a8e87c5077457c1455c343d" exitCode=0 Mar 20 08:49:50.290224 master-0 kubenswrapper[27820]: I0320 08:49:50.290193 27820 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_machine-api-operator-6fbb6cf6f9-lr7tb_80ddf0a4-e853-4de0-b540-81144dfdd31d/machine-api-operator/0.log" Mar 20 08:49:50.290638 master-0 kubenswrapper[27820]: I0320 08:49:50.290594 27820 generic.go:334] "Generic (PLEG): container finished" podID="80ddf0a4-e853-4de0-b540-81144dfdd31d" containerID="462a8070d6a4a84bd5f75252bfbebd1aeca669c870c803cc819af47a7fc47625" exitCode=255 Mar 20 08:49:50.292749 master-0 kubenswrapper[27820]: I0320 08:49:50.292722 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6f97756bc8-tkwh6_a86af6a2-55a9-4c4e-8caf-1f51fedb23f5/control-plane-machine-set-operator/0.log" Mar 20 08:49:50.292749 master-0 kubenswrapper[27820]: I0320 08:49:50.292758 27820 generic.go:334] "Generic (PLEG): container finished" podID="a86af6a2-55a9-4c4e-8caf-1f51fedb23f5" containerID="3033684921b500c0cdc5a887bfabd7fc5e3c9f8cea2dfed120b0981d20756634" exitCode=1 Mar 20 08:49:50.296033 master-0 kubenswrapper[27820]: I0320 08:49:50.296003 27820 generic.go:334] "Generic (PLEG): container finished" podID="e9425526-9f51-4302-a19d-a8107f56c582" containerID="ede2ef38ba8d0fe732989d57db50e82ff2ef33b1e7f1869b8d140d9c93969650" exitCode=0 Mar 20 08:49:50.296033 master-0 kubenswrapper[27820]: I0320 08:49:50.296026 27820 generic.go:334] "Generic (PLEG): container finished" podID="e9425526-9f51-4302-a19d-a8107f56c582" containerID="7eace203ad1dd45a1a683d8c3e7772a2d39b397eee68cf9a1c7862a15d7b007d" exitCode=0 Mar 20 08:49:50.296122 master-0 kubenswrapper[27820]: I0320 08:49:50.296036 27820 generic.go:334] "Generic (PLEG): container finished" podID="e9425526-9f51-4302-a19d-a8107f56c582" containerID="903bd12c687f6625987bd7d1e46b200fb44d1a9e193c70ad2441cab58febeed2" exitCode=0 Mar 20 08:49:50.297992 master-0 kubenswrapper[27820]: I0320 08:49:50.297962 27820 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_1249822f86f23526277d165c0d5d3c19/kube-rbac-proxy-crio/2.log" Mar 20 08:49:50.298364 master-0 kubenswrapper[27820]: I0320 08:49:50.298304 27820 generic.go:334] "Generic (PLEG): container finished" podID="1249822f86f23526277d165c0d5d3c19" containerID="309e4777b97bbe0d7fb41e63077d3bc7d068d36eee7b9e7931a0c0261bdb0bbf" exitCode=1 Mar 20 08:49:50.298735 master-0 kubenswrapper[27820]: I0320 08:49:50.298363 27820 generic.go:334] "Generic (PLEG): container finished" podID="1249822f86f23526277d165c0d5d3c19" containerID="f46237550fb6588ccbb218d4b52be58120b3dd1d98e107a7ca8477306baad5dd" exitCode=0 Mar 20 08:49:50.375380 master-0 kubenswrapper[27820]: E0320 08:49:50.374569 27820 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 20 08:49:50.774803 master-0 kubenswrapper[27820]: E0320 08:49:50.774727 27820 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 20 08:49:51.010508 master-0 kubenswrapper[27820]: I0320 08:49:51.010423 27820 apiserver.go:52] "Watching apiserver" Mar 20 08:49:51.039387 master-0 kubenswrapper[27820]: I0320 08:49:51.039094 27820 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 20 08:49:51.575471 master-0 kubenswrapper[27820]: E0320 08:49:51.575363 27820 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 20 08:49:53.176519 master-0 kubenswrapper[27820]: E0320 08:49:53.176443 27820 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 20 08:49:56.377421 master-0 kubenswrapper[27820]: E0320 08:49:56.377344 27820 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 20 08:50:01.377875 master-0 kubenswrapper[27820]: E0320 
08:50:01.377786 27820 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 20 08:50:06.378687 master-0 kubenswrapper[27820]: E0320 08:50:06.378620 27820 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 20 08:50:08.858314 master-0 kubenswrapper[27820]: E0320 08:50:08.856999 27820 summary_sys_containers.go:89] "Failed to get system container stats" err="failed to get cgroup stats for \"/system.slice/crio.service\": failed to get container info for \"/system.slice/crio.service\": unknown container \"/system.slice/crio.service\"" containerName="/system.slice/crio.service" Mar 20 08:50:08.859347 master-0 kubenswrapper[27820]: E0320 08:50:08.858392 27820 summary_sys_containers.go:89] "Failed to get system container stats" err="failed to get cgroup stats for \"/kubepods.slice\": failed to get container info for \"/kubepods.slice\": unknown container \"/kubepods.slice\"" containerName="/kubepods.slice" Mar 20 08:50:11.379656 master-0 kubenswrapper[27820]: E0320 08:50:11.379581 27820 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 20 08:50:16.380224 master-0 kubenswrapper[27820]: E0320 08:50:16.380167 27820 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 20 08:50:20.186351 master-0 kubenswrapper[27820]: E0320 08:50:20.186216 27820 summary_sys_containers.go:89] "Failed to get system container stats" err="failed to get cgroup stats for \"/kubepods.slice\": failed to get container info for \"/kubepods.slice\": unknown container \"/kubepods.slice\"" containerName="/kubepods.slice" Mar 20 08:50:20.439900 master-0 kubenswrapper[27820]: I0320 08:50:20.439814 27820 manager.go:324] Recovery completed Mar 20 08:50:20.546000 master-0 kubenswrapper[27820]: I0320 08:50:20.545938 27820 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_b45ea2ef1cf2bc9d1d994d6538ae0a64/kube-apiserver-check-endpoints/0.log" Mar 20 08:50:20.548230 master-0 kubenswrapper[27820]: I0320 08:50:20.548176 27820 generic.go:334] "Generic (PLEG): container finished" podID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerID="4af0d14e2080acfab9b4be1c21f5c397bb2b57510a3ab1d14b3ae883125de902" exitCode=255 Mar 20 08:50:20.554635 master-0 kubenswrapper[27820]: I0320 08:50:20.554582 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-5dbbb8b86f-vhrdf_74bebf0b-6727-4959-8239-a9389e630524/multus-admission-controller/0.log" Mar 20 08:50:20.554895 master-0 kubenswrapper[27820]: I0320 08:50:20.554854 27820 generic.go:334] "Generic (PLEG): container finished" podID="74bebf0b-6727-4959-8239-a9389e630524" containerID="46a769eaa885d6f2aee7986a052f5cb914f5503a0051214e8b4e113fe0f1651a" exitCode=137 Mar 20 08:50:20.555399 master-0 kubenswrapper[27820]: I0320 08:50:20.555339 27820 cpu_manager.go:225] "Starting CPU manager" policy="none" Mar 20 08:50:20.555399 master-0 kubenswrapper[27820]: I0320 08:50:20.555382 27820 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 20 08:50:20.555592 master-0 kubenswrapper[27820]: I0320 08:50:20.555427 27820 state_mem.go:36] "Initialized new in-memory state store" Mar 20 08:50:20.555744 master-0 kubenswrapper[27820]: I0320 08:50:20.555700 27820 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 20 08:50:20.555828 master-0 kubenswrapper[27820]: I0320 08:50:20.555731 27820 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 20 08:50:20.555828 master-0 kubenswrapper[27820]: I0320 08:50:20.555765 27820 state_checkpoint.go:136] "State checkpoint: restored state from checkpoint" Mar 20 08:50:20.555828 master-0 kubenswrapper[27820]: I0320 08:50:20.555775 27820 state_checkpoint.go:137] "State checkpoint: defaultCPUSet" defaultCpuSet="" Mar 20 08:50:20.555828 master-0 
kubenswrapper[27820]: I0320 08:50:20.555784 27820 policy_none.go:49] "None policy: Start" Mar 20 08:50:20.558454 master-0 kubenswrapper[27820]: I0320 08:50:20.558391 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-xj8x6_45b3c788-eb83-448a-bc60-90b8ace28382/kube-multus-additional-cni-plugins/0.log" Mar 20 08:50:20.558577 master-0 kubenswrapper[27820]: I0320 08:50:20.558465 27820 generic.go:334] "Generic (PLEG): container finished" podID="45b3c788-eb83-448a-bc60-90b8ace28382" containerID="4bc3fd775c8ec67b9c69f57ed6fb56798f5748c1c5c893e65ef38d518581c15e" exitCode=137 Mar 20 08:50:20.560975 master-0 kubenswrapper[27820]: I0320 08:50:20.560918 27820 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 20 08:50:20.560975 master-0 kubenswrapper[27820]: I0320 08:50:20.560978 27820 state_mem.go:35] "Initializing new in-memory state store" Mar 20 08:50:20.561658 master-0 kubenswrapper[27820]: I0320 08:50:20.561611 27820 generic.go:334] "Generic (PLEG): container finished" podID="e89571b2-098c-495b-9b53-c4ebd95296ab" containerID="d70605680e08d7f319125bde3eeb41c693b146e24b422d7776788ac3b348829c" exitCode=0 Mar 20 08:50:20.561752 master-0 kubenswrapper[27820]: I0320 08:50:20.561723 27820 state_mem.go:75] "Updated machine memory state" Mar 20 08:50:20.561752 master-0 kubenswrapper[27820]: I0320 08:50:20.561742 27820 state_checkpoint.go:82] "State checkpoint: restored state from checkpoint" Mar 20 08:50:20.576786 master-0 kubenswrapper[27820]: I0320 08:50:20.576711 27820 manager.go:334] "Starting Device Plugin manager" Mar 20 08:50:20.576786 master-0 kubenswrapper[27820]: I0320 08:50:20.576763 27820 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 20 08:50:20.576786 master-0 kubenswrapper[27820]: I0320 08:50:20.576777 27820 server.go:79] "Starting device plugin registration server" Mar 20 08:50:20.577299 master-0 
kubenswrapper[27820]: I0320 08:50:20.577249 27820 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 20 08:50:20.577403 master-0 kubenswrapper[27820]: I0320 08:50:20.577288 27820 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 20 08:50:20.577481 master-0 kubenswrapper[27820]: I0320 08:50:20.577470 27820 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Mar 20 08:50:20.577595 master-0 kubenswrapper[27820]: I0320 08:50:20.577555 27820 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Mar 20 08:50:20.577595 master-0 kubenswrapper[27820]: I0320 08:50:20.577571 27820 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 20 08:50:20.677543 master-0 kubenswrapper[27820]: I0320 08:50:20.677471 27820 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 08:50:20.681649 master-0 kubenswrapper[27820]: I0320 08:50:20.681474 27820 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 20 08:50:20.682474 master-0 kubenswrapper[27820]: I0320 08:50:20.681804 27820 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 20 08:50:20.682474 master-0 kubenswrapper[27820]: I0320 08:50:20.681830 27820 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 20 08:50:20.682474 master-0 kubenswrapper[27820]: I0320 08:50:20.681989 27820 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 20 08:50:20.698570 master-0 kubenswrapper[27820]: I0320 08:50:20.698470 27820 kubelet_node_status.go:115] "Node was previously registered" node="master-0" Mar 20 08:50:20.698721 master-0 kubenswrapper[27820]: I0320 08:50:20.698614 27820 kubelet_node_status.go:79] "Successfully registered node" node="master-0" 
Mar 20 08:50:21.381718 master-0 kubenswrapper[27820]: I0320 08:50:21.381591 27820 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0","openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0","openshift-kube-controller-manager/kube-controller-manager-master-0","openshift-kube-scheduler/openshift-kube-scheduler-master-0","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0"]
Mar 20 08:50:21.382425 master-0 kubenswrapper[27820]: I0320 08:50:21.382250 27820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-9c5679d8f-xfns6","openshift-network-operator/iptables-alerter-9xlf2","openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-26fw9","openshift-cluster-node-tuning-operator/tuned-zgm52","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","openshift-kube-scheduler/installer-4-retry-1-master-0","openshift-machine-api/cluster-autoscaler-operator-866dc4744-626qm","openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-tkwh6","openshift-machine-config-operator/machine-config-daemon-lxv4d","openshift-multus/multus-admission-controller-5dbbb8b86f-vhrdf","assisted-installer/assisted-installer-controller-j6hxl","openshift-image-registry/cluster-image-registry-operator-5549dc66cb-cg8qr","openshift-marketplace/marketplace-operator-89ccd998f-j84r8","openshift-network-diagnostics/network-check-target-j9jjm","openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hdw98","openshift-dns/node-resolver-j7ngf","openshift-ingress-canary/ingress-canary-vzrlt","openshift-machine-config-operator/machine-config-server-6bd59","openshift-monitoring/openshift-state-metrics-5dc6c74576-qclrg","openshift-multus/network-metrics-daemon-nfrth","openshift-etcd/installer-2-master-0","openshift-kube-scheduler/openshift-kube-scheduler-master-0","openshift-machine-config-operator/machine-config-controller-b4f87c5b9-9fl6v","openshift-machine-config-operator/machine-config-operator-84d549f6d5-pkjcg","openshift-monitoring/node-exporter-rzg98","openshift-ingress/router-default-7dcf5569b5-kvmtp","openshift-operator-lifecycle-manager/olm-operator-5c9796789-t926t","openshift-route-controller-manager/route-controller-manager-7d86cb9b59-smbxv","openshift-etcd/installer-1-master-0","openshift-kube-controller-manager/kube-controller-manager-master-0","openshift-multus/multus-pxqwj","openshift-service-ca/service-ca-79bc6b8d76-trbxh","openshift-authentication-operator/authentication-operator-5885bfd7f4-tdpfq","openshift-config-operator/openshift-config-operator-95bf4f4d-25jrp","openshift-kube-controller-manager/installer-3-retry-1-master-0","openshift-machine-api/machine-api-operator-6fbb6cf6f9-lr7tb","openshift-monitoring/prometheus-operator-6c8df6d4b-2lwqr","openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-crrdk","openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-848gc","openshift-kube-apiserver/installer-1-master-0","openshift-kube-apiserver/installer-3-master-0","openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-vmwqt","openshift-marketplace/redhat-operators-bt7wn","openshift-operator-controller/operator-controller-controller-manager-57777556ff-rnnfz","openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zxgdk","openshift-cluster-version/cluster-version-operator-7d58488df-bzstx","openshift-kube-scheduler/installer-3-master-0","openshift-kube-scheduler/installer-4-master-0","openshift-kube-storage-version-migrator/migrator-8487694857-ltk2p","openshift-machine-api/cluster-baremetal-operator-6f69995874-b25f2","openshift-monitoring/kube-state-metrics-7bbc969446-28l2x","openshift-oauth-apiserver/apiserver-5595498c49-hrfrr","openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-gwtrl","openshift-operator-lifecycle-manager/packageserver-6f5545c99f-6sl9d","openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-cgc9q","openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vk98n","openshift-etcd-operator/etcd-operator-8544cbcf9c-7x9vq","openshift-kube-controller-manager/installer-2-master-0","openshift-marketplace/redhat-marketplace-hj5tl","openshift-network-node-identity/network-node-identity-dq29v","openshift-apiserver/apiserver-64b65cddf5-gx7h7","openshift-monitoring/cluster-monitoring-operator-58845fbb57-6vgt6","openshift-monitoring/metrics-server-55d84d7794-56n4c","openshift-kube-controller-manager/installer-3-master-0","openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-b5lg6","openshift-ingress-operator/ingress-operator-66b84d69b-dknxr","openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-xwkzx","openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-kxxrs","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-multus/cni-sysctl-allowlist-ds-xj8x6","openshift-network-operator/network-operator-7bd846bfc4-x4w25","openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-6mrwl","openshift-dns/dns-default-gskz6","openshift-etcd/etcd-master-0","openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-wbfrm","openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-kh8bg","openshift-multus/multus-additional-cni-plugins-x7vrg","openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-w8c24","openshift-controller-manager/controller-manager-bc85986b9-8p79x","openshift-insights/insights-operator-68bf6ff9d6-c7zf4","openshift-marketplace/community-operators-chfj7","openshift-multus/multus-admission-controller-58c9f8fc64-kr9hd","openshift-ovn-kubernetes/ovnkube-node-bvndl","openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-ntdqc","openshift-cluster-machine-approver/machine-approver-5c6485487f-897zl","openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-gng67","openshift-kube-apiserver/installer-2-master-0","openshift-marketplace/certified-operators-clrp2","openshift-network-diagnostics/network-check-source-b4bf74f6-nnjv9","openshift-service-ca-operator/service-ca-operator-b865698dc-zxwrd","openshift-catalogd/catalogd-controller-manager-6864dc98f7-tf2gj"]
Mar 20 08:50:21.382816 master-0 kubenswrapper[27820]: I0320 08:50:21.382601 27820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 20 08:50:21.382933 master-0 kubenswrapper[27820]: I0320 08:50:21.382899 27820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-j6hxl"
Mar 20 08:50:21.389562 master-0 kubenswrapper[27820]: I0320 08:50:21.389500 27820 kubelet.go:2566] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" mirrorPodUID="74e882e7-7513-46fa-a2e4-567779c5e860"
Mar 20 08:50:21.397555 master-0 kubenswrapper[27820]: I0320 08:50:21.396146 27820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0"
Mar 20 08:50:21.398551 master-0 kubenswrapper[27820]: I0320 08:50:21.398523 27820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0"
Mar 20 08:50:21.399210 master-0 kubenswrapper[27820]: I0320 08:50:21.399189 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images"
Mar 20 08:50:21.399429 master-0 kubenswrapper[27820]: I0320 08:50:21.399365 27820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0"
Mar 20 08:50:21.399763 master-0 kubenswrapper[27820]: I0320 08:50:21.399748 27820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0"
Mar 20 08:50:21.399885 master-0 kubenswrapper[27820]: I0320 08:50:21.399827 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Mar 20 08:50:21.400234 master-0 kubenswrapper[27820]: I0320 08:50:21.400170 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Mar 20 08:50:21.400369 master-0 kubenswrapper[27820]: I0320 08:50:21.400340 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Mar 20 08:50:21.400509 master-0 kubenswrapper[27820]: I0320 08:50:21.400486 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls"
Mar 20 08:50:21.400609 master-0 kubenswrapper[27820]: I0320 08:50:21.400588 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy"
Mar 20 08:50:21.400977 master-0 kubenswrapper[27820]: I0320 08:50:21.400951 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Mar 20 08:50:21.401504 master-0 kubenswrapper[27820]: I0320 08:50:21.401462 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls"
Mar 20 08:50:21.401578 master-0 kubenswrapper[27820]: I0320 08:50:21.401526 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt"
Mar 20 08:50:21.401578 master-0 kubenswrapper[27820]: I0320 08:50:21.401571 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Mar 20 08:50:21.401715 master-0 kubenswrapper[27820]: I0320 08:50:21.401694 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Mar 20 08:50:21.401763 master-0 kubenswrapper[27820]: I0320 08:50:21.401712 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt"
Mar 20 08:50:21.401814 master-0 kubenswrapper[27820]: I0320 08:50:21.401796 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Mar 20 08:50:21.401922 master-0 kubenswrapper[27820]: I0320 08:50:21.401892 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config"
Mar 20 08:50:21.401965 master-0 kubenswrapper[27820]: I0320 08:50:21.401926 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Mar 20 08:50:21.402051 master-0 kubenswrapper[27820]: I0320 08:50:21.402014 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Mar 20 08:50:21.402096 master-0 kubenswrapper[27820]: I0320 08:50:21.402078 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Mar 20 08:50:21.402128 master-0 kubenswrapper[27820]: I0320 08:50:21.402113 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt"
Mar 20 08:50:21.402210 master-0 kubenswrapper[27820]: I0320 08:50:21.401535 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Mar 20 08:50:21.402377 master-0 kubenswrapper[27820]: I0320 08:50:21.402187 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert"
Mar 20 08:50:21.402662 master-0 kubenswrapper[27820]: I0320 08:50:21.402630 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Mar 20 08:50:21.402945 master-0 kubenswrapper[27820]: I0320 08:50:21.402912 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt"
Mar 20 08:50:21.403387 master-0 kubenswrapper[27820]: I0320 08:50:21.403363 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Mar 20 08:50:21.403494 master-0 kubenswrapper[27820]: I0320 08:50:21.403473 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Mar 20 08:50:21.403622 master-0 kubenswrapper[27820]: I0320 08:50:21.403603 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Mar 20 08:50:21.404016 master-0 kubenswrapper[27820]: I0320 08:50:21.403940 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Mar 20 08:50:21.404308 master-0 kubenswrapper[27820]: I0320 08:50:21.404247 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Mar 20 08:50:21.408051 master-0 kubenswrapper[27820]: I0320 08:50:21.404704 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Mar 20 08:50:21.408051 master-0 kubenswrapper[27820]: I0320 08:50:21.405107 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Mar 20 08:50:21.408051 master-0 kubenswrapper[27820]: I0320 08:50:21.405676 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Mar 20 08:50:21.408051 master-0 kubenswrapper[27820]: I0320 08:50:21.400254 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Mar 20 08:50:21.408236 master-0 kubenswrapper[27820]: I0320 08:50:21.404494 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Mar 20 08:50:21.415762 master-0 kubenswrapper[27820]: I0320 08:50:21.415697 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Mar 20 08:50:21.436840 master-0 kubenswrapper[27820]: I0320 08:50:21.436749 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Mar 20 08:50:21.439761 master-0 kubenswrapper[27820]: I0320 08:50:21.439704 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Mar 20 08:50:21.440217 master-0 kubenswrapper[27820]: I0320 08:50:21.439968 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Mar 20 08:50:21.440411 master-0 kubenswrapper[27820]: I0320 08:50:21.440385 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Mar 20 08:50:21.440640 master-0 kubenswrapper[27820]: I0320 08:50:21.440604 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt"
Mar 20 08:50:21.444543 master-0 kubenswrapper[27820]: I0320 08:50:21.444510 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt"
Mar 20 08:50:21.444944 master-0 kubenswrapper[27820]: I0320 08:50:21.444921 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt"
Mar 20 08:50:21.445027 master-0 kubenswrapper[27820]: I0320 08:50:21.444993 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Mar 20 08:50:21.445190 master-0 kubenswrapper[27820]: I0320 08:50:21.445168 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Mar 20 08:50:21.445342 master-0 kubenswrapper[27820]: I0320 08:50:21.445321 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Mar 20 08:50:21.445451 master-0 kubenswrapper[27820]: I0320 08:50:21.445435 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Mar 20 08:50:21.445485 master-0 kubenswrapper[27820]: I0320 08:50:21.445452 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Mar 20 08:50:21.445516 master-0 kubenswrapper[27820]: I0320 08:50:21.445462 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Mar 20 08:50:21.445572 master-0 kubenswrapper[27820]: I0320 08:50:21.445556 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Mar 20 08:50:21.445689 master-0 kubenswrapper[27820]: I0320 08:50:21.445672 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Mar 20 08:50:21.445723 master-0 kubenswrapper[27820]: I0320 08:50:21.445700 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Mar 20 08:50:21.445897 master-0 kubenswrapper[27820]: I0320 08:50:21.445864 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Mar 20 08:50:21.445897 master-0 kubenswrapper[27820]: I0320 08:50:21.445883 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Mar 20 08:50:21.446005 master-0 kubenswrapper[27820]: I0320 08:50:21.445987 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Mar 20 08:50:21.446217 master-0 kubenswrapper[27820]: I0320 08:50:21.446195 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Mar 20 08:50:21.446281 master-0 kubenswrapper[27820]: I0320 08:50:21.446239 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Mar 20 08:50:21.446329 master-0 kubenswrapper[27820]: I0320 08:50:21.446319 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Mar 20 08:50:21.446393 master-0 kubenswrapper[27820]: I0320 08:50:21.446373 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Mar 20 08:50:21.446431 master-0 kubenswrapper[27820]: I0320 08:50:21.446410 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Mar 20 08:50:21.446522 master-0 kubenswrapper[27820]: I0320 08:50:21.446490 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Mar 20 08:50:21.446588 master-0 kubenswrapper[27820]: I0320 08:50:21.446568 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Mar 20 08:50:21.446750 master-0 kubenswrapper[27820]: I0320 08:50:21.446733 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Mar 20 08:50:21.446886 master-0 kubenswrapper[27820]: I0320 08:50:21.446867 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Mar 20 08:50:21.446953 master-0 kubenswrapper[27820]: I0320 08:50:21.446927 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Mar 20 08:50:21.446993 master-0 kubenswrapper[27820]: I0320 08:50:21.446963 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Mar 20 08:50:21.446993 master-0 kubenswrapper[27820]: I0320 08:50:21.446979 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Mar 20 08:50:21.447053 master-0 kubenswrapper[27820]: I0320 08:50:21.447024 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Mar 20 08:50:21.447084 master-0 kubenswrapper[27820]: I0320 08:50:21.447076 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Mar 20 08:50:21.447114 master-0 kubenswrapper[27820]: I0320 08:50:21.447081 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Mar 20 08:50:21.447114 master-0 kubenswrapper[27820]: I0320 08:50:21.447107 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Mar 20 08:50:21.447203 master-0 kubenswrapper[27820]: I0320 08:50:21.447183 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Mar 20 08:50:21.447203 master-0 kubenswrapper[27820]: I0320 08:50:21.447196 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Mar 20 08:50:21.447289 master-0 kubenswrapper[27820]: I0320 08:50:21.447213 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Mar 20 08:50:21.447289 master-0 kubenswrapper[27820]: I0320 08:50:21.447189 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Mar 20 08:50:21.447367 master-0 kubenswrapper[27820]: I0320 08:50:21.447194 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Mar 20 08:50:21.447435 master-0 kubenswrapper[27820]: I0320 08:50:21.447420 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Mar 20 08:50:21.447624 master-0 kubenswrapper[27820]: I0320 08:50:21.447585 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Mar 20 08:50:21.447674 master-0 kubenswrapper[27820]: I0320 08:50:21.447610 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Mar 20 08:50:21.447674 master-0 kubenswrapper[27820]: I0320 08:50:21.447649 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Mar 20 08:50:21.447798 master-0 kubenswrapper[27820]: I0320 08:50:21.447439 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Mar 20 08:50:21.447866 master-0 kubenswrapper[27820]: I0320 08:50:21.447517 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert"
Mar 20 08:50:21.447928 master-0 kubenswrapper[27820]: I0320 08:50:21.447902 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls"
Mar 20 08:50:21.447969 master-0 kubenswrapper[27820]: I0320 08:50:21.447520 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Mar 20 08:50:21.448033 master-0 kubenswrapper[27820]: I0320 08:50:21.447520 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Mar 20 08:50:21.448033 master-0 kubenswrapper[27820]: I0320 08:50:21.447566 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt"
Mar 20 08:50:21.448510 master-0 kubenswrapper[27820]: I0320 08:50:21.448483 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Mar 20 08:50:21.448561 master-0 kubenswrapper[27820]: I0320 08:50:21.448544 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Mar 20 08:50:21.448706 master-0 kubenswrapper[27820]: I0320 08:50:21.448684 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Mar 20 08:50:21.448872 master-0 kubenswrapper[27820]: E0320 08:50:21.448831 27820 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 20 08:50:21.449091 master-0 kubenswrapper[27820]: I0320 08:50:21.449068 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert"
Mar 20 08:50:21.449275 master-0 kubenswrapper[27820]: I0320 08:50:21.449238 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Mar 20 08:50:21.449450 master-0 kubenswrapper[27820]: I0320 08:50:21.449430 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Mar 20 08:50:21.449523 master-0 kubenswrapper[27820]: I0320 08:50:21.449084 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Mar 20 08:50:21.449584 master-0 kubenswrapper[27820]: I0320 08:50:21.449506 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Mar 20 08:50:21.449780 master-0 kubenswrapper[27820]: E0320 08:50:21.449758 27820 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"openshift-kube-scheduler-master-0\" already exists" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 20 08:50:21.450837 master-0 kubenswrapper[27820]: I0320 08:50:21.450807 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Mar 20 08:50:21.452157 master-0 kubenswrapper[27820]: I0320 08:50:21.452126 27820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0"
Mar 20 08:50:21.452379 master-0 kubenswrapper[27820]: I0320 08:50:21.452328 27820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0"
Mar 20 08:50:21.452634 master-0 kubenswrapper[27820]: I0320 08:50:21.452611 27820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0"
Mar 20 08:50:21.452878 master-0 kubenswrapper[27820]: I0320 08:50:21.452848 27820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0"
Mar 20 08:50:21.453380 master-0 kubenswrapper[27820]: I0320 08:50:21.453355 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Mar 20 08:50:21.455427 master-0 kubenswrapper[27820]: I0320 08:50:21.455393 27820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0"
Mar 20 08:50:21.455516 master-0 kubenswrapper[27820]: I0320 08:50:21.455498 27820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-retry-1-master-0"
Mar 20 08:50:21.455786 master-0 kubenswrapper[27820]: I0320 08:50:21.455757 27820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0"
Mar 20 08:50:21.456078 master-0 kubenswrapper[27820]: I0320 08:50:21.456014 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"26923e70-56a5-4020-8b55-510879ec6fd4","Type":"ContainerDied","Data":"c78227a4a3db86dc69334917f189dbfb156f17531ec0c958d73bd5cb930242bc"}
Mar 20 08:50:21.456119 master-0 kubenswrapper[27820]: I0320 08:50:21.456081 27820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c78227a4a3db86dc69334917f189dbfb156f17531ec0c958d73bd5cb930242bc"
Mar 20 08:50:21.456119 master-0 kubenswrapper[27820]: I0320 08:50:21.456106 27820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Mar 20 08:50:21.456178 master-0 kubenswrapper[27820]: I0320 08:50:21.456133 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-b4bf74f6-nnjv9" event={"ID":"06bf3fa7-4a9c-4e7f-aa6b-4d4f614ea047","Type":"ContainerStarted","Data":"6d7e46e102c5d86e3216541277d0f646eb01b68f76beed85bd56c65d91b3c2bc"}
Mar 20 08:50:21.456178 master-0 kubenswrapper[27820]: I0320 08:50:21.456150 27820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Mar 20 08:50:21.456178 master-0 kubenswrapper[27820]: I0320 08:50:21.456165 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-b4bf74f6-nnjv9" event={"ID":"06bf3fa7-4a9c-4e7f-aa6b-4d4f614ea047","Type":"ContainerStarted","Data":"14986cdcb6c65fcca4be3c338e4a013796b08052ed9fdf5beaaa06246a8fc6be"}
Mar 20 08:50:21.456281 master-0 kubenswrapper[27820]: I0320 08:50:21.456177 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-dknxr" event={"ID":"22f85e98-eb36-46b2-ab5d-7c21e060cba5","Type":"ContainerDied","Data":"3aae9bbf36ed2a17618b0c45a78846786d8690aa98bce9f3e1b7b0243ecc47f2"}
Mar 20 08:50:21.456281 master-0 kubenswrapper[27820]: I0320 08:50:21.456198 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-dknxr" event={"ID":"22f85e98-eb36-46b2-ab5d-7c21e060cba5","Type":"ContainerStarted","Data":"bb58137a9975e369b5d22af63557b19c9ebf89c5b57408a1ff77493bf0b71c97"}
Mar 20 08:50:21.456281 master-0 kubenswrapper[27820]: I0320 08:50:21.456209 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-dknxr" event={"ID":"22f85e98-eb36-46b2-ab5d-7c21e060cba5","Type":"ContainerStarted","Data":"e19e3ca7f7f87202999ccf51b5e641a2b701234ac17e2a8733f102ed0960e44b"}
Mar 20 08:50:21.456372 master-0 kubenswrapper[27820]: I0320 08:50:21.456313 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zxgdk" event={"ID":"6d26f719-43b9-4c1c-9a54-ff800177db68","Type":"ContainerStarted","Data":"1016c20b30300a724092253f38d19d884841e5634e7a9695b858976d92da0845"}
Mar 20 08:50:21.456372 master-0 kubenswrapper[27820]: I0320 08:50:21.456339 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zxgdk" event={"ID":"6d26f719-43b9-4c1c-9a54-ff800177db68","Type":"ContainerStarted","Data":"702713f2f96146013bc9672b7b029fe7154bd722d3f9153e565a46fd2b9a50ba"}
Mar 20 08:50:21.456372 master-0 kubenswrapper[27820]: I0320 08:50:21.456356 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-wbfrm" event={"ID":"71ca96e8-5108-455c-bb3c-17977d38e912","Type":"ContainerStarted","Data":"a4506cf0f6e726afbe8cf8c9e90673480cf1d2ed376fa06f37ff1cc988603b59"}
Mar 20 08:50:21.456458 master-0 kubenswrapper[27820]: I0320 08:50:21.456372 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-wbfrm" event={"ID":"71ca96e8-5108-455c-bb3c-17977d38e912","Type":"ContainerDied","Data":"f61b725a79fff556468b0126e41778d167b8a31ec8526a9c664ab434b3c33c45"}
Mar 20 08:50:21.456458 master-0 kubenswrapper[27820]: I0320 08:50:21.456390 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-wbfrm" event={"ID":"71ca96e8-5108-455c-bb3c-17977d38e912","Type":"ContainerStarted","Data":"4767ac5e1fdc3320e004401bc470473fa3834d94268bcd37051a5ed0f54f6980"}
Mar 20 08:50:21.456458 master-0 kubenswrapper[27820]: I0320 08:50:21.456407 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-crrdk" event={"ID":"210dd7f0-d1c0-407a-b89b-f11ef605e5df","Type":"ContainerStarted","Data":"87bb88b58dcaa56043bab79cbae67bed022b306a8dc237363f63444aab0218d1"}
Mar 20 08:50:21.456458 master-0 kubenswrapper[27820]: I0320 08:50:21.456426 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-crrdk" event={"ID":"210dd7f0-d1c0-407a-b89b-f11ef605e5df","Type":"ContainerDied","Data":"80eb123c688aa3fa3410485be400247180c54ec6ea64ffab5e44c11edb58320f"}
Mar 20 08:50:21.456458 master-0 kubenswrapper[27820]: I0320 08:50:21.456440 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-crrdk" event={"ID":"210dd7f0-d1c0-407a-b89b-f11ef605e5df","Type":"ContainerStarted","Data":"a4954a5504413e2099df95d5fe0152972b5d1c0a055f8c70067df9606aba177c"}
Mar 20 08:50:21.456458 master-0 kubenswrapper[27820]: I0320 08:50:21.456452 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-crrdk" event={"ID":"210dd7f0-d1c0-407a-b89b-f11ef605e5df","Type":"ContainerStarted","Data":"2eb15c3da7104afd61e8e0a9cecb48e57f16366430abff29d1fcba72d53fd3a2"}
Mar 20 08:50:21.456682 master-0 kubenswrapper[27820]: I0320 08:50:21.456463 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-qclrg" event={"ID":"d5b579a3-74b5-4cd2-ae99-5c1d1480c2f0","Type":"ContainerStarted","Data":"8d518eacb100580b01b9095670b6acba2810ec52aaec3061b31829e5e84f61cd"}
Mar 20 08:50:21.456682 master-0 kubenswrapper[27820]: I0320 08:50:21.456478 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-qclrg" event={"ID":"d5b579a3-74b5-4cd2-ae99-5c1d1480c2f0","Type":"ContainerStarted","Data":"fef2a33ba0f77ba9f48caf8a72fb3567bbb02f7cff7f70d80acb4acac86e7062"}
Mar 20 08:50:21.456682 master-0 kubenswrapper[27820]: I0320 08:50:21.456485 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca"
Mar 20 08:50:21.456999 master-0 kubenswrapper[27820]: I0320 08:50:21.456491 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-qclrg" event={"ID":"d5b579a3-74b5-4cd2-ae99-5c1d1480c2f0","Type":"ContainerStarted","Data":"75ee30752038facd89d76f05a1b5b8d9abb32492d825eaef487a4eb2de3b955c"}
Mar 20 08:50:21.456999 master-0 kubenswrapper[27820]: I0320 08:50:21.456742 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-qclrg" event={"ID":"d5b579a3-74b5-4cd2-ae99-5c1d1480c2f0","Type":"ContainerStarted","Data":"64ca7ad287a18077a9681b1e546ec20fe155067ef4ae153360b9f6ad5ecbcb02"}
Mar 20 08:50:21.456999 master-0 kubenswrapper[27820]: I0320 08:50:21.456769 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-9c5679d8f-xfns6" event={"ID":"ff2dfe9d-2834-43cb-b093-0831b2b87131","Type":"ContainerStarted","Data":"916ee61bba3dc6046a4302aa344d164e5c62669611430d505dfe331ff6648b85"}
Mar 20 08:50:21.456999 master-0 kubenswrapper[27820]: I0320 08:50:21.456785 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-9c5679d8f-xfns6" event={"ID":"ff2dfe9d-2834-43cb-b093-0831b2b87131","Type":"ContainerStarted","Data":"34116c989e6200289799cf6a068e33d84cdd4a6aebaa76c424e05c0548acfce2"}
Mar 20 08:50:21.456999 master-0 kubenswrapper[27820]: I0320 08:50:21.456802 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-9c5679d8f-xfns6" event={"ID":"ff2dfe9d-2834-43cb-b093-0831b2b87131","Type":"ContainerStarted","Data":"cf10038472bbf516505fe96b60deacd7fa47b423ffbd5ce932f981e42d79741e"}
Mar 20 08:50:21.456999 master-0 kubenswrapper[27820]: I0320 08:50:21.456815 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-55d84d7794-56n4c" event={"ID":"04466971-127b-403e-af45-dad97b6e0c87","Type":"ContainerStarted","Data":"b842607819c12e2c961d4115971433a287618e424b9b5e836fdeed85d90e9244"}
Mar 20 08:50:21.456999 master-0 kubenswrapper[27820]: I0320 08:50:21.456825 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-55d84d7794-56n4c" event={"ID":"04466971-127b-403e-af45-dad97b6e0c87","Type":"ContainerStarted","Data":"46c99f0233d1af208b38b52f2ff5b680b12b4851bb3db1577a37ab4de1879e97"}
Mar 20 08:50:21.456999 master-0 kubenswrapper[27820]: I0320 08:50:21.456835 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-6f5545c99f-6sl9d" event={"ID":"0cb6d987-4b59-4fd9-889a-3250c12a726c","Type":"ContainerStarted","Data":"3da2656475f5983818e1475566996590539eef1a03ecaa67e3e41912939fad03"}
Mar 20 08:50:21.456999 master-0 kubenswrapper[27820]: I0320
08:50:21.456854 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-6f5545c99f-6sl9d" event={"ID":"0cb6d987-4b59-4fd9-889a-3250c12a726c","Type":"ContainerStarted","Data":"c36c31fbbcf87c5d54cc8e014278bdb215440e9d5e4a9526984baeadd5fbfa6f"} Mar 20 08:50:21.456999 master-0 kubenswrapper[27820]: I0320 08:50:21.456866 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-retry-1-master-0" event={"ID":"521086da-d513-4475-8db5-098ab9838df1","Type":"ContainerDied","Data":"35c674a122271104b677e9d9fd6224e868e82108125b554a6b281e82916a6b0b"} Mar 20 08:50:21.456999 master-0 kubenswrapper[27820]: I0320 08:50:21.456878 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-retry-1-master-0" event={"ID":"521086da-d513-4475-8db5-098ab9838df1","Type":"ContainerDied","Data":"dafa7bfa1891cfd7726eb94b085308d784cb5068654283dc7ca015d37e624b07"} Mar 20 08:50:21.456999 master-0 kubenswrapper[27820]: I0320 08:50:21.456890 27820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dafa7bfa1891cfd7726eb94b085308d784cb5068654283dc7ca015d37e624b07" Mar 20 08:50:21.456999 master-0 kubenswrapper[27820]: I0320 08:50:21.456900 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bt7wn" event={"ID":"9635cdae-0983-4c97-b3ed-dc7a785b1bb6","Type":"ContainerStarted","Data":"ef990221492ae46a8cd1a26b64819364b3aa46187fde095a3bf3a78349aaa22f"} Mar 20 08:50:21.456999 master-0 kubenswrapper[27820]: I0320 08:50:21.456912 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bt7wn" event={"ID":"9635cdae-0983-4c97-b3ed-dc7a785b1bb6","Type":"ContainerDied","Data":"c390ada5286d8adbcd2f8c4da2b3fb1c764bd2a56eb30ce5a1fc2fc1a428f30e"} Mar 20 08:50:21.456999 master-0 kubenswrapper[27820]: I0320 08:50:21.456925 27820 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-operators-bt7wn" event={"ID":"9635cdae-0983-4c97-b3ed-dc7a785b1bb6","Type":"ContainerDied","Data":"c9a695d4652da7db7f3ebcef0da143cf28a9dbbbb25aee4013a1e44bb00f1e39"} Mar 20 08:50:21.456999 master-0 kubenswrapper[27820]: I0320 08:50:21.456937 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bt7wn" event={"ID":"9635cdae-0983-4c97-b3ed-dc7a785b1bb6","Type":"ContainerStarted","Data":"6b78ee1b02c98b4ad9c3b944fdd43e9881371557e0d7b10564d5be8bd02396af"} Mar 20 08:50:21.456999 master-0 kubenswrapper[27820]: I0320 08:50:21.456956 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-626qm" event={"ID":"2d125bc5-08ce-434a-bde7-0ba8fc0169ea","Type":"ContainerStarted","Data":"38ff2aa460824904a5715b2a8594c19ce1e116c5bdd552d7c90a8ae16b6aad9d"} Mar 20 08:50:21.456999 master-0 kubenswrapper[27820]: I0320 08:50:21.456970 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-626qm" event={"ID":"2d125bc5-08ce-434a-bde7-0ba8fc0169ea","Type":"ContainerDied","Data":"7c71ba6860012685e763d6be0a28f9f4eedf51541e431293b43883fadda65c94"} Mar 20 08:50:21.456999 master-0 kubenswrapper[27820]: I0320 08:50:21.456980 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-626qm" event={"ID":"2d125bc5-08ce-434a-bde7-0ba8fc0169ea","Type":"ContainerStarted","Data":"e1023ad8b9dfcd1efdaef7585b1ccb0926083452bae0127b8861f9fbe05f41e3"} Mar 20 08:50:21.456999 master-0 kubenswrapper[27820]: I0320 08:50:21.456990 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-626qm" event={"ID":"2d125bc5-08ce-434a-bde7-0ba8fc0169ea","Type":"ContainerStarted","Data":"5a96373b7ec998e4c12966e11a5d5e48263b669f4268036f6aff8f1f1199dfa5"} Mar 20 08:50:21.456999 
master-0 kubenswrapper[27820]: I0320 08:50:21.457000 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8413125cf444e5c95f023c5dd9c6151e","Type":"ContainerStarted","Data":"294d5c130b0b65fdd6ffb533a9f65b52d295ccc3a6eab6b7ca1618e56519b844"} Mar 20 08:50:21.456999 master-0 kubenswrapper[27820]: I0320 08:50:21.457012 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8413125cf444e5c95f023c5dd9c6151e","Type":"ContainerStarted","Data":"801adefeaa867d4ddecc5aa6ca06902111266589a16c1a6d41af9de695634c0f"} Mar 20 08:50:21.456999 master-0 kubenswrapper[27820]: I0320 08:50:21.457022 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8413125cf444e5c95f023c5dd9c6151e","Type":"ContainerStarted","Data":"a041326266cad8376feb00367e376ef0928972722fd2a38761524556e9a05575"} Mar 20 08:50:21.456999 master-0 kubenswrapper[27820]: I0320 08:50:21.457032 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8413125cf444e5c95f023c5dd9c6151e","Type":"ContainerDied","Data":"977167918f7e6bd33389cf095bf0a1f6441c8367a8bb9ad4ad8439f4003209b0"} Mar 20 08:50:21.456999 master-0 kubenswrapper[27820]: I0320 08:50:21.457045 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"8413125cf444e5c95f023c5dd9c6151e","Type":"ContainerStarted","Data":"9d5e5cd531f78ff97bd0331258baf0fd5a066b5864af8128f7ac14fe1eeaebc5"} Mar 20 08:50:21.458142 master-0 kubenswrapper[27820]: I0320 08:50:21.457056 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bc85986b9-8p79x" 
event={"ID":"41ac891d-b41d-43c4-be46-35f39671477a","Type":"ContainerStarted","Data":"9bbc62f41eb9cddabece6ee46b25a672dc565f68843a8cfb4ee6a9d70bc8ddf1"} Mar 20 08:50:21.458142 master-0 kubenswrapper[27820]: I0320 08:50:21.457067 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bc85986b9-8p79x" event={"ID":"41ac891d-b41d-43c4-be46-35f39671477a","Type":"ContainerStarted","Data":"b7b1e72d13c6e7c1a14867c5547562b82b9b40ac636f0328d795dcff8a14b2b8"} Mar 20 08:50:21.458142 master-0 kubenswrapper[27820]: I0320 08:50:21.457078 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-clrp2" event={"ID":"4f6c819a-5074-4d29-84c8-e187528ad757","Type":"ContainerStarted","Data":"a5afc30410e3001b1b13acca3a83e98ed554d83f6974859bb66e210875cbc977"} Mar 20 08:50:21.458142 master-0 kubenswrapper[27820]: I0320 08:50:21.457089 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-clrp2" event={"ID":"4f6c819a-5074-4d29-84c8-e187528ad757","Type":"ContainerDied","Data":"cf84a262e3cc737c426a3ee34816aa6cd8e8defa929f970e838849ec973bd55a"} Mar 20 08:50:21.458142 master-0 kubenswrapper[27820]: I0320 08:50:21.457101 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-clrp2" event={"ID":"4f6c819a-5074-4d29-84c8-e187528ad757","Type":"ContainerDied","Data":"b058c3dbb12dfe93f678a1cd234084a98f5f906462ebd3bf89f71382d647769f"} Mar 20 08:50:21.458142 master-0 kubenswrapper[27820]: I0320 08:50:21.457111 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-clrp2" event={"ID":"4f6c819a-5074-4d29-84c8-e187528ad757","Type":"ContainerStarted","Data":"91fac3ac168ae944870f9f36626feeac950c7dd66eb021a2c366427ace9d7f09"} Mar 20 08:50:21.458142 master-0 kubenswrapper[27820]: I0320 08:50:21.457128 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-storage-version-migrator/migrator-8487694857-ltk2p" event={"ID":"890a6c24-1dbb-4331-952b-5712ac00788e","Type":"ContainerStarted","Data":"056a248e264cdde362e6f3914beaa4b2d0c7a756342a561e27e19d7c2d2f2578"} Mar 20 08:50:21.458142 master-0 kubenswrapper[27820]: I0320 08:50:21.457141 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-8487694857-ltk2p" event={"ID":"890a6c24-1dbb-4331-952b-5712ac00788e","Type":"ContainerStarted","Data":"a18a8566386d7a1543a333d653f930cacf853d0d35feb9b3f545b9c786a7f62d"} Mar 20 08:50:21.458142 master-0 kubenswrapper[27820]: I0320 08:50:21.457153 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-8487694857-ltk2p" event={"ID":"890a6c24-1dbb-4331-952b-5712ac00788e","Type":"ContainerStarted","Data":"3e688aec660d80e985fc8687f7a00a0c0c268a922d791a77e1fea2fefa9b1c28"} Mar 20 08:50:21.458142 master-0 kubenswrapper[27820]: I0320 08:50:21.457164 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-6c8df6d4b-2lwqr" event={"ID":"0ad95adc-2e0f-4e95-94e7-66e6d240a930","Type":"ContainerStarted","Data":"bf10888ccde1979b427a9f3adbf9a108bfcc6b88d387b1a05a20f1ae280a50fd"} Mar 20 08:50:21.458142 master-0 kubenswrapper[27820]: I0320 08:50:21.457177 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-6c8df6d4b-2lwqr" event={"ID":"0ad95adc-2e0f-4e95-94e7-66e6d240a930","Type":"ContainerStarted","Data":"56997bf494f7ffbedb66bcbf6610659e36f8f3fa9ec2d8530300e2d0acb9f78b"} Mar 20 08:50:21.458142 master-0 kubenswrapper[27820]: I0320 08:50:21.457188 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-6c8df6d4b-2lwqr" event={"ID":"0ad95adc-2e0f-4e95-94e7-66e6d240a930","Type":"ContainerStarted","Data":"0b7224d61042a39a60c82074ae340c4880414bef01c57e7834a8075a7d391421"} Mar 20 08:50:21.458142 
master-0 kubenswrapper[27820]: I0320 08:50:21.457203 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-6mrwl" event={"ID":"581a8be2-d16c-4fd8-b051-214bd60a2a91","Type":"ContainerStarted","Data":"f6be40a3faedb0919061fcd476f3dc16b4c5b58871784ce038ebfa438e16e89b"} Mar 20 08:50:21.458142 master-0 kubenswrapper[27820]: I0320 08:50:21.457214 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-6mrwl" event={"ID":"581a8be2-d16c-4fd8-b051-214bd60a2a91","Type":"ContainerStarted","Data":"cbf54fbc4b42acb493a316042d264353dbe32db5138e1dcba4a3aa56fcc561e7"} Mar 20 08:50:21.458142 master-0 kubenswrapper[27820]: I0320 08:50:21.457225 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-6mrwl" event={"ID":"581a8be2-d16c-4fd8-b051-214bd60a2a91","Type":"ContainerStarted","Data":"bce60995e913b204c4470a4a4b36d406c096a66e95b110179e1a1c0fbcc39e0a"} Mar 20 08:50:21.458142 master-0 kubenswrapper[27820]: I0320 08:50:21.457236 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-cgc9q" event={"ID":"0e79950f-50a5-46ec-b836-7a35dcce2851","Type":"ContainerStarted","Data":"b497bdf3019e13a087cc9efd50638831fd098ff627001f7158b4b8c8dfb030f6"} Mar 20 08:50:21.458142 master-0 kubenswrapper[27820]: I0320 08:50:21.457248 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-cgc9q" event={"ID":"0e79950f-50a5-46ec-b836-7a35dcce2851","Type":"ContainerStarted","Data":"8c017564ccaf2dfeb210955c0086b315e3a3b5eaf1252770e0ae8f1b0562ed57"} Mar 20 08:50:21.458142 master-0 kubenswrapper[27820]: I0320 08:50:21.456923 27820 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"trusted-ca-bundle" Mar 20 08:50:21.458142 master-0 kubenswrapper[27820]: I0320 08:50:21.457275 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-cgc9q" event={"ID":"0e79950f-50a5-46ec-b836-7a35dcce2851","Type":"ContainerStarted","Data":"f8f3e1fa6ad1dbd5474f44502cbcf37e1e64719e20d78c379498d77edb6fab10"} Mar 20 08:50:21.458142 master-0 kubenswrapper[27820]: I0320 08:50:21.457596 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-58c9f8fc64-kr9hd" event={"ID":"a88b1c81-02b5-4c85-9660-5f84c900a946","Type":"ContainerStarted","Data":"35e3b72d6f100be23bd1145e2e51a31d89227d5b6bd38863566a6aae0e8bb2e8"} Mar 20 08:50:21.458142 master-0 kubenswrapper[27820]: I0320 08:50:21.457607 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-58c9f8fc64-kr9hd" event={"ID":"a88b1c81-02b5-4c85-9660-5f84c900a946","Type":"ContainerStarted","Data":"cc3aaa60f67e217ef3d18081141f0651595ce0154087c67a825630ce7bdd66f3"} Mar 20 08:50:21.458142 master-0 kubenswrapper[27820]: I0320 08:50:21.457621 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-58c9f8fc64-kr9hd" event={"ID":"a88b1c81-02b5-4c85-9660-5f84c900a946","Type":"ContainerStarted","Data":"08f22a0ccc0a77a9d6926aff6fb98f22a2c178ca54d526014d0e05d9f976123d"} Mar 20 08:50:21.458142 master-0 kubenswrapper[27820]: I0320 08:50:21.457634 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-gng67" event={"ID":"a2a3df6e-e327-4e97-b8f0-f2d6cdd1e5f9","Type":"ContainerStarted","Data":"012a5b768c0ee0c2fea0a0efdf9099347e45d4700bf345a081f7cefcb6ff719b"} Mar 20 08:50:21.458142 master-0 kubenswrapper[27820]: I0320 08:50:21.457645 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-gng67" event={"ID":"a2a3df6e-e327-4e97-b8f0-f2d6cdd1e5f9","Type":"ContainerDied","Data":"43bec40b593829fc4ae8b2676c3d74b6d0bc176c4e642877e74797d8bc72bb1e"} Mar 20 08:50:21.458142 master-0 kubenswrapper[27820]: I0320 08:50:21.457659 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-gng67" event={"ID":"a2a3df6e-e327-4e97-b8f0-f2d6cdd1e5f9","Type":"ContainerStarted","Data":"ebb4000c1fd7b5e5958a1f721b8b2c7b7ad72ac397418d062d7c94f2eacacc8d"} Mar 20 08:50:21.458142 master-0 kubenswrapper[27820]: I0320 08:50:21.457669 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" event={"ID":"d26a4fce-8eed-44d0-96a3-40ffd0b336a6","Type":"ContainerStarted","Data":"cc2e4ad2be0fc6a3ab9cf5e8dc60f935e01ae59dcef65e15f3ad03bac2eff189"} Mar 20 08:50:21.458142 master-0 kubenswrapper[27820]: I0320 08:50:21.457680 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" event={"ID":"d26a4fce-8eed-44d0-96a3-40ffd0b336a6","Type":"ContainerStarted","Data":"5532c9fde716f197f1ceb62814f4dd124f4c022d390fb2e9bb6de856aae50715"} Mar 20 08:50:21.458142 master-0 kubenswrapper[27820]: I0320 08:50:21.457690 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" event={"ID":"d26a4fce-8eed-44d0-96a3-40ffd0b336a6","Type":"ContainerStarted","Data":"1ce54a5590826875560432ea744f2460f27a35494ad527707d35fa0bc9c9518f"} Mar 20 08:50:21.458142 master-0 kubenswrapper[27820]: I0320 08:50:21.457704 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" event={"ID":"d26a4fce-8eed-44d0-96a3-40ffd0b336a6","Type":"ContainerStarted","Data":"71492ac2213cc400e251902d25ef6b6543e9174f35a2747f77655dffa54c98ae"} Mar 20 08:50:21.458142 master-0 kubenswrapper[27820]: I0320 
08:50:21.457714 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" event={"ID":"d26a4fce-8eed-44d0-96a3-40ffd0b336a6","Type":"ContainerStarted","Data":"b8db8080b200536a9377078616125baf2af90c4794ebd829d7c5733866acceb3"} Mar 20 08:50:21.458142 master-0 kubenswrapper[27820]: I0320 08:50:21.457724 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" event={"ID":"d26a4fce-8eed-44d0-96a3-40ffd0b336a6","Type":"ContainerStarted","Data":"ffad94ed7dd07d28c05d487c0a64bf6261be7c124b5aa2806f67a670c439c855"} Mar 20 08:50:21.458142 master-0 kubenswrapper[27820]: I0320 08:50:21.457732 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" event={"ID":"d26a4fce-8eed-44d0-96a3-40ffd0b336a6","Type":"ContainerStarted","Data":"cbb0777d86fe8aef7b45d0b9716a093118e993114d1cf5dd7c366faf98e23cf2"} Mar 20 08:50:21.458142 master-0 kubenswrapper[27820]: I0320 08:50:21.457742 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" event={"ID":"d26a4fce-8eed-44d0-96a3-40ffd0b336a6","Type":"ContainerStarted","Data":"8126274bfe0fc18cfc9cd1bb527f1c5098c2b15352a76b2b0bde84131edc6361"} Mar 20 08:50:21.458142 master-0 kubenswrapper[27820]: I0320 08:50:21.457753 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" event={"ID":"d26a4fce-8eed-44d0-96a3-40ffd0b336a6","Type":"ContainerDied","Data":"c61822f24caad65a896a136b258da1c07b65503ea37e7992a32f53bc007f40ea"} Mar 20 08:50:21.458142 master-0 kubenswrapper[27820]: I0320 08:50:21.457765 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" event={"ID":"d26a4fce-8eed-44d0-96a3-40ffd0b336a6","Type":"ContainerStarted","Data":"45bf1a9ecebdad7d1d939a42ed79f1d565faa93da259016b6c3e11a9010e1c03"} Mar 20 08:50:21.458142 master-0 kubenswrapper[27820]: 
I0320 08:50:21.457777 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-b25f2" event={"ID":"f202273a-b111-46ce-b404-7e481d2c7ff9","Type":"ContainerStarted","Data":"6abdfc219807d34bb658ce6361b01fcfed8f3da0de196ca2575990ea57791b92"} Mar 20 08:50:21.458142 master-0 kubenswrapper[27820]: I0320 08:50:21.457790 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-b25f2" event={"ID":"f202273a-b111-46ce-b404-7e481d2c7ff9","Type":"ContainerDied","Data":"052c5ab7353e85c711ba5bfca92fff712af9b1bed63f53526dee82d528399bb3"} Mar 20 08:50:21.458142 master-0 kubenswrapper[27820]: I0320 08:50:21.457806 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-b25f2" event={"ID":"f202273a-b111-46ce-b404-7e481d2c7ff9","Type":"ContainerStarted","Data":"8da49bc1918ac91b4f777d9bf67f42c03551f09c69724ee02ff7ff48ea061fb1"} Mar 20 08:50:21.458142 master-0 kubenswrapper[27820]: I0320 08:50:21.457816 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-b25f2" event={"ID":"f202273a-b111-46ce-b404-7e481d2c7ff9","Type":"ContainerStarted","Data":"60a1d6091214ba0d82b66a2af63314a1cb99c1cda6a15d65a6539891ce5e3510"} Mar 20 08:50:21.458142 master-0 kubenswrapper[27820]: I0320 08:50:21.457826 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-kxxrs" event={"ID":"20ff930f-ec0d-40ed-a879-1546691f685d","Type":"ContainerStarted","Data":"cf612c2c87dec99e0f687ab2c295fa164bf4e78d5be8c83e36f46fc0677adeb9"} Mar 20 08:50:21.458142 master-0 kubenswrapper[27820]: I0320 08:50:21.457836 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-kxxrs" event={"ID":"20ff930f-ec0d-40ed-a879-1546691f685d","Type":"ContainerDied","Data":"d20a1459d97b6e06a1f2acdb938648d68b1fc12871ed4ca115c971b404c404f0"} Mar 20 08:50:21.458142 master-0 kubenswrapper[27820]: I0320 08:50:21.457847 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-kxxrs" event={"ID":"20ff930f-ec0d-40ed-a879-1546691f685d","Type":"ContainerStarted","Data":"c1a1f09a0076728a7605f14aa2f5e1e4e67f07959fea6d30401da7eae836cc1d"} Mar 20 08:50:21.458142 master-0 kubenswrapper[27820]: I0320 08:50:21.457857 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxv4d" event={"ID":"0f725c4a-234c-44e9-95f2-73f31d2b0fd3","Type":"ContainerStarted","Data":"26055e3a0db35a38b3a239692e9a4981d421d70eb75773637c0ded0f0062866a"} Mar 20 08:50:21.458142 master-0 kubenswrapper[27820]: I0320 08:50:21.457870 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxv4d" event={"ID":"0f725c4a-234c-44e9-95f2-73f31d2b0fd3","Type":"ContainerStarted","Data":"3be597dd6af294be5c2ca7f07c208566e2ea40f0cb53618a5e8df432f0f812a2"} Mar 20 08:50:21.458142 master-0 kubenswrapper[27820]: I0320 08:50:21.457879 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-lxv4d" event={"ID":"0f725c4a-234c-44e9-95f2-73f31d2b0fd3","Type":"ContainerStarted","Data":"7ed933ad5ab2402e750d28bcdcc40b75fc2d12d35fd030d2dca7b16f6da20585"} Mar 20 08:50:21.458142 master-0 kubenswrapper[27820]: I0320 08:50:21.457890 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-vmwqt" 
event={"ID":"65157a9b-3df7-4cc1-a85a-a5dfa59921ad","Type":"ContainerStarted","Data":"faf09e106d65c4571e61ba7edd1e3e65e2581a35b5e358479da5c8fdd5be26ac"} Mar 20 08:50:21.458142 master-0 kubenswrapper[27820]: I0320 08:50:21.457905 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-vmwqt" event={"ID":"65157a9b-3df7-4cc1-a85a-a5dfa59921ad","Type":"ContainerDied","Data":"59a8653cd7835805f3353ca3030def7794cc3d5df739fff211964fc11ce38845"} Mar 20 08:50:21.458142 master-0 kubenswrapper[27820]: I0320 08:50:21.457916 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-vmwqt" event={"ID":"65157a9b-3df7-4cc1-a85a-a5dfa59921ad","Type":"ContainerStarted","Data":"c29f56d4ea9bf3bce066e5fba5216f6d81c3f45eb82e43475a2e438e6dc2d99e"} Mar 20 08:50:21.458142 master-0 kubenswrapper[27820]: I0320 08:50:21.457942 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-j7ngf" event={"ID":"1ae4d0d7-67e6-4e0c-9265-8e48ac2d4cbf","Type":"ContainerStarted","Data":"8411d2dd0c86e582653139cf6127982eee541685df953056b9260c08a6ac30e6"} Mar 20 08:50:21.458142 master-0 kubenswrapper[27820]: I0320 08:50:21.457954 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-j7ngf" event={"ID":"1ae4d0d7-67e6-4e0c-9265-8e48ac2d4cbf","Type":"ContainerStarted","Data":"7b1754d73309bcf271978edbc6de885b4c5c9259799d13505300a0b3d8fb40d5"} Mar 20 08:50:21.458142 master-0 kubenswrapper[27820]: I0320 08:50:21.457964 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-89ccd998f-j84r8" event={"ID":"23003a2f-2053-47cc-8133-23eb886d4da0","Type":"ContainerStarted","Data":"d5a6da92a647ffc6da1361e5e0378499aaed4a31f29ea5931a4731314e925480"} Mar 20 08:50:21.458142 master-0 kubenswrapper[27820]: I0320 08:50:21.457976 27820 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-89ccd998f-j84r8" event={"ID":"23003a2f-2053-47cc-8133-23eb886d4da0","Type":"ContainerDied","Data":"a76cae891ac1ac170b3b4bc00acda8e3f7397c5dae09b35ed265abb8477e72cb"} Mar 20 08:50:21.458142 master-0 kubenswrapper[27820]: I0320 08:50:21.457989 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-89ccd998f-j84r8" event={"ID":"23003a2f-2053-47cc-8133-23eb886d4da0","Type":"ContainerStarted","Data":"4aa19d8b0c30c05ccc496b8ab2be76d947099982ef9343125f8b5117bc386c97"} Mar 20 08:50:21.458142 master-0 kubenswrapper[27820]: I0320 08:50:21.458001 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"cce21ae1-63de-49be-a027-084a101e650b","Type":"ContainerDied","Data":"08b76c47992e775acd809c6af275e2c7e9a0096419764ac5862de8d43565af46"} Mar 20 08:50:21.458142 master-0 kubenswrapper[27820]: I0320 08:50:21.458013 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"cce21ae1-63de-49be-a027-084a101e650b","Type":"ContainerDied","Data":"ca41c67c83bd762137f7fd4b62a8f992e4f4eaa7271546ffae17c37b0db5004e"} Mar 20 08:50:21.458142 master-0 kubenswrapper[27820]: I0320 08:50:21.458023 27820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca41c67c83bd762137f7fd4b62a8f992e4f4eaa7271546ffae17c37b0db5004e" Mar 20 08:50:21.458142 master-0 kubenswrapper[27820]: I0320 08:50:21.458033 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"fae0c983-2cb4-4749-97ff-a718a9fb6563","Type":"ContainerDied","Data":"8db9b6351ac69b67c8e87136c1df3fa9a0513a97038d7ea0f58a226f57e933df"} Mar 20 08:50:21.458142 master-0 kubenswrapper[27820]: I0320 08:50:21.458043 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"fae0c983-2cb4-4749-97ff-a718a9fb6563","Type":"ContainerDied","Data":"4a544ba88b612fcc7b9a0c05b171f124d77f9977d6164c6ef4949c3839565381"} Mar 20 08:50:21.458142 master-0 kubenswrapper[27820]: I0320 08:50:21.458053 27820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a544ba88b612fcc7b9a0c05b171f124d77f9977d6164c6ef4949c3839565381" Mar 20 08:50:21.458142 master-0 kubenswrapper[27820]: I0320 08:50:21.458062 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"84b1b51a-cbfa-42de-9fb8-315e9cb76b58","Type":"ContainerDied","Data":"9195f1dfc14cd53890895128ba6b2082162a13670d2ec403d7a28c0918592666"} Mar 20 08:50:21.458142 master-0 kubenswrapper[27820]: I0320 08:50:21.458072 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"84b1b51a-cbfa-42de-9fb8-315e9cb76b58","Type":"ContainerDied","Data":"84e96bf2ec3bb1718be1185663e6c7f2bf6b412dc2a929eaafb13184c995f8ec"} Mar 20 08:50:21.458142 master-0 kubenswrapper[27820]: I0320 08:50:21.458081 27820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84e96bf2ec3bb1718be1185663e6c7f2bf6b412dc2a929eaafb13184c995f8ec" Mar 20 08:50:21.458142 master-0 kubenswrapper[27820]: I0320 08:50:21.458090 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-rnnfz" event={"ID":"bb7b640f-22be-41a9-8ab2-e7ae817e2eb0","Type":"ContainerStarted","Data":"f0130280798962f1e4594514991e3d14785663897b5381945aa07cd2f793d6cf"} Mar 20 08:50:21.458142 master-0 kubenswrapper[27820]: I0320 08:50:21.458100 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-rnnfz" 
event={"ID":"bb7b640f-22be-41a9-8ab2-e7ae817e2eb0","Type":"ContainerDied","Data":"fdbecd46c29424d901b7160c849f7507fe2bca8ead0c17c0f2a34bfa2349bd5b"} Mar 20 08:50:21.458142 master-0 kubenswrapper[27820]: I0320 08:50:21.458112 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-rnnfz" event={"ID":"bb7b640f-22be-41a9-8ab2-e7ae817e2eb0","Type":"ContainerStarted","Data":"bcc2923b1a498cf503f717e7c6dfa4d93b5d5620211265110c6112b306cbe70c"} Mar 20 08:50:21.458142 master-0 kubenswrapper[27820]: I0320 08:50:21.458123 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-rnnfz" event={"ID":"bb7b640f-22be-41a9-8ab2-e7ae817e2eb0","Type":"ContainerStarted","Data":"88728a20ccc0653acaf97665b53dae69b14ad65649feac36dc7ea652a98e2296"} Mar 20 08:50:21.458142 master-0 kubenswrapper[27820]: I0320 08:50:21.458134 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-64b65cddf5-gx7h7" event={"ID":"ca56e37d-80ea-432b-a6d9-f4e904a40e10","Type":"ContainerStarted","Data":"7392c45b64b53a9362843cd0cf092dd845b1e52691896714689fd92f01fce88d"} Mar 20 08:50:21.458142 master-0 kubenswrapper[27820]: I0320 08:50:21.458145 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-64b65cddf5-gx7h7" event={"ID":"ca56e37d-80ea-432b-a6d9-f4e904a40e10","Type":"ContainerStarted","Data":"2797028adeb7afd0ff2813fc2ea1cae0a2f80e41616388fb1a6cfacf98dcfbac"} Mar 20 08:50:21.458142 master-0 kubenswrapper[27820]: I0320 08:50:21.458156 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-64b65cddf5-gx7h7" event={"ID":"ca56e37d-80ea-432b-a6d9-f4e904a40e10","Type":"ContainerDied","Data":"3d7b06fc76103946132a85d04845bb83f54fb34b66bfd2a1c6aa9a2bee7fdecc"} Mar 20 08:50:21.458142 master-0 kubenswrapper[27820]: I0320 08:50:21.458169 27820 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-64b65cddf5-gx7h7" event={"ID":"ca56e37d-80ea-432b-a6d9-f4e904a40e10","Type":"ContainerStarted","Data":"39979795a082384fa347e48c6bcdc4249850e6dc951d407d07457e2b43d36f11"} Mar 20 08:50:21.458142 master-0 kubenswrapper[27820]: I0320 08:50:21.458179 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-zgm52" event={"ID":"97ad1db7-0bf9-4faf-9fa5-0f3df7dab777","Type":"ContainerStarted","Data":"01e8cf94507b9386c5036a989e6960cf6155ad61352527634f11a8530a65c542"} Mar 20 08:50:21.458142 master-0 kubenswrapper[27820]: I0320 08:50:21.458190 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-zgm52" event={"ID":"97ad1db7-0bf9-4faf-9fa5-0f3df7dab777","Type":"ContainerStarted","Data":"12a5bcfb40c6199c579bd08c62b8bf6bb5bfdc6c365125f24ebc7113f94fcd35"} Mar 20 08:50:21.458142 master-0 kubenswrapper[27820]: I0320 08:50:21.458203 27820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="314750eb53635940d2e5e7382cfd93fd0e5f6effe69fa93e88c8c6eaa8362332" Mar 20 08:50:21.458142 master-0 kubenswrapper[27820]: I0320 08:50:21.458215 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-vhrdf" event={"ID":"74bebf0b-6727-4959-8239-a9389e630524","Type":"ContainerDied","Data":"c75547816c7beb0588174159cdcc45e5aaa905924c1e2a6b0d4ab73f71bb71c9"} Mar 20 08:50:21.458142 master-0 kubenswrapper[27820]: I0320 08:50:21.458228 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-vhrdf" event={"ID":"74bebf0b-6727-4959-8239-a9389e630524","Type":"ContainerStarted","Data":"46a769eaa885d6f2aee7986a052f5cb914f5503a0051214e8b4e113fe0f1651a"} Mar 20 08:50:21.458142 master-0 kubenswrapper[27820]: I0320 08:50:21.458242 27820 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-vhrdf" event={"ID":"74bebf0b-6727-4959-8239-a9389e630524","Type":"ContainerStarted","Data":"b1a6bfe0069db4370471806f444b8cbb38ac33f0aab60a3239aafba8901aaf7e"} Mar 20 08:50:21.458142 master-0 kubenswrapper[27820]: I0320 08:50:21.458280 27820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89c74c8aa017803f478ccd8093ddb6ce42a0913682f0794b7a17848c918f0bd0" Mar 20 08:50:21.458142 master-0 kubenswrapper[27820]: I0320 08:50:21.458291 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-9fl6v" event={"ID":"f6a6e991-c861-48f5-bfde-78762a037343","Type":"ContainerStarted","Data":"158152aec5255c0c0f30836ac85f1459094c2aa62d522d1d07878c2186af6949"} Mar 20 08:50:21.458142 master-0 kubenswrapper[27820]: I0320 08:50:21.458302 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-9fl6v" event={"ID":"f6a6e991-c861-48f5-bfde-78762a037343","Type":"ContainerStarted","Data":"b6679dee4c7242c42fbbf5bfbe50ea41b4c18c644485784e958d4094ec76c7b6"} Mar 20 08:50:21.458142 master-0 kubenswrapper[27820]: I0320 08:50:21.458313 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-9fl6v" event={"ID":"f6a6e991-c861-48f5-bfde-78762a037343","Type":"ContainerDied","Data":"ae2ac69f50f92b147c6e0e54b3efd3a8fd07b958b39ed5539b1432cc17005897"} Mar 20 08:50:21.458142 master-0 kubenswrapper[27820]: I0320 08:50:21.458327 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-9fl6v" event={"ID":"f6a6e991-c861-48f5-bfde-78762a037343","Type":"ContainerStarted","Data":"54f91a8b386ea81f3c1ff44f7cbcccad1987fab184d5bfad4c46374f7827fa5c"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.458338 27820 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-26fw9" event={"ID":"56970553-2ac8-4cb5-a12a-b7c1e777c587","Type":"ContainerStarted","Data":"aefb7263e405401dfc5bdd3b4b914906cd92422736de11f54dfdd5ed1b7c6555"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.458358 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-26fw9" event={"ID":"56970553-2ac8-4cb5-a12a-b7c1e777c587","Type":"ContainerStarted","Data":"3de8fd7dba2a402c88acfef1b2fb538d0415318dc0e8061e2e031c469a39d9cd"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.458368 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-26fw9" event={"ID":"56970553-2ac8-4cb5-a12a-b7c1e777c587","Type":"ContainerStarted","Data":"8c5a039db74fb9e788a5aa01defc8a1f9fd1088c2644177e24de4994f3a27cd3"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.458378 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hj5tl" event={"ID":"64d09f81-5fb6-462a-a736-5649779a6b1a","Type":"ContainerStarted","Data":"4a40e6321221f3cde043850333e4ac8f894dd11fc7405d427009dd08b18900f6"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.458389 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hj5tl" event={"ID":"64d09f81-5fb6-462a-a736-5649779a6b1a","Type":"ContainerDied","Data":"1e83bbe7ff1cdd771e7b861105c79c9f038ba7c1e62e6423e1143134dfc130c3"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.458399 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hj5tl" 
event={"ID":"64d09f81-5fb6-462a-a736-5649779a6b1a","Type":"ContainerDied","Data":"91b43fcbdd5ca279c1c93dfa907a3ddb56ecb16c22e3a3346f458ab45ff2c368"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.458408 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hj5tl" event={"ID":"64d09f81-5fb6-462a-a736-5649779a6b1a","Type":"ContainerStarted","Data":"d80ff220fd3e8f28273c0ca55518a106e85c715a741683e145d0d50f2a0d250e"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.458419 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"5cdd5ac8-4c2e-4680-b697-0e5d94136fe4","Type":"ContainerDied","Data":"6431ba0942f1d93ec67e79edabc01c308dcb065395ccf7185622d3bd7f0075b2"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.458431 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"5cdd5ac8-4c2e-4680-b697-0e5d94136fe4","Type":"ContainerDied","Data":"4b7c354bc63790dd2c841c517050b35e106b034733574a9ee401496eb49f2861"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.458439 27820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b7c354bc63790dd2c841c517050b35e106b034733574a9ee401496eb49f2861" Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.458447 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-gskz6" event={"ID":"41253bde-5d09-4ff0-8e7c-4a21fe2b7106","Type":"ContainerStarted","Data":"d9e3175f5786280bcf8e7ae3ed2dc3e7aba803ae5eb4d96e967e9d31611a12c9"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.458456 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-gskz6" 
event={"ID":"41253bde-5d09-4ff0-8e7c-4a21fe2b7106","Type":"ContainerStarted","Data":"9ea6c645be9e53fcf3d53f94ed4084999970b2edaa109f3c2638c7e834bf375d"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.458469 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-gskz6" event={"ID":"41253bde-5d09-4ff0-8e7c-4a21fe2b7106","Type":"ContainerStarted","Data":"a7182dd72430d58b49f5e018c12acac4da1770843a5e54cf2decb77fe298b875"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.458478 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d86cb9b59-smbxv" event={"ID":"240ba61a-e439-4f94-b9b3-7903b9b1bc05","Type":"ContainerStarted","Data":"03e9b975d965f8ec377b68d24d75b91897161659c0e305cafe8e9368b9999d09"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.458491 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d86cb9b59-smbxv" event={"ID":"240ba61a-e439-4f94-b9b3-7903b9b1bc05","Type":"ContainerStarted","Data":"a9a866857afbf6e04b88e6394f6ac26a86a5cc6b5f41292fe9d43cc355b22810"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.458500 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-j6hxl" event={"ID":"2a25b643-c08d-462f-80f4-8a4feb1e26e8","Type":"ContainerDied","Data":"9c4160ccfce4a1ed7d4a8b39bc1968845b7b8a2ab8792b3e93cfa7765e5fa689"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.458511 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-j6hxl" event={"ID":"2a25b643-c08d-462f-80f4-8a4feb1e26e8","Type":"ContainerDied","Data":"8e65cad05e20b0abbcb49f3fc98be5a4c3f6421a23b1da41d9039f8ff62b3093"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.458522 27820 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e65cad05e20b0abbcb49f3fc98be5a4c3f6421a23b1da41d9039f8ff62b3093" Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.458529 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-cg8qr" event={"ID":"57189f7c-5987-457d-a299-0a6b9bcb3e24","Type":"ContainerStarted","Data":"e48d6d8f20331461db8cc13ac230338d41f64ea61a17d13477ff37631d86ccc0"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.458539 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-cg8qr" event={"ID":"57189f7c-5987-457d-a299-0a6b9bcb3e24","Type":"ContainerStarted","Data":"389639e7370bc064e8396447b56eef169b57f40bc06761ec99b4a5fb5deb56a5"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.458557 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-t926t" event={"ID":"7ab32efc-7cc5-4e36-9c1c-05efb19914e2","Type":"ContainerStarted","Data":"0d8f00d5770f6ae7f6068bb266931b98fb82f37747584485f97ea270f43d2a15"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.458567 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-t926t" event={"ID":"7ab32efc-7cc5-4e36-9c1c-05efb19914e2","Type":"ContainerStarted","Data":"a48f9dbca67b195cdbe5106389856adfd54422aed83fc92bc09057a87eaa2faf"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.458576 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-pxqwj" event={"ID":"7949621e-4da6-4e43-a1f3-2ef303bf6aa6","Type":"ContainerStarted","Data":"87687aad12c871b51f38e96592a82bdee6ee41cb3015da390a35f50e9ae27334"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.458586 27820 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-pxqwj" event={"ID":"7949621e-4da6-4e43-a1f3-2ef303bf6aa6","Type":"ContainerStarted","Data":"f1cd5ceb84540f7c9e7a009d076e0390ec979230bb207211f3a50905c2ec9f83"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.458596 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" event={"ID":"75cef5aa-93e6-4b8b-9ab1-06809e85883a","Type":"ContainerDied","Data":"dc68fd475ff9f6055eceb076d1b60266600d047f4d29a9bd68c9771cc87efbc5"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.458607 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" event={"ID":"75cef5aa-93e6-4b8b-9ab1-06809e85883a","Type":"ContainerDied","Data":"1f303ba8c534fdd01d1d1d736d392f617339c8123f70b84cbefb43516aed9bd0"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.458614 27820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f303ba8c534fdd01d1d1d736d392f617339c8123f70b84cbefb43516aed9bd0" Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.458622 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-68bf6ff9d6-c7zf4" event={"ID":"6d62448d-55f1-4bdc-85aa-09e7bdf766cc","Type":"ContainerStarted","Data":"893d8918886f5436f953dcd40251d5f2e4dbf4607b1d7637a866a6322cb8f13d"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.458633 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-68bf6ff9d6-c7zf4" event={"ID":"6d62448d-55f1-4bdc-85aa-09e7bdf766cc","Type":"ContainerStarted","Data":"e752098827604ca63ef6b84cdd36804c65e5654f7ec3055912844eb8b6ef68db"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.458642 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-7x9vq" event={"ID":"fec3170d-3f3e-42f5-b20a-da53721c0dac","Type":"ContainerStarted","Data":"d27bdc77827c5790d21f30a5be51defed98d4177328db7f60819cdef6d3d4084"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.458652 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-7x9vq" event={"ID":"fec3170d-3f3e-42f5-b20a-da53721c0dac","Type":"ContainerDied","Data":"36c39e8f2f6bf69b1f66f4972d7671c5d3fca0023fda940a2b8538766e8e200d"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.458662 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-7x9vq" event={"ID":"fec3170d-3f3e-42f5-b20a-da53721c0dac","Type":"ContainerStarted","Data":"9540823dea8e0108833218a65d98423f8d996d846bbeaa47cddd4e7ba48fd916"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.458673 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.458812 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-848gc" event={"ID":"e9c0293a-5340-4ebe-bc8f-43e78ba9f280","Type":"ContainerStarted","Data":"173da4e7be06cca34fcb84231efee897dd1fd16593112fe5c00528ddc53e0f96"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.458827 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-848gc" event={"ID":"e9c0293a-5340-4ebe-bc8f-43e78ba9f280","Type":"ContainerDied","Data":"a7ec1ed13e0a355d823b781b053862cbbd8f7a00b211a40b600daee7dc545186"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.458842 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-848gc" event={"ID":"e9c0293a-5340-4ebe-bc8f-43e78ba9f280","Type":"ContainerStarted","Data":"5baf379ef595e5427aa5f7376ffa996583f39c05c81ca9fe28df973ed2c426be"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.458852 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-tf2gj" event={"ID":"08d9196b-b68f-421b-8754-bfbaa4020a97","Type":"ContainerStarted","Data":"fddcb721d2d443762155607f4a14cad4d4ea3bdb47b65fbf890c05cb02cccdd7"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.458863 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-tf2gj" event={"ID":"08d9196b-b68f-421b-8754-bfbaa4020a97","Type":"ContainerDied","Data":"ce2fcc1081bfcbeb7f4d07807c1a93a611637f696cdc2c93642a97a10714d449"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.458873 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-tf2gj" event={"ID":"08d9196b-b68f-421b-8754-bfbaa4020a97","Type":"ContainerStarted","Data":"bfeca22e5c430d4bc0fffa7a152cc4559e40218ee50bb5357e4fb7fc605dfba3"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.458882 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-tf2gj" event={"ID":"08d9196b-b68f-421b-8754-bfbaa4020a97","Type":"ContainerStarted","Data":"5c5ae9bfcc3ce85bdfe3cccc194f20c35db6cc7998e4967e566b59f8729c9691"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.458891 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-x7vrg" event={"ID":"22ff82cf-0d7d-4955-9b7c-97757acbc021","Type":"ContainerStarted","Data":"5283e40c5cc77cdb39d96a842e1d4a3b90fa78d7cd6f57c6b779fa0e23ddfd45"} Mar 20 
08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.458902 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-x7vrg" event={"ID":"22ff82cf-0d7d-4955-9b7c-97757acbc021","Type":"ContainerDied","Data":"66935bc88a172084ce89ee3474a8817878b895f87e27bbd9f994bbea54a28d58"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.458913 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-x7vrg" event={"ID":"22ff82cf-0d7d-4955-9b7c-97757acbc021","Type":"ContainerDied","Data":"1ad464d19cae2361db03cbce68a3a46d3a3a7e57495ff1c59b795128f430f3c3"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.458924 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-x7vrg" event={"ID":"22ff82cf-0d7d-4955-9b7c-97757acbc021","Type":"ContainerDied","Data":"633e246d0eb69524c4e825553d8b2a17d7166e97b618f96a41148d7625aa5ed0"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.458932 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-x7vrg" event={"ID":"22ff82cf-0d7d-4955-9b7c-97757acbc021","Type":"ContainerDied","Data":"49a024c7c79250dd61c634f6e633e0edd247a3c463686f54208b638a2fd19ebb"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.458941 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-x7vrg" event={"ID":"22ff82cf-0d7d-4955-9b7c-97757acbc021","Type":"ContainerDied","Data":"e286a3213c5346d10ff0d6cbc953c4d1baa37806e4134a08a01aa0b21b03e73b"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.458951 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-x7vrg" 
event={"ID":"22ff82cf-0d7d-4955-9b7c-97757acbc021","Type":"ContainerDied","Data":"40ff7a57f1be617cf7f13a7b182aa09a2d94c4736efa61da1185a107268ed08d"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.458959 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-x7vrg" event={"ID":"22ff82cf-0d7d-4955-9b7c-97757acbc021","Type":"ContainerStarted","Data":"c47fa190606cd38023fc533f65cb7825afa7c8fefd6bf8e60afbd6d31f3e48e7"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.458968 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"9775cc27-53b9-4d21-a98b-84b39ada32ee","Type":"ContainerDied","Data":"8b5711cce3fb17d8c5298b374ea763f137a6631ab7f8f0ff687f48b345639df0"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.458980 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"9775cc27-53b9-4d21-a98b-84b39ada32ee","Type":"ContainerDied","Data":"3b8a06244c2e0be584b6e088f930643d0f41b0d380a9aaaeb548ef7b6339ddb3"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.458990 27820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b8a06244c2e0be584b6e088f930643d0f41b0d380a9aaaeb548ef7b6339ddb3" Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.458998 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"8e7a82869988463543d3d8dd1f0b5fe3","Type":"ContainerStarted","Data":"fb6b347301f6182f233869606006eb4e9e5926eb68b32a1efb774d5c4156bcb0"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.459012 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" 
event={"ID":"8e7a82869988463543d3d8dd1f0b5fe3","Type":"ContainerStarted","Data":"26ead2d551cb5e798df939fca56e343afb667dfeb7405f006d41371d076ea7ef"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.459021 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" event={"ID":"e89571b2-098c-495b-9b53-c4ebd95296ab","Type":"ContainerStarted","Data":"d70605680e08d7f319125bde3eeb41c693b146e24b422d7776788ac3b348829c"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.459033 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" event={"ID":"e89571b2-098c-495b-9b53-c4ebd95296ab","Type":"ContainerDied","Data":"11cabf5eb98c82a93ca52ddd09d57840189a4d526c1e14f354cfdcf56ad250fe"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.459045 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" event={"ID":"e89571b2-098c-495b-9b53-c4ebd95296ab","Type":"ContainerStarted","Data":"4e3989004e344d411038c9d1f6a6052a86aa8920b399e1afd650c22f18779f11"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.459062 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7bbc969446-28l2x" event={"ID":"44bc88d8-9e01-4521-a704-85d9ca095baa","Type":"ContainerStarted","Data":"b54793d2ae72eaa686a000eb046e04a8533997e94a640f1f1144e3a41428dff5"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.459072 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7bbc969446-28l2x" event={"ID":"44bc88d8-9e01-4521-a704-85d9ca095baa","Type":"ContainerStarted","Data":"5b029982bb8223e2d49a4aec4d3e62ad49f8bc617c5cc9b42609f637cba43a3d"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.459081 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/kube-state-metrics-7bbc969446-28l2x" event={"ID":"44bc88d8-9e01-4521-a704-85d9ca095baa","Type":"ContainerStarted","Data":"9adde0990da1601e7c45d9ff5871aad1c483c142165792a1910a5516c06340cd"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.459091 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-7bbc969446-28l2x" event={"ID":"44bc88d8-9e01-4521-a704-85d9ca095baa","Type":"ContainerStarted","Data":"68a469f8af4eca3cd7046b1dcc688320cbfedeec29ee252f144fe6c1f8fce66a"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.459100 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-9xlf2" event={"ID":"b097596e-79e1-44d1-be8a-96340042a041","Type":"ContainerStarted","Data":"91146652d4d8a8a47620378773d0a419398c4e57461915eca0a376f8bd53b8e3"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.459110 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-9xlf2" event={"ID":"b097596e-79e1-44d1-be8a-96340042a041","Type":"ContainerStarted","Data":"7c5fe5a51a0646232d6aeb7457e06eaa7bb1c6097a67919150bb37fc9d450327"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.459120 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"3ea52b89-46f9-4685-aecd-162ba92baaf5","Type":"ContainerDied","Data":"5e7daf3466466f866a8a609c3357214ad22e67b72e11f87494389948c897e7d2"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.459132 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"3ea52b89-46f9-4685-aecd-162ba92baaf5","Type":"ContainerDied","Data":"97ecc9dbe142a6967704accd994983e2161bceb749ddbf66e1756c81c1a78964"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 
08:50:21.459139 27820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97ecc9dbe142a6967704accd994983e2161bceb749ddbf66e1756c81c1a78964" Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.459147 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"b45ea2ef1cf2bc9d1d994d6538ae0a64","Type":"ContainerStarted","Data":"4af0d14e2080acfab9b4be1c21f5c397bb2b57510a3ab1d14b3ae883125de902"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.459157 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"b45ea2ef1cf2bc9d1d994d6538ae0a64","Type":"ContainerStarted","Data":"4d2ab3e2e2aad56bc3b150570678f8cd6a92538b5995dd1769b45ca1ca4842ed"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.459165 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"b45ea2ef1cf2bc9d1d994d6538ae0a64","Type":"ContainerStarted","Data":"1465b5735787fe0ebf93f35e9b67fbaf114c0045019bf4f5aac3e0ae5fa40cfd"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.459174 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"b45ea2ef1cf2bc9d1d994d6538ae0a64","Type":"ContainerStarted","Data":"51f1e4dcfe0151d1f4fbba6c863fbca95761008bf8818dbf6e90c138860dd17c"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.459185 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"b45ea2ef1cf2bc9d1d994d6538ae0a64","Type":"ContainerStarted","Data":"9dadbe10b12b1a20a0d3ab6712abcd8f210f2db7035cefe893d40772ad30265f"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.459194 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"b45ea2ef1cf2bc9d1d994d6538ae0a64","Type":"ContainerDied","Data":"019791132b7e834bce74e8041e55c6a38294bf9b8b3e87df50daf0d5b306fb4e"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.459204 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"b45ea2ef1cf2bc9d1d994d6538ae0a64","Type":"ContainerStarted","Data":"407f7a172ca7923af3036a3a5081e3f6bc925e32d3851562fb93dfcb79785b17"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.459215 27820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c09f65ab934e524195086a580c2b9eb85f8f4d50711b33b3da17693c9ad9000" Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.459248 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-897zl" event={"ID":"14fe2256-ce3d-4c70-9e6b-0e3c7d4d96dc","Type":"ContainerStarted","Data":"ff9b4472f4baf6a0787735489d4002af06866f480fa96bcc3697cf3d30594373"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.459335 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-897zl" event={"ID":"14fe2256-ce3d-4c70-9e6b-0e3c7d4d96dc","Type":"ContainerDied","Data":"45ed4d229b5e4ffda3ad9ee3a6c6c79dd79e664c69394337cb1c4fc4b2036f31"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.459350 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-897zl" event={"ID":"14fe2256-ce3d-4c70-9e6b-0e3c7d4d96dc","Type":"ContainerStarted","Data":"27b219397d2ded697fb8c63422f6fe333badb02574d5af0d32c7a5d157330ed0"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.459359 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-897zl" event={"ID":"14fe2256-ce3d-4c70-9e6b-0e3c7d4d96dc","Type":"ContainerStarted","Data":"ccabd735cd283aaf872e4d4c6439fc21d25d047aca8d8580112cec5049c44ca7"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.459368 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-b5lg6" event={"ID":"acbaba45-12d9-40b9-818c-4b091d7929b1","Type":"ContainerStarted","Data":"bdf77adf82af986123c5cdbd1878d0f52d362bf1971f8f8cc55c1368284c4f5f"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.459387 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-b5lg6" event={"ID":"acbaba45-12d9-40b9-818c-4b091d7929b1","Type":"ContainerDied","Data":"6a9d899a8eb10974cc6c4342f48d72d6fc952b94defbc645eeeae9b0a3d84f6a"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.459397 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-b5lg6" event={"ID":"acbaba45-12d9-40b9-818c-4b091d7929b1","Type":"ContainerStarted","Data":"a5a71eafba7fd094c1b9785d7c1fd9e98b46812d646ac6843a8a763f472e8750"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.459405 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7d58488df-bzstx" event={"ID":"bca4cc7c-839d-4877-b0aa-c07607fea404","Type":"ContainerStarted","Data":"31ba0046a64870a1c833f3e20714b8bf32a17da8c12ef6cc43c140fd13d24a10"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.459415 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7d58488df-bzstx" 
event={"ID":"bca4cc7c-839d-4877-b0aa-c07607fea404","Type":"ContainerStarted","Data":"1d602414649c8268857260746c9b07c7eebb871e3592e5e80020d1637e9816cc"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.459426 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-chfj7" event={"ID":"b43744de-5bc3-4f1d-91a4-c54e2a3a7ffc","Type":"ContainerStarted","Data":"778dc6f6c0022cc9b874b3077c1b8afb784cc22d5931163c42ddb6f97b21e827"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.459443 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-chfj7" event={"ID":"b43744de-5bc3-4f1d-91a4-c54e2a3a7ffc","Type":"ContainerDied","Data":"0f65346a38596f758067a95721b4b8d598991f6450f547c5688592057337ba23"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.459453 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-chfj7" event={"ID":"b43744de-5bc3-4f1d-91a4-c54e2a3a7ffc","Type":"ContainerDied","Data":"31eea8f8908cce83a9e43c16d0440c72175117897d7cc72e9c66a228fb48965a"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.459464 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-chfj7" event={"ID":"b43744de-5bc3-4f1d-91a4-c54e2a3a7ffc","Type":"ContainerStarted","Data":"e9929238f90c11cab18d39ce438681158ac972414d58be4a31cbc595b70dfab3"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.459473 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-w8c24" event={"ID":"61ab4d32-c732-4be5-aa85-a2e1dd21cb60","Type":"ContainerStarted","Data":"3621d7f7293e781b2d6fe9b7f21003a8c6b2d5ad582ef3317c995b2d1b65c2ca"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.459482 27820 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-w8c24" event={"ID":"61ab4d32-c732-4be5-aa85-a2e1dd21cb60","Type":"ContainerDied","Data":"574f438252b4f47fa3b61032cc6a4a935112d82ebdef8b14155e36ebb82ca9af"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.459492 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-w8c24" event={"ID":"61ab4d32-c732-4be5-aa85-a2e1dd21cb60","Type":"ContainerStarted","Data":"23b5f0e312ee437adb179ea398b2301b1690487e9d814d24ef554192ded477e8"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.459501 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vk98n" event={"ID":"6163bd4b-dc83-4e83-8590-5ac4753bda1c","Type":"ContainerStarted","Data":"f424e7513ec0e0aa5fffe62eeb72b57d6bda17b11a356b4440d33b066290beeb"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.459511 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vk98n" event={"ID":"6163bd4b-dc83-4e83-8590-5ac4753bda1c","Type":"ContainerStarted","Data":"612932528a87c37595e1e39af79797bdd3b69b1320150874acd6b11a8312b742"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.459519 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vk98n" event={"ID":"6163bd4b-dc83-4e83-8590-5ac4753bda1c","Type":"ContainerStarted","Data":"f5f32b4aa675a44318fb714c2260768744660087826b3e03d8f23272cd36e48d"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.459529 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vk98n" event={"ID":"6163bd4b-dc83-4e83-8590-5ac4753bda1c","Type":"ContainerDied","Data":"8318704eaa08899e772deabe42128ea1b882f7234facbd87ca64f6d3f0952a1a"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.459538 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vk98n" event={"ID":"6163bd4b-dc83-4e83-8590-5ac4753bda1c","Type":"ContainerDied","Data":"52016baf23be09eb560f695ee764aa3c366d61ff1792a482aac5922ed083323d"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.459548 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vk98n" event={"ID":"6163bd4b-dc83-4e83-8590-5ac4753bda1c","Type":"ContainerStarted","Data":"6018dc62d387a9b77f99180b9b59d3182e437f628eb7fce91bb3764fe4982ba6"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.459556 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-b865698dc-zxwrd" event={"ID":"09a5682c-4f13-4b8c-8179-3e6dfa8f98db","Type":"ContainerStarted","Data":"1c8cedded5d5abf2eccbffe1fbbc3baf4454b4117f7fd84b851525034c732747"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.459567 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-b865698dc-zxwrd" event={"ID":"09a5682c-4f13-4b8c-8179-3e6dfa8f98db","Type":"ContainerDied","Data":"38ba09231d63afd93a0205a5845a80e4d47fa8290768d886cf1c7ea448f682d8"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.459578 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-b865698dc-zxwrd" 
event={"ID":"09a5682c-4f13-4b8c-8179-3e6dfa8f98db","Type":"ContainerStarted","Data":"7551d0384a0ca5d55a0e01a66e0811b519b2e2c926c179ce2206a11d57d556c3"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.459595 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"92600726-933f-41eb-a329-1fcc68dc95c1","Type":"ContainerDied","Data":"37909af3090055d773495c88ec18992da7d8fea5935c4a6afb5893aaa0a777f4"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.459606 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"92600726-933f-41eb-a329-1fcc68dc95c1","Type":"ContainerDied","Data":"63ca98d427169faf58092c01f84942dafda71aa92ee3b32d26fc1e746d40ac75"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.459645 27820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63ca98d427169faf58092c01f84942dafda71aa92ee3b32d26fc1e746d40ac75" Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.459656 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bd846bfc4-x4w25" event={"ID":"9817d1ec-3d7c-49fb-8e41-26f5727ef9e8","Type":"ContainerStarted","Data":"07d5eea0c0cfb0e4a4276e2ddf85f3db59e86b2664aa6c609113a0a0c2df000a"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.459666 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bd846bfc4-x4w25" event={"ID":"9817d1ec-3d7c-49fb-8e41-26f5727ef9e8","Type":"ContainerDied","Data":"51d5ff19316ba50d65a137d07edaf8d44d3c66d7ea87669b610c77e6e7a5026d"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.459628 27820 scope.go:117] "RemoveContainer" containerID="11cabf5eb98c82a93ca52ddd09d57840189a4d526c1e14f354cfdcf56ad250fe" Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: 
I0320 08:50:21.459884 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bd846bfc4-x4w25" event={"ID":"9817d1ec-3d7c-49fb-8e41-26f5727ef9e8","Type":"ContainerStarted","Data":"f50b5162b61414bd7ea44a7ec549d8b7fce7a639d564096b29f4d95c071c3604"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.459898 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"222b6476c5a428c92fd2ca0d4be351ebb99b0254111d7d351670afebd811ebce"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.459907 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"a2952a1f800d26a7eabecb79481362c19e0a6b58bcf79619acbcd04fc4857ada"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.459915 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"e5ad9bac3cbb02a4a670b1d82260116acc5d8d83eb8a7b2d3edcb355e30555a7"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.459922 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"833277a9c6e114a77d1b2ffcc162732dd849bab5619d63dd4b6f773ac9bb547e"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.459931 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"33ee802a48139e3bfd946165cfee9b10245c4e17272752d17f0aadad7163bcb8"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.459939 27820 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerDied","Data":"3102a41a904a505c496c9e6ff056d38d7935cf53ed7153f14bbd8b5057d5541a"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.459951 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerDied","Data":"d88e93757522ae39b5517291f3c06f1dd6bd6427800d2bd825b8a5c55305f18d"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.459959 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerDied","Data":"89681a264aa64084b2aa38ba642cb89ce6a4bb719fa716689bf3853f8249b887"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.459969 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"094204df314fe45bd5af12ca1b4622bb","Type":"ContainerStarted","Data":"3e9fa4fb66ba86c033a4b55b0ef6ca5cbcdcfa8e9fc2ffaaf2fd90f6913d2947"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.459979 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-xj8x6" event={"ID":"45b3c788-eb83-448a-bc60-90b8ace28382","Type":"ContainerStarted","Data":"4bc3fd775c8ec67b9c69f57ed6fb56798f5748c1c5c893e65ef38d518581c15e"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.459988 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-xj8x6" event={"ID":"45b3c788-eb83-448a-bc60-90b8ace28382","Type":"ContainerStarted","Data":"86c6cd594ea2c7db973b52489f7bf76530d2045045df7dd60fb29d21f2a61ca6"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.459997 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-6bd59" 
event={"ID":"a79bf8fb-19fb-4881-b9e3-b5b21fde0e1d","Type":"ContainerStarted","Data":"0cbc6ea3aa68035035f3da1cfce1750cdbc80b56e682b2ecd9f2dcdc8b0d9d3c"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.460012 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-6bd59" event={"ID":"a79bf8fb-19fb-4881-b9e3-b5b21fde0e1d","Type":"ContainerStarted","Data":"1c0c62dc18b9dfbe34d230533e11381c4068e1290418832f6c146c6c5c6872ee"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.460023 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-ntdqc" event={"ID":"2ef691ec-d1f0-4c97-97e4-4aa7a6c0a86c","Type":"ContainerStarted","Data":"aa25d7e3f5d62bcd63da255d522829c6196c34440f15366acc71e4890e98fd5c"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.460034 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-ntdqc" event={"ID":"2ef691ec-d1f0-4c97-97e4-4aa7a6c0a86c","Type":"ContainerDied","Data":"eb83d7b52ee34a208a7d7d8320582445204a3a3c9a564d3c4ad584270b43c58c"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.460045 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-ntdqc" event={"ID":"2ef691ec-d1f0-4c97-97e4-4aa7a6c0a86c","Type":"ContainerStarted","Data":"1522904bcce5d0ac5aef96a7d518d8795f633c0cc736ad3114aa64de5474f52b"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.460056 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-pkjcg" event={"ID":"b6610936-e14a-4532-955c-ea1ee4222259","Type":"ContainerStarted","Data":"b4ed4cc8ebcde50bb6c3f1d2a2733df8bc54de93fcabc1096f2cb5082755e2e7"} Mar 20 08:50:21.461854 master-0 
kubenswrapper[27820]: I0320 08:50:21.460070 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-pkjcg" event={"ID":"b6610936-e14a-4532-955c-ea1ee4222259","Type":"ContainerStarted","Data":"00f99a8d9909bbe9478bce6dc5de763850cd760e9c80771cbc6b2cedb9160c52"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.460079 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-pkjcg" event={"ID":"b6610936-e14a-4532-955c-ea1ee4222259","Type":"ContainerStarted","Data":"b36d4f6b43dcaa09ca3c55b7c20167210b34481854d09dfefb8adca147e001f9"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.460089 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-25jrp" event={"ID":"3065e4b4-4493-41ce-b9d2-89315475f74f","Type":"ContainerStarted","Data":"59ba7bd4aa39cee5c1de95c7109004a23d309869d6116da7e8f294aa326ed6b0"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.460101 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-25jrp" event={"ID":"3065e4b4-4493-41ce-b9d2-89315475f74f","Type":"ContainerDied","Data":"1b69d08e43e09461f0726c1193441ee601de85f0b5b8a1e604d076708c64775f"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.460112 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-25jrp" event={"ID":"3065e4b4-4493-41ce-b9d2-89315475f74f","Type":"ContainerDied","Data":"79d1a04f16780a30204d3fb5aa6261f513e7c954544e8ecbd91d389cc77dbe03"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.460122 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-25jrp" 
event={"ID":"3065e4b4-4493-41ce-b9d2-89315475f74f","Type":"ContainerStarted","Data":"f13b0447f1cf8ebd279a6530a199c8c8c26e292eacc831f21854583254577b3a"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.460132 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-79bc6b8d76-trbxh" event={"ID":"1746482a-d1a3-4eac-8bc9-643b6af75163","Type":"ContainerStarted","Data":"76181bda8b0461451532ad4e02386833ee9734fb65df65a856a237e3dc22fff5"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.460145 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-79bc6b8d76-trbxh" event={"ID":"1746482a-d1a3-4eac-8bc9-643b6af75163","Type":"ContainerDied","Data":"a1b4eceb0f2328786d0d5d45adc257b068090b4a532ca9b2a6eb0db19b8abba4"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.460156 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-79bc6b8d76-trbxh" event={"ID":"1746482a-d1a3-4eac-8bc9-643b6af75163","Type":"ContainerStarted","Data":"b9cc3cdb71ca86a1d6eb5065d5ba830d901adeb7f41acd8f39de6f44ff6001ce"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.460176 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hdw98" event={"ID":"9ce482dc-d0ac-40bc-9058-a1cfdc81575e","Type":"ContainerStarted","Data":"f73f25708579a25c6b06011558340a049bc18814ec77f148fd1c4ea077840f7e"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.460188 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hdw98" event={"ID":"9ce482dc-d0ac-40bc-9058-a1cfdc81575e","Type":"ContainerStarted","Data":"c6de2a3b0d9d7c8a3099b56864fbf63ad49e41112ad7b49c1d03c4402aae817a"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.460199 27820 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-tdpfq" event={"ID":"8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072","Type":"ContainerStarted","Data":"385c2843df6d2571f0639723dbc7f7f479a4f4fa1607104835b240f51f444467"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.460210 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-tdpfq" event={"ID":"8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072","Type":"ContainerDied","Data":"cbb3f75129c9d64cd795c59facd72277d5aa4e6c03360f86cd3b579cb2e915c3"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.460221 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-tdpfq" event={"ID":"8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072","Type":"ContainerStarted","Data":"0118b40880c157c21da0a1b6b65535a3a28545387d34a792b84d5f5f7d802bb1"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.460232 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-vzrlt" event={"ID":"4c9b8a0c-1ead-49b5-8b05-68e6f25ddefc","Type":"ContainerStarted","Data":"42b6664c06d1ffb8c94d13f40ec54767633930df25274e60e5a519f6d8259436"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.460242 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-vzrlt" event={"ID":"4c9b8a0c-1ead-49b5-8b05-68e6f25ddefc","Type":"ContainerStarted","Data":"b7d9c365d304102d31836e754ae3ccd0da492c6691ee23225b141aea9b82a5d5"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.460252 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-j9jjm" 
event={"ID":"ca6e644f-c53b-41dd-a16f-9fb9997533dd","Type":"ContainerStarted","Data":"5c9558f1b9a116ee4941ce1e0ca288d98a890cf0f944820cb48b49066ed51f6e"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.460276 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-j9jjm" event={"ID":"ca6e644f-c53b-41dd-a16f-9fb9997533dd","Type":"ContainerStarted","Data":"8278eeebf68b018edbef1798293f552dd9859c6fa057a3f48528a25426e7abf3"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.460286 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-rzg98" event={"ID":"123f1ecb-cc03-462b-b76f-7251bf69d3d6","Type":"ContainerStarted","Data":"041d536d7a31ab427a13e6da1dbb01874ab9eb6236af8cb0e9a5a4754e2a0ca5"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.460297 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-rzg98" event={"ID":"123f1ecb-cc03-462b-b76f-7251bf69d3d6","Type":"ContainerStarted","Data":"62e3d29fd4bdc39b630fb30c9d703d2124f8b51733ff477225f86e593bad914c"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.461741 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-rzg98" event={"ID":"123f1ecb-cc03-462b-b76f-7251bf69d3d6","Type":"ContainerDied","Data":"96058a0b48f5954e1e280e02b2139f100552b410ebee73d3b0fd6e4aa44bd764"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.461778 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-rzg98" event={"ID":"123f1ecb-cc03-462b-b76f-7251bf69d3d6","Type":"ContainerStarted","Data":"cc5631c5f457937102021d26dc57a94d8eb433d4f0008126fd2dc1af0f5f1218"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.461796 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-6vgt6" event={"ID":"5707066a-bd66-41bc-8cea-cff1630ab5ee","Type":"ContainerStarted","Data":"c35a92d30debfb7629245f7755d88359cda5ae68ac4c29098c6ed3194958cb7d"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.461811 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-6vgt6" event={"ID":"5707066a-bd66-41bc-8cea-cff1630ab5ee","Type":"ContainerStarted","Data":"33917a2945cfdd96fc5917acad69a7843047715ba145c81978cea2bef30f460e"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.461825 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-nfrth" event={"ID":"00350ac7-b40a-4459-b94c-a37d7b613645","Type":"ContainerStarted","Data":"f2c2ef80b2d5381aae9d20f86a2fb3626f8d02e8194d288ba0a38ca637403d39"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.461837 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-nfrth" event={"ID":"00350ac7-b40a-4459-b94c-a37d7b613645","Type":"ContainerStarted","Data":"b177047353db36f3ff10d6a164d468e06e55f0b60bd6bd6dbb4908d3c99f4892"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.461875 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-nfrth" event={"ID":"00350ac7-b40a-4459-b94c-a37d7b613645","Type":"ContainerStarted","Data":"aab851b1602b7dcc6e5620b34b9265b9ec9a6fe42b3748c9be972ac30f7ef4fd"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.461888 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dq29v" event={"ID":"9d653bfa-7168-49fa-a838-aedb33c7e60f","Type":"ContainerStarted","Data":"db34596c0384185b8be14345b1286cd07c682e48ceb08781c98125811bf47060"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 
08:50:21.461901 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dq29v" event={"ID":"9d653bfa-7168-49fa-a838-aedb33c7e60f","Type":"ContainerDied","Data":"4306eaa225527d3607228fe5a76b2f9df384e1155f171d8c00c7646ffafef9a4"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.461964 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dq29v" event={"ID":"9d653bfa-7168-49fa-a838-aedb33c7e60f","Type":"ContainerStarted","Data":"b286a4895dfb5ad25ff94bbecb22ea3a5b89ba604a59910e8726e22ec7afd75a"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.461979 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dq29v" event={"ID":"9d653bfa-7168-49fa-a838-aedb33c7e60f","Type":"ContainerStarted","Data":"31a46ba310ff197c87c66f84e5bd99a13a3ff1f8cbacfdf28d2bf427d9553306"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.461993 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-kh8bg" event={"ID":"14ef046f-b284-457f-ad7a-b7958cb82dd5","Type":"ContainerStarted","Data":"3db3dae8349b6f2fc1d58cbe0c7f2270fa08bb8391e64b4cb41d884ee532ec9d"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.462006 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-kh8bg" event={"ID":"14ef046f-b284-457f-ad7a-b7958cb82dd5","Type":"ContainerStarted","Data":"4a62432d7ca6978a89473ee0ca3560d8d6e151e4b44cc680fcbcde36344cda3f"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.462018 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" 
event={"ID":"169353ee-c927-4483-8976-b9ca08b0a6d1","Type":"ContainerDied","Data":"c35a5738f2f9a6fb340b75e09b70d5c9961a967d646e1417a2634fd74ebeb167"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.462036 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"169353ee-c927-4483-8976-b9ca08b0a6d1","Type":"ContainerDied","Data":"eb298360c7626b678f9c8cf233db291ec09731cb94cf6c1ae69432ca7d42b080"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.462052 27820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb298360c7626b678f9c8cf233db291ec09731cb94cf6c1ae69432ca7d42b080" Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.462064 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-xwkzx" event={"ID":"2faf85a2-29bb-4275-a12b-0ef1663a4f0d","Type":"ContainerStarted","Data":"dfdc4b94584bfe91fceba0b4003dbc4b0093c6ad0366472d7fafe8f570e3cfb9"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.462077 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-xwkzx" event={"ID":"2faf85a2-29bb-4275-a12b-0ef1663a4f0d","Type":"ContainerDied","Data":"9b538e53e002b24081578246c7d675b101b228304a8e87c5077457c1455c343d"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.462090 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-xwkzx" event={"ID":"2faf85a2-29bb-4275-a12b-0ef1663a4f0d","Type":"ContainerStarted","Data":"b3076d6176cd94c8a21c722732d97de0437f9e83160ea4c57d3d59e61e4a74e3"} Mar 20 08:50:21.461854 master-0 kubenswrapper[27820]: I0320 08:50:21.462102 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-lr7tb" 
event={"ID":"80ddf0a4-e853-4de0-b540-81144dfdd31d","Type":"ContainerStarted","Data":"043be14a032d81fccfcece7242ed5d72370383c47bc3fa313fb28191f79246e0"} Mar 20 08:50:21.467136 master-0 kubenswrapper[27820]: I0320 08:50:21.462116 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-lr7tb" event={"ID":"80ddf0a4-e853-4de0-b540-81144dfdd31d","Type":"ContainerDied","Data":"462a8070d6a4a84bd5f75252bfbebd1aeca669c870c803cc819af47a7fc47625"} Mar 20 08:50:21.467136 master-0 kubenswrapper[27820]: I0320 08:50:21.462134 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-lr7tb" event={"ID":"80ddf0a4-e853-4de0-b540-81144dfdd31d","Type":"ContainerStarted","Data":"aaee608b72e4a4a07804432864d818368bf78848923bc47049bb495de57ed536"} Mar 20 08:50:21.467136 master-0 kubenswrapper[27820]: I0320 08:50:21.462146 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-lr7tb" event={"ID":"80ddf0a4-e853-4de0-b540-81144dfdd31d","Type":"ContainerStarted","Data":"32d9278f90869a47d37ec354771e3c987fb65e24d65a9e7aa9b31e8b1fade86f"} Mar 20 08:50:21.467136 master-0 kubenswrapper[27820]: I0320 08:50:21.462158 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-tkwh6" event={"ID":"a86af6a2-55a9-4c4e-8caf-1f51fedb23f5","Type":"ContainerStarted","Data":"72528580eee79b6c6db5f632d94bdc5cda1b2d88e86bcc96303c96a1539c51c9"} Mar 20 08:50:21.467136 master-0 kubenswrapper[27820]: I0320 08:50:21.462170 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-tkwh6" event={"ID":"a86af6a2-55a9-4c4e-8caf-1f51fedb23f5","Type":"ContainerDied","Data":"3033684921b500c0cdc5a887bfabd7fc5e3c9f8cea2dfed120b0981d20756634"} Mar 20 08:50:21.467136 master-0 kubenswrapper[27820]: I0320 08:50:21.462182 
27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-tkwh6" event={"ID":"a86af6a2-55a9-4c4e-8caf-1f51fedb23f5","Type":"ContainerStarted","Data":"66f60747a10071044a32fdd3eb286bdb47b644ac36047fe8a2be062c88967367"} Mar 20 08:50:21.467136 master-0 kubenswrapper[27820]: I0320 08:50:21.462228 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-gwtrl" event={"ID":"e9425526-9f51-4302-a19d-a8107f56c582","Type":"ContainerStarted","Data":"b03cf792be8c09113845ece36250dd906916c30b14e37c0df43505b61e6139fa"} Mar 20 08:50:21.467136 master-0 kubenswrapper[27820]: I0320 08:50:21.462244 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-gwtrl" event={"ID":"e9425526-9f51-4302-a19d-a8107f56c582","Type":"ContainerDied","Data":"ede2ef38ba8d0fe732989d57db50e82ff2ef33b1e7f1869b8d140d9c93969650"} Mar 20 08:50:21.467136 master-0 kubenswrapper[27820]: I0320 08:50:21.462339 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-gwtrl" event={"ID":"e9425526-9f51-4302-a19d-a8107f56c582","Type":"ContainerDied","Data":"7eace203ad1dd45a1a683d8c3e7772a2d39b397eee68cf9a1c7862a15d7b007d"} Mar 20 08:50:21.467136 master-0 kubenswrapper[27820]: I0320 08:50:21.462356 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-gwtrl" event={"ID":"e9425526-9f51-4302-a19d-a8107f56c582","Type":"ContainerDied","Data":"903bd12c687f6625987bd7d1e46b200fb44d1a9e193c70ad2441cab58febeed2"} Mar 20 08:50:21.467136 master-0 kubenswrapper[27820]: I0320 08:50:21.462369 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-gwtrl" 
event={"ID":"e9425526-9f51-4302-a19d-a8107f56c582","Type":"ContainerStarted","Data":"36fd86042cdc5d322b686c2b108009cab15460fe5a8fde9f08be705f3ff47a25"} Mar 20 08:50:21.467136 master-0 kubenswrapper[27820]: I0320 08:50:21.462383 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerStarted","Data":"7316fd0f0f8a186ef4fb758bcbe38162f541b908e7728b02280dc9e29c6d0538"} Mar 20 08:50:21.467136 master-0 kubenswrapper[27820]: I0320 08:50:21.462397 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerDied","Data":"309e4777b97bbe0d7fb41e63077d3bc7d068d36eee7b9e7931a0c0261bdb0bbf"} Mar 20 08:50:21.467136 master-0 kubenswrapper[27820]: I0320 08:50:21.462411 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerDied","Data":"f46237550fb6588ccbb218d4b52be58120b3dd1d98e107a7ca8477306baad5dd"} Mar 20 08:50:21.467136 master-0 kubenswrapper[27820]: I0320 08:50:21.462438 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"1249822f86f23526277d165c0d5d3c19","Type":"ContainerStarted","Data":"a7302efa9940a01d2a7da47b5029499cf2581b6adb8641d99af963b893a64957"} Mar 20 08:50:21.467136 master-0 kubenswrapper[27820]: I0320 08:50:21.462452 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"b45ea2ef1cf2bc9d1d994d6538ae0a64","Type":"ContainerDied","Data":"4af0d14e2080acfab9b4be1c21f5c397bb2b57510a3ab1d14b3ae883125de902"} Mar 20 08:50:21.467136 master-0 kubenswrapper[27820]: I0320 08:50:21.462468 27820 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-vhrdf" event={"ID":"74bebf0b-6727-4959-8239-a9389e630524","Type":"ContainerDied","Data":"46a769eaa885d6f2aee7986a052f5cb914f5503a0051214e8b4e113fe0f1651a"} Mar 20 08:50:21.467136 master-0 kubenswrapper[27820]: I0320 08:50:21.462484 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-xj8x6" event={"ID":"45b3c788-eb83-448a-bc60-90b8ace28382","Type":"ContainerDied","Data":"4bc3fd775c8ec67b9c69f57ed6fb56798f5748c1c5c893e65ef38d518581c15e"} Mar 20 08:50:21.467136 master-0 kubenswrapper[27820]: I0320 08:50:21.462499 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" event={"ID":"e89571b2-098c-495b-9b53-c4ebd95296ab","Type":"ContainerDied","Data":"d70605680e08d7f319125bde3eeb41c693b146e24b422d7776788ac3b348829c"} Mar 20 08:50:21.467136 master-0 kubenswrapper[27820]: I0320 08:50:21.464114 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 20 08:50:21.467136 master-0 kubenswrapper[27820]: I0320 08:50:21.465218 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-flatfile-config" Mar 20 08:50:21.484934 master-0 kubenswrapper[27820]: I0320 08:50:21.484891 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 20 08:50:21.499662 master-0 kubenswrapper[27820]: I0320 08:50:21.499602 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a6a187d-5b25-4d63-939e-c04e07369371-serving-cert\") pod \"apiserver-5595498c49-hrfrr\" (UID: \"6a6a187d-5b25-4d63-939e-c04e07369371\") " pod="openshift-oauth-apiserver/apiserver-5595498c49-hrfrr" Mar 20 08:50:21.499783 master-0 kubenswrapper[27820]: I0320 08:50:21.499663 27820 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9d653bfa-7168-49fa-a838-aedb33c7e60f-env-overrides\") pod \"network-node-identity-dq29v\" (UID: \"9d653bfa-7168-49fa-a838-aedb33c7e60f\") " pod="openshift-network-node-identity/network-node-identity-dq29v" Mar 20 08:50:21.499783 master-0 kubenswrapper[27820]: I0320 08:50:21.499696 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rf9kc\" (UniqueName: \"kubernetes.io/projected/f6a6e991-c861-48f5-bfde-78762a037343-kube-api-access-rf9kc\") pod \"machine-config-controller-b4f87c5b9-9fl6v\" (UID: \"f6a6e991-c861-48f5-bfde-78762a037343\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-9fl6v" Mar 20 08:50:21.499974 master-0 kubenswrapper[27820]: I0320 08:50:21.499911 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8plf\" (UniqueName: \"kubernetes.io/projected/b6610936-e14a-4532-955c-ea1ee4222259-kube-api-access-v8plf\") pod \"machine-config-operator-84d549f6d5-pkjcg\" (UID: \"b6610936-e14a-4532-955c-ea1ee4222259\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-pkjcg" Mar 20 08:50:21.500031 master-0 kubenswrapper[27820]: I0320 08:50:21.499992 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f202273a-b111-46ce-b404-7e481d2c7ff9-cert\") pod \"cluster-baremetal-operator-6f69995874-b25f2\" (UID: \"f202273a-b111-46ce-b404-7e481d2c7ff9\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-b25f2" Mar 20 08:50:21.500066 master-0 kubenswrapper[27820]: I0320 08:50:21.500031 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/36f4a012744c6465102d09cc67ac63e6-cert-dir\") pod 
\"kube-controller-manager-master-0\" (UID: \"36f4a012744c6465102d09cc67ac63e6\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 20 08:50:21.500066 master-0 kubenswrapper[27820]: I0320 08:50:21.500058 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4sfm\" (UniqueName: \"kubernetes.io/projected/210dd7f0-d1c0-407a-b89b-f11ef605e5df-kube-api-access-w4sfm\") pod \"ovnkube-control-plane-57f769d897-crrdk\" (UID: \"210dd7f0-d1c0-407a-b89b-f11ef605e5df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-crrdk" Mar 20 08:50:21.500138 master-0 kubenswrapper[27820]: I0320 08:50:21.500089 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2faf85a2-29bb-4275-a12b-0ef1663a4f0d-config\") pod \"kube-apiserver-operator-8b68b9d9b-xwkzx\" (UID: \"2faf85a2-29bb-4275-a12b-0ef1663a4f0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-xwkzx" Mar 20 08:50:21.500138 master-0 kubenswrapper[27820]: I0320 08:50:21.500118 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4w7k\" (UniqueName: \"kubernetes.io/projected/bb7b640f-22be-41a9-8ab2-e7ae817e2eb0-kube-api-access-l4w7k\") pod \"operator-controller-controller-manager-57777556ff-rnnfz\" (UID: \"bb7b640f-22be-41a9-8ab2-e7ae817e2eb0\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-rnnfz" Mar 20 08:50:21.500213 master-0 kubenswrapper[27820]: I0320 08:50:21.500161 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ca56e37d-80ea-432b-a6d9-f4e904a40e10-audit\") pod \"apiserver-64b65cddf5-gx7h7\" (UID: \"ca56e37d-80ea-432b-a6d9-f4e904a40e10\") " pod="openshift-apiserver/apiserver-64b65cddf5-gx7h7" Mar 20 08:50:21.500213 master-0 kubenswrapper[27820]: I0320 08:50:21.500188 
27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/1746482a-d1a3-4eac-8bc9-643b6af75163-signing-key\") pod \"service-ca-79bc6b8d76-trbxh\" (UID: \"1746482a-d1a3-4eac-8bc9-643b6af75163\") " pod="openshift-service-ca/service-ca-79bc6b8d76-trbxh" Mar 20 08:50:21.500213 master-0 kubenswrapper[27820]: I0320 08:50:21.500210 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f6c819a-5074-4d29-84c8-e187528ad757-catalog-content\") pod \"certified-operators-clrp2\" (UID: \"4f6c819a-5074-4d29-84c8-e187528ad757\") " pod="openshift-marketplace/certified-operators-clrp2" Mar 20 08:50:21.500398 master-0 kubenswrapper[27820]: I0320 08:50:21.500374 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f6c819a-5074-4d29-84c8-e187528ad757-catalog-content\") pod \"certified-operators-clrp2\" (UID: \"4f6c819a-5074-4d29-84c8-e187528ad757\") " pod="openshift-marketplace/certified-operators-clrp2" Mar 20 08:50:21.500443 master-0 kubenswrapper[27820]: I0320 08:50:21.500368 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/56970553-2ac8-4cb5-a12a-b7c1e777c587-samples-operator-tls\") pod \"cluster-samples-operator-85f7577d78-26fw9\" (UID: \"56970553-2ac8-4cb5-a12a-b7c1e777c587\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-26fw9" Mar 20 08:50:21.500486 master-0 kubenswrapper[27820]: I0320 08:50:21.500455 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/44bc88d8-9e01-4521-a704-85d9ca095baa-volume-directive-shadow\") pod \"kube-state-metrics-7bbc969446-28l2x\" (UID: \"44bc88d8-9e01-4521-a704-85d9ca095baa\") " 
pod="openshift-monitoring/kube-state-metrics-7bbc969446-28l2x" Mar 20 08:50:21.500486 master-0 kubenswrapper[27820]: I0320 08:50:21.500481 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6a6a187d-5b25-4d63-939e-c04e07369371-etcd-client\") pod \"apiserver-5595498c49-hrfrr\" (UID: \"6a6a187d-5b25-4d63-939e-c04e07369371\") " pod="openshift-oauth-apiserver/apiserver-5595498c49-hrfrr" Mar 20 08:50:21.500556 master-0 kubenswrapper[27820]: I0320 08:50:21.500501 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/123f1ecb-cc03-462b-b76f-7251bf69d3d6-metrics-client-ca\") pod \"node-exporter-rzg98\" (UID: \"123f1ecb-cc03-462b-b76f-7251bf69d3d6\") " pod="openshift-monitoring/node-exporter-rzg98" Mar 20 08:50:21.500556 master-0 kubenswrapper[27820]: I0320 08:50:21.500523 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/22ff82cf-0d7d-4955-9b7c-97757acbc021-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-x7vrg\" (UID: \"22ff82cf-0d7d-4955-9b7c-97757acbc021\") " pod="openshift-multus/multus-additional-cni-plugins-x7vrg" Mar 20 08:50:21.500556 master-0 kubenswrapper[27820]: I0320 08:50:21.500548 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71ca96e8-5108-455c-bb3c-17977d38e912-kube-api-access\") pod \"kube-controller-manager-operator-ff989d6cc-wbfrm\" (UID: \"71ca96e8-5108-455c-bb3c-17977d38e912\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-wbfrm" Mar 20 08:50:21.500708 master-0 kubenswrapper[27820]: I0320 08:50:21.500573 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/9635cdae-0983-4c97-b3ed-dc7a785b1bb6-catalog-content\") pod \"redhat-operators-bt7wn\" (UID: \"9635cdae-0983-4c97-b3ed-dc7a785b1bb6\") " pod="openshift-marketplace/redhat-operators-bt7wn" Mar 20 08:50:21.500708 master-0 kubenswrapper[27820]: I0320 08:50:21.500600 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-system-cni-dir\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj" Mar 20 08:50:21.500708 master-0 kubenswrapper[27820]: I0320 08:50:21.500688 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-usr-local-bin\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 20 08:50:21.500802 master-0 kubenswrapper[27820]: I0320 08:50:21.500778 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2faf85a2-29bb-4275-a12b-0ef1663a4f0d-config\") pod \"kube-apiserver-operator-8b68b9d9b-xwkzx\" (UID: \"2faf85a2-29bb-4275-a12b-0ef1663a4f0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-xwkzx" Mar 20 08:50:21.500834 master-0 kubenswrapper[27820]: I0320 08:50:21.500811 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9635cdae-0983-4c97-b3ed-dc7a785b1bb6-catalog-content\") pod \"redhat-operators-bt7wn\" (UID: \"9635cdae-0983-4c97-b3ed-dc7a785b1bb6\") " pod="openshift-marketplace/redhat-operators-bt7wn" Mar 20 08:50:21.500866 master-0 kubenswrapper[27820]: I0320 08:50:21.500836 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" 
(UniqueName: \"kubernetes.io/empty-dir/44bc88d8-9e01-4521-a704-85d9ca095baa-volume-directive-shadow\") pod \"kube-state-metrics-7bbc969446-28l2x\" (UID: \"44bc88d8-9e01-4521-a704-85d9ca095baa\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-28l2x" Mar 20 08:50:21.500988 master-0 kubenswrapper[27820]: I0320 08:50:21.500936 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61ab4d32-c732-4be5-aa85-a2e1dd21cb60-serving-cert\") pod \"openshift-controller-manager-operator-8c94f4649-w8c24\" (UID: \"61ab4d32-c732-4be5-aa85-a2e1dd21cb60\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-w8c24" Mar 20 08:50:21.501155 master-0 kubenswrapper[27820]: I0320 08:50:21.501125 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v86j8\" (UniqueName: \"kubernetes.io/projected/6d26f719-43b9-4c1c-9a54-ff800177db68-kube-api-access-v86j8\") pod \"cluster-node-tuning-operator-598fbc5f8f-zxgdk\" (UID: \"6d26f719-43b9-4c1c-9a54-ff800177db68\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zxgdk" Mar 20 08:50:21.501209 master-0 kubenswrapper[27820]: I0320 08:50:21.501170 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/22ff82cf-0d7d-4955-9b7c-97757acbc021-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-x7vrg\" (UID: \"22ff82cf-0d7d-4955-9b7c-97757acbc021\") " pod="openshift-multus/multus-additional-cni-plugins-x7vrg" Mar 20 08:50:21.501209 master-0 kubenswrapper[27820]: I0320 08:50:21.501202 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fec3170d-3f3e-42f5-b20a-da53721c0dac-etcd-client\") pod \"etcd-operator-8544cbcf9c-7x9vq\" (UID: 
\"fec3170d-3f3e-42f5-b20a-da53721c0dac\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-7x9vq" Mar 20 08:50:21.501325 master-0 kubenswrapper[27820]: I0320 08:50:21.501223 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/57189f7c-5987-457d-a299-0a6b9bcb3e24-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-cg8qr\" (UID: \"57189f7c-5987-457d-a299-0a6b9bcb3e24\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-cg8qr" Mar 20 08:50:21.501325 master-0 kubenswrapper[27820]: I0320 08:50:21.501242 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7w8xs\" (UniqueName: \"kubernetes.io/projected/64d09f81-5fb6-462a-a736-5649779a6b1a-kube-api-access-7w8xs\") pod \"redhat-marketplace-hj5tl\" (UID: \"64d09f81-5fb6-462a-a736-5649779a6b1a\") " pod="openshift-marketplace/redhat-marketplace-hj5tl" Mar 20 08:50:21.501325 master-0 kubenswrapper[27820]: I0320 08:50:21.501274 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6163bd4b-dc83-4e83-8590-5ac4753bda1c-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7dff898856-vk98n\" (UID: \"6163bd4b-dc83-4e83-8590-5ac4753bda1c\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vk98n" Mar 20 08:50:21.501325 master-0 kubenswrapper[27820]: I0320 08:50:21.501291 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/44bc88d8-9e01-4521-a704-85d9ca095baa-kube-state-metrics-tls\") pod \"kube-state-metrics-7bbc969446-28l2x\" (UID: \"44bc88d8-9e01-4521-a704-85d9ca095baa\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-28l2x" Mar 20 08:50:21.501325 master-0 
kubenswrapper[27820]: I0320 08:50:21.501312 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btwhr\" (UniqueName: \"kubernetes.io/projected/a79bf8fb-19fb-4881-b9e3-b5b21fde0e1d-kube-api-access-btwhr\") pod \"machine-config-server-6bd59\" (UID: \"a79bf8fb-19fb-4881-b9e3-b5b21fde0e1d\") " pod="openshift-machine-config-operator/machine-config-server-6bd59" Mar 20 08:50:21.501325 master-0 kubenswrapper[27820]: I0320 08:50:21.501330 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6d26f719-43b9-4c1c-9a54-ff800177db68-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-zxgdk\" (UID: \"6d26f719-43b9-4c1c-9a54-ff800177db68\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zxgdk" Mar 20 08:50:21.501511 master-0 kubenswrapper[27820]: I0320 08:50:21.501352 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8413125cf444e5c95f023c5dd9c6151e-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8413125cf444e5c95f023c5dd9c6151e\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 20 08:50:21.501511 master-0 kubenswrapper[27820]: I0320 08:50:21.501371 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b43744de-5bc3-4f1d-91a4-c54e2a3a7ffc-catalog-content\") pod \"community-operators-chfj7\" (UID: \"b43744de-5bc3-4f1d-91a4-c54e2a3a7ffc\") " pod="openshift-marketplace/community-operators-chfj7" Mar 20 08:50:21.501511 master-0 kubenswrapper[27820]: I0320 08:50:21.501370 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f202273a-b111-46ce-b404-7e481d2c7ff9-cert\") pod 
\"cluster-baremetal-operator-6f69995874-b25f2\" (UID: \"f202273a-b111-46ce-b404-7e481d2c7ff9\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-b25f2" Mar 20 08:50:21.501511 master-0 kubenswrapper[27820]: I0320 08:50:21.501397 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61ab4d32-c732-4be5-aa85-a2e1dd21cb60-serving-cert\") pod \"openshift-controller-manager-operator-8c94f4649-w8c24\" (UID: \"61ab4d32-c732-4be5-aa85-a2e1dd21cb60\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-w8c24" Mar 20 08:50:21.501643 master-0 kubenswrapper[27820]: I0320 08:50:21.501536 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/57189f7c-5987-457d-a299-0a6b9bcb3e24-image-registry-operator-tls\") pod \"cluster-image-registry-operator-5549dc66cb-cg8qr\" (UID: \"57189f7c-5987-457d-a299-0a6b9bcb3e24\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-cg8qr" Mar 20 08:50:21.501643 master-0 kubenswrapper[27820]: I0320 08:50:21.501572 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b43744de-5bc3-4f1d-91a4-c54e2a3a7ffc-catalog-content\") pod \"community-operators-chfj7\" (UID: \"b43744de-5bc3-4f1d-91a4-c54e2a3a7ffc\") " pod="openshift-marketplace/community-operators-chfj7" Mar 20 08:50:21.501742 master-0 kubenswrapper[27820]: I0320 08:50:21.501714 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/fec3170d-3f3e-42f5-b20a-da53721c0dac-etcd-client\") pod \"etcd-operator-8544cbcf9c-7x9vq\" (UID: \"fec3170d-3f3e-42f5-b20a-da53721c0dac\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-7x9vq" Mar 20 08:50:21.501846 master-0 kubenswrapper[27820]: I0320 08:50:21.501737 27820 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:50:21.501886 master-0 kubenswrapper[27820]: I0320 08:50:21.501865 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bxn6\" (UniqueName: \"kubernetes.io/projected/890a6c24-1dbb-4331-952b-5712ac00788e-kube-api-access-7bxn6\") pod \"migrator-8487694857-ltk2p\" (UID: \"890a6c24-1dbb-4331-952b-5712ac00788e\") " pod="openshift-kube-storage-version-migrator/migrator-8487694857-ltk2p" Mar 20 08:50:21.501919 master-0 kubenswrapper[27820]: I0320 08:50:21.501894 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvqv5\" (UniqueName: \"kubernetes.io/projected/08d9196b-b68f-421b-8754-bfbaa4020a97-kube-api-access-tvqv5\") pod \"catalogd-controller-manager-6864dc98f7-tf2gj\" (UID: \"08d9196b-b68f-421b-8754-bfbaa4020a97\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-tf2gj" Mar 20 08:50:21.502063 master-0 kubenswrapper[27820]: I0320 08:50:21.502025 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/bb7b640f-22be-41a9-8ab2-e7ae817e2eb0-cache\") pod \"operator-controller-controller-manager-57777556ff-rnnfz\" (UID: \"bb7b640f-22be-41a9-8ab2-e7ae817e2eb0\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-rnnfz" Mar 20 08:50:21.502102 master-0 kubenswrapper[27820]: I0320 08:50:21.502047 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/6d26f719-43b9-4c1c-9a54-ff800177db68-apiservice-cert\") pod \"cluster-node-tuning-operator-598fbc5f8f-zxgdk\" (UID: \"6d26f719-43b9-4c1c-9a54-ff800177db68\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zxgdk" Mar 20 08:50:21.502102 master-0 kubenswrapper[27820]: I0320 08:50:21.502085 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca56e37d-80ea-432b-a6d9-f4e904a40e10-config\") pod \"apiserver-64b65cddf5-gx7h7\" (UID: \"ca56e37d-80ea-432b-a6d9-f4e904a40e10\") " pod="openshift-apiserver/apiserver-64b65cddf5-gx7h7" Mar 20 08:50:21.502163 master-0 kubenswrapper[27820]: I0320 08:50:21.502131 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/123f1ecb-cc03-462b-b76f-7251bf69d3d6-sys\") pod \"node-exporter-rzg98\" (UID: \"123f1ecb-cc03-462b-b76f-7251bf69d3d6\") " pod="openshift-monitoring/node-exporter-rzg98" Mar 20 08:50:21.502195 master-0 kubenswrapper[27820]: I0320 08:50:21.502151 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-xj8x6_45b3c788-eb83-448a-bc60-90b8ace28382/kube-multus-additional-cni-plugins/0.log" Mar 20 08:50:21.502195 master-0 kubenswrapper[27820]: I0320 08:50:21.502169 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/e89571b2-098c-495b-9b53-c4ebd95296ab-stats-auth\") pod \"router-default-7dcf5569b5-kvmtp\" (UID: \"e89571b2-098c-495b-9b53-c4ebd95296ab\") " pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" Mar 20 08:50:21.502252 master-0 kubenswrapper[27820]: I0320 08:50:21.502197 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/bb7b640f-22be-41a9-8ab2-e7ae817e2eb0-cache\") pod 
\"operator-controller-controller-manager-57777556ff-rnnfz\" (UID: \"bb7b640f-22be-41a9-8ab2-e7ae817e2eb0\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-rnnfz" Mar 20 08:50:21.502252 master-0 kubenswrapper[27820]: I0320 08:50:21.502206 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-ovn-node-metrics-cert\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:50:21.502252 master-0 kubenswrapper[27820]: I0320 08:50:21.502214 27820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-xj8x6" Mar 20 08:50:21.502362 master-0 kubenswrapper[27820]: I0320 08:50:21.502302 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/23003a2f-2053-47cc-8133-23eb886d4da0-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-j84r8\" (UID: \"23003a2f-2053-47cc-8133-23eb886d4da0\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-j84r8" Mar 20 08:50:21.502362 master-0 kubenswrapper[27820]: I0320 08:50:21.502348 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/44bc88d8-9e01-4521-a704-85d9ca095baa-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7bbc969446-28l2x\" (UID: \"44bc88d8-9e01-4521-a704-85d9ca095baa\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-28l2x" Mar 20 08:50:21.502503 master-0 kubenswrapper[27820]: I0320 08:50:21.502384 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/74bebf0b-6727-4959-8239-a9389e630524-webhook-certs\") pod \"multus-admission-controller-5dbbb8b86f-vhrdf\" (UID: \"74bebf0b-6727-4959-8239-a9389e630524\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-vhrdf" Mar 20 08:50:21.502503 master-0 kubenswrapper[27820]: I0320 08:50:21.502408 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65157a9b-3df7-4cc1-a85a-a5dfa59921ad-serving-cert\") pod \"openshift-kube-scheduler-operator-dddff6458-vmwqt\" (UID: \"65157a9b-3df7-4cc1-a85a-a5dfa59921ad\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-vmwqt" Mar 20 08:50:21.502503 master-0 kubenswrapper[27820]: I0320 08:50:21.502429 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/0ad95adc-2e0f-4e95-94e7-66e6d240a930-prometheus-operator-tls\") pod \"prometheus-operator-6c8df6d4b-2lwqr\" (UID: \"0ad95adc-2e0f-4e95-94e7-66e6d240a930\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-2lwqr" Mar 20 08:50:21.502503 master-0 kubenswrapper[27820]: I0320 08:50:21.502454 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5r8zt\" (UniqueName: \"kubernetes.io/projected/57189f7c-5987-457d-a299-0a6b9bcb3e24-kube-api-access-5r8zt\") pod \"cluster-image-registry-operator-5549dc66cb-cg8qr\" (UID: \"57189f7c-5987-457d-a299-0a6b9bcb3e24\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-cg8qr" Mar 20 08:50:21.502503 master-0 kubenswrapper[27820]: I0320 08:50:21.502475 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-br4bc\" (UniqueName: \"kubernetes.io/projected/6a6a187d-5b25-4d63-939e-c04e07369371-kube-api-access-br4bc\") pod \"apiserver-5595498c49-hrfrr\" (UID: \"6a6a187d-5b25-4d63-939e-c04e07369371\") " 
pod="openshift-oauth-apiserver/apiserver-5595498c49-hrfrr" Mar 20 08:50:21.502503 master-0 kubenswrapper[27820]: I0320 08:50:21.502495 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/08d9196b-b68f-421b-8754-bfbaa4020a97-cache\") pod \"catalogd-controller-manager-6864dc98f7-tf2gj\" (UID: \"08d9196b-b68f-421b-8754-bfbaa4020a97\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-tf2gj" Mar 20 08:50:21.502673 master-0 kubenswrapper[27820]: I0320 08:50:21.502518 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71ca96e8-5108-455c-bb3c-17977d38e912-serving-cert\") pod \"kube-controller-manager-operator-ff989d6cc-wbfrm\" (UID: \"71ca96e8-5108-455c-bb3c-17977d38e912\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-wbfrm" Mar 20 08:50:21.502673 master-0 kubenswrapper[27820]: I0320 08:50:21.502540 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ef691ec-d1f0-4c97-97e4-4aa7a6c0a86c-config\") pod \"openshift-apiserver-operator-d65958b8-ntdqc\" (UID: \"2ef691ec-d1f0-4c97-97e4-4aa7a6c0a86c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-ntdqc" Mar 20 08:50:21.502733 master-0 kubenswrapper[27820]: I0320 08:50:21.502708 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/23003a2f-2053-47cc-8133-23eb886d4da0-marketplace-operator-metrics\") pod \"marketplace-operator-89ccd998f-j84r8\" (UID: \"23003a2f-2053-47cc-8133-23eb886d4da0\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-j84r8" Mar 20 08:50:21.503132 master-0 kubenswrapper[27820]: I0320 08:50:21.502561 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-rgl8m\" (UniqueName: \"kubernetes.io/projected/2ef691ec-d1f0-4c97-97e4-4aa7a6c0a86c-kube-api-access-rgl8m\") pod \"openshift-apiserver-operator-d65958b8-ntdqc\" (UID: \"2ef691ec-d1f0-4c97-97e4-4aa7a6c0a86c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-ntdqc" Mar 20 08:50:21.503184 master-0 kubenswrapper[27820]: I0320 08:50:21.503142 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-multus-cni-dir\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj" Mar 20 08:50:21.503184 master-0 kubenswrapper[27820]: I0320 08:50:21.503163 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-etc-kubernetes\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj" Mar 20 08:50:21.503329 master-0 kubenswrapper[27820]: I0320 08:50:21.503289 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65157a9b-3df7-4cc1-a85a-a5dfa59921ad-serving-cert\") pod \"openshift-kube-scheduler-operator-dddff6458-vmwqt\" (UID: \"65157a9b-3df7-4cc1-a85a-a5dfa59921ad\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-vmwqt" Mar 20 08:50:21.503438 master-0 kubenswrapper[27820]: I0320 08:50:21.503401 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/1746482a-d1a3-4eac-8bc9-643b6af75163-signing-cabundle\") pod \"service-ca-79bc6b8d76-trbxh\" (UID: \"1746482a-d1a3-4eac-8bc9-643b6af75163\") " pod="openshift-service-ca/service-ca-79bc6b8d76-trbxh" Mar 20 08:50:21.503491 
master-0 kubenswrapper[27820]: I0320 08:50:21.503442 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/bb7b640f-22be-41a9-8ab2-e7ae817e2eb0-ca-certs\") pod \"operator-controller-controller-manager-57777556ff-rnnfz\" (UID: \"bb7b640f-22be-41a9-8ab2-e7ae817e2eb0\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-rnnfz" Mar 20 08:50:21.503491 master-0 kubenswrapper[27820]: I0320 08:50:21.503462 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64d09f81-5fb6-462a-a736-5649779a6b1a-catalog-content\") pod \"redhat-marketplace-hj5tl\" (UID: \"64d09f81-5fb6-462a-a736-5649779a6b1a\") " pod="openshift-marketplace/redhat-marketplace-hj5tl" Mar 20 08:50:21.503491 master-0 kubenswrapper[27820]: I0320 08:50:21.503484 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/5707066a-bd66-41bc-8cea-cff1630ab5ee-telemetry-config\") pod \"cluster-monitoring-operator-58845fbb57-6vgt6\" (UID: \"5707066a-bd66-41bc-8cea-cff1630ab5ee\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-6vgt6" Mar 20 08:50:21.503600 master-0 kubenswrapper[27820]: I0320 08:50:21.503505 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ca56e37d-80ea-432b-a6d9-f4e904a40e10-node-pullsecrets\") pod \"apiserver-64b65cddf5-gx7h7\" (UID: \"ca56e37d-80ea-432b-a6d9-f4e904a40e10\") " pod="openshift-apiserver/apiserver-64b65cddf5-gx7h7" Mar 20 08:50:21.503600 master-0 kubenswrapper[27820]: I0320 08:50:21.503527 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: 
\"kubernetes.io/secret/a79bf8fb-19fb-4881-b9e3-b5b21fde0e1d-node-bootstrap-token\") pod \"machine-config-server-6bd59\" (UID: \"a79bf8fb-19fb-4881-b9e3-b5b21fde0e1d\") " pod="openshift-machine-config-operator/machine-config-server-6bd59" Mar 20 08:50:21.503600 master-0 kubenswrapper[27820]: I0320 08:50:21.503566 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56bt6\" (UniqueName: \"kubernetes.io/projected/f202273a-b111-46ce-b404-7e481d2c7ff9-kube-api-access-56bt6\") pod \"cluster-baremetal-operator-6f69995874-b25f2\" (UID: \"f202273a-b111-46ce-b404-7e481d2c7ff9\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-b25f2" Mar 20 08:50:21.503690 master-0 kubenswrapper[27820]: I0320 08:50:21.503621 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ef691ec-d1f0-4c97-97e4-4aa7a6c0a86c-config\") pod \"openshift-apiserver-operator-d65958b8-ntdqc\" (UID: \"2ef691ec-d1f0-4c97-97e4-4aa7a6c0a86c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-ntdqc" Mar 20 08:50:21.503690 master-0 kubenswrapper[27820]: I0320 08:50:21.503588 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9mbs\" (UniqueName: \"kubernetes.io/projected/6d62448d-55f1-4bdc-85aa-09e7bdf766cc-kube-api-access-n9mbs\") pod \"insights-operator-68bf6ff9d6-c7zf4\" (UID: \"6d62448d-55f1-4bdc-85aa-09e7bdf766cc\") " pod="openshift-insights/insights-operator-68bf6ff9d6-c7zf4" Mar 20 08:50:21.503755 master-0 kubenswrapper[27820]: I0320 08:50:21.503738 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777-etc-tuned\") pod \"tuned-zgm52\" (UID: \"97ad1db7-0bf9-4faf-9fa5-0f3df7dab777\") " pod="openshift-cluster-node-tuning-operator/tuned-zgm52" Mar 20 08:50:21.503901 master-0 
kubenswrapper[27820]: I0320 08:50:21.503870 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/08d9196b-b68f-421b-8754-bfbaa4020a97-cache\") pod \"catalogd-controller-manager-6864dc98f7-tf2gj\" (UID: \"08d9196b-b68f-421b-8754-bfbaa4020a97\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-tf2gj" Mar 20 08:50:21.503939 master-0 kubenswrapper[27820]: I0320 08:50:21.503916 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64d09f81-5fb6-462a-a736-5649779a6b1a-catalog-content\") pod \"redhat-marketplace-hj5tl\" (UID: \"64d09f81-5fb6-462a-a736-5649779a6b1a\") " pod="openshift-marketplace/redhat-marketplace-hj5tl" Mar 20 08:50:21.503988 master-0 kubenswrapper[27820]: I0320 08:50:21.503963 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dkgv\" (UniqueName: \"kubernetes.io/projected/5707066a-bd66-41bc-8cea-cff1630ab5ee-kube-api-access-2dkgv\") pod \"cluster-monitoring-operator-58845fbb57-6vgt6\" (UID: \"5707066a-bd66-41bc-8cea-cff1630ab5ee\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-6vgt6" Mar 20 08:50:21.504023 master-0 kubenswrapper[27820]: I0320 08:50:21.503992 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/71ca96e8-5108-455c-bb3c-17977d38e912-serving-cert\") pod \"kube-controller-manager-operator-ff989d6cc-wbfrm\" (UID: \"71ca96e8-5108-455c-bb3c-17977d38e912\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-wbfrm" Mar 20 08:50:21.504053 master-0 kubenswrapper[27820]: I0320 08:50:21.504039 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/9d653bfa-7168-49fa-a838-aedb33c7e60f-ovnkube-identity-cm\") pod 
\"network-node-identity-dq29v\" (UID: \"9d653bfa-7168-49fa-a838-aedb33c7e60f\") " pod="openshift-network-node-identity/network-node-identity-dq29v" Mar 20 08:50:21.504335 master-0 kubenswrapper[27820]: I0320 08:50:21.504305 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/5707066a-bd66-41bc-8cea-cff1630ab5ee-telemetry-config\") pod \"cluster-monitoring-operator-58845fbb57-6vgt6\" (UID: \"5707066a-bd66-41bc-8cea-cff1630ab5ee\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-6vgt6" Mar 20 08:50:21.504454 master-0 kubenswrapper[27820]: I0320 08:50:21.504384 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777-etc-kubernetes\") pod \"tuned-zgm52\" (UID: \"97ad1db7-0bf9-4faf-9fa5-0f3df7dab777\") " pod="openshift-cluster-node-tuning-operator/tuned-zgm52" Mar 20 08:50:21.504454 master-0 kubenswrapper[27820]: I0320 08:50:21.504417 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777-host\") pod \"tuned-zgm52\" (UID: \"97ad1db7-0bf9-4faf-9fa5-0f3df7dab777\") " pod="openshift-cluster-node-tuning-operator/tuned-zgm52" Mar 20 08:50:21.504550 master-0 kubenswrapper[27820]: I0320 08:50:21.504508 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-host-run-k8s-cni-cncf-io\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj" Mar 20 08:50:21.504621 master-0 kubenswrapper[27820]: I0320 08:50:21.504563 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-host-run-netns\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj" Mar 20 08:50:21.504621 master-0 kubenswrapper[27820]: I0320 08:50:21.504577 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777-etc-tuned\") pod \"tuned-zgm52\" (UID: \"97ad1db7-0bf9-4faf-9fa5-0f3df7dab777\") " pod="openshift-cluster-node-tuning-operator/tuned-zgm52" Mar 20 08:50:21.504621 master-0 kubenswrapper[27820]: I0320 08:50:21.504588 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ca56e37d-80ea-432b-a6d9-f4e904a40e10-etcd-client\") pod \"apiserver-64b65cddf5-gx7h7\" (UID: \"ca56e37d-80ea-432b-a6d9-f4e904a40e10\") " pod="openshift-apiserver/apiserver-64b65cddf5-gx7h7" Mar 20 08:50:21.504621 master-0 kubenswrapper[27820]: I0320 08:50:21.504612 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/9817d1ec-3d7c-49fb-8e41-26f5727ef9e8-host-etc-kube\") pod \"network-operator-7bd846bfc4-x4w25\" (UID: \"9817d1ec-3d7c-49fb-8e41-26f5727ef9e8\") " pod="openshift-network-operator/network-operator-7bd846bfc4-x4w25" Mar 20 08:50:21.504745 master-0 kubenswrapper[27820]: I0320 08:50:21.504664 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 20 08:50:21.504817 master-0 kubenswrapper[27820]: I0320 08:50:21.504795 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/9d653bfa-7168-49fa-a838-aedb33c7e60f-ovnkube-identity-cm\") pod \"network-node-identity-dq29v\" (UID: 
\"9d653bfa-7168-49fa-a838-aedb33c7e60f\") " pod="openshift-network-node-identity/network-node-identity-dq29v" Mar 20 08:50:21.505914 master-0 kubenswrapper[27820]: I0320 08:50:21.505804 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-ovnkube-config\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:50:21.505960 master-0 kubenswrapper[27820]: I0320 08:50:21.505922 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qqcw\" (UniqueName: \"kubernetes.io/projected/22f85e98-eb36-46b2-ab5d-7c21e060cba5-kube-api-access-8qqcw\") pod \"ingress-operator-66b84d69b-dknxr\" (UID: \"22f85e98-eb36-46b2-ab5d-7c21e060cba5\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-dknxr" Mar 20 08:50:21.506083 master-0 kubenswrapper[27820]: I0320 08:50:21.506047 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-host-run-multus-certs\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj" Mar 20 08:50:21.506138 master-0 kubenswrapper[27820]: I0320 08:50:21.506082 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 20 08:50:21.506186 master-0 kubenswrapper[27820]: I0320 08:50:21.506133 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-ovnkube-config\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:50:21.506186 master-0 kubenswrapper[27820]: I0320 08:50:21.506143 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777-run\") pod \"tuned-zgm52\" (UID: \"97ad1db7-0bf9-4faf-9fa5-0f3df7dab777\") " pod="openshift-cluster-node-tuning-operator/tuned-zgm52" Mar 20 08:50:21.506281 master-0 kubenswrapper[27820]: I0320 08:50:21.506208 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/6163bd4b-dc83-4e83-8590-5ac4753bda1c-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7dff898856-vk98n\" (UID: \"6163bd4b-dc83-4e83-8590-5ac4753bda1c\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vk98n" Mar 20 08:50:21.506281 master-0 kubenswrapper[27820]: I0320 08:50:21.506239 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcgqr\" (UniqueName: \"kubernetes.io/projected/acbaba45-12d9-40b9-818c-4b091d7929b1-kube-api-access-kcgqr\") pod \"csi-snapshot-controller-operator-5f5d689c6b-b5lg6\" (UID: \"acbaba45-12d9-40b9-818c-4b091d7929b1\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-b5lg6" Mar 20 08:50:21.506356 master-0 kubenswrapper[27820]: I0320 08:50:21.506316 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072-config\") pod \"authentication-operator-5885bfd7f4-tdpfq\" (UID: \"8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072\") " 
pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-tdpfq" Mar 20 08:50:21.506498 master-0 kubenswrapper[27820]: I0320 08:50:21.506460 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2d125bc5-08ce-434a-bde7-0ba8fc0169ea-cert\") pod \"cluster-autoscaler-operator-866dc4744-626qm\" (UID: \"2d125bc5-08ce-434a-bde7-0ba8fc0169ea\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-626qm" Mar 20 08:50:21.506533 master-0 kubenswrapper[27820]: I0320 08:50:21.506513 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072-config\") pod \"authentication-operator-5885bfd7f4-tdpfq\" (UID: \"8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-tdpfq" Mar 20 08:50:21.506565 master-0 kubenswrapper[27820]: I0320 08:50:21.506514 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 20 08:50:21.506594 master-0 kubenswrapper[27820]: I0320 08:50:21.506564 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14fe2256-ce3d-4c70-9e6b-0e3c7d4d96dc-config\") pod \"machine-approver-5c6485487f-897zl\" (UID: \"14fe2256-ce3d-4c70-9e6b-0e3c7d4d96dc\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-897zl" Mar 20 08:50:21.506634 master-0 kubenswrapper[27820]: I0320 08:50:21.506613 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-host-var-lib-cni-bin\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj" Mar 20 08:50:21.506668 master-0 kubenswrapper[27820]: I0320 08:50:21.506645 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-host-var-lib-cni-multus\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj" Mar 20 08:50:21.506697 master-0 kubenswrapper[27820]: I0320 08:50:21.506675 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 20 08:50:21.506734 master-0 kubenswrapper[27820]: I0320 08:50:21.506702 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9ce482dc-d0ac-40bc-9058-a1cfdc81575e-srv-cert\") pod \"catalog-operator-68f85b4d6c-hdw98\" (UID: \"9ce482dc-d0ac-40bc-9058-a1cfdc81575e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hdw98" Mar 20 08:50:21.506734 master-0 kubenswrapper[27820]: I0320 08:50:21.506728 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e89571b2-098c-495b-9b53-c4ebd95296ab-metrics-certs\") pod \"router-default-7dcf5569b5-kvmtp\" (UID: \"e89571b2-098c-495b-9b53-c4ebd95296ab\") " pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" Mar 20 08:50:21.506810 master-0 kubenswrapper[27820]: I0320 08:50:21.506754 27820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-multus-conf-dir\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj" Mar 20 08:50:21.506810 master-0 kubenswrapper[27820]: I0320 08:50:21.506783 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvxjl\" (UniqueName: \"kubernetes.io/projected/1ae4d0d7-67e6-4e0c-9265-8e48ac2d4cbf-kube-api-access-fvxjl\") pod \"node-resolver-j7ngf\" (UID: \"1ae4d0d7-67e6-4e0c-9265-8e48ac2d4cbf\") " pod="openshift-dns/node-resolver-j7ngf" Mar 20 08:50:21.506868 master-0 kubenswrapper[27820]: I0320 08:50:21.506811 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6a6a187d-5b25-4d63-939e-c04e07369371-etcd-serving-ca\") pod \"apiserver-5595498c49-hrfrr\" (UID: \"6a6a187d-5b25-4d63-939e-c04e07369371\") " pod="openshift-oauth-apiserver/apiserver-5595498c49-hrfrr" Mar 20 08:50:21.506868 master-0 kubenswrapper[27820]: I0320 08:50:21.506840 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a6a187d-5b25-4d63-939e-c04e07369371-trusted-ca-bundle\") pod \"apiserver-5595498c49-hrfrr\" (UID: \"6a6a187d-5b25-4d63-939e-c04e07369371\") " pod="openshift-oauth-apiserver/apiserver-5595498c49-hrfrr" Mar 20 08:50:21.506924 master-0 kubenswrapper[27820]: I0320 08:50:21.506865 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 20 08:50:21.506924 
master-0 kubenswrapper[27820]: I0320 08:50:21.506891 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/0f725c4a-234c-44e9-95f2-73f31d2b0fd3-rootfs\") pod \"machine-config-daemon-lxv4d\" (UID: \"0f725c4a-234c-44e9-95f2-73f31d2b0fd3\") " pod="openshift-machine-config-operator/machine-config-daemon-lxv4d" Mar 20 08:50:21.506924 master-0 kubenswrapper[27820]: I0320 08:50:21.506917 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072-service-ca-bundle\") pod \"authentication-operator-5885bfd7f4-tdpfq\" (UID: \"8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-tdpfq" Mar 20 08:50:21.507009 master-0 kubenswrapper[27820]: I0320 08:50:21.506944 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdqzn\" (UniqueName: \"kubernetes.io/projected/44bc88d8-9e01-4521-a704-85d9ca095baa-kube-api-access-hdqzn\") pod \"kube-state-metrics-7bbc969446-28l2x\" (UID: \"44bc88d8-9e01-4521-a704-85d9ca095baa\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-28l2x" Mar 20 08:50:21.507009 master-0 kubenswrapper[27820]: I0320 08:50:21.506971 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9j527\" (UniqueName: \"kubernetes.io/projected/9ce482dc-d0ac-40bc-9058-a1cfdc81575e-kube-api-access-9j527\") pod \"catalog-operator-68f85b4d6c-hdw98\" (UID: \"9ce482dc-d0ac-40bc-9058-a1cfdc81575e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hdw98" Mar 20 08:50:21.507009 master-0 kubenswrapper[27820]: I0320 08:50:21.506995 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-env-overrides\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:50:21.507090 master-0 kubenswrapper[27820]: I0320 08:50:21.507019 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9d653bfa-7168-49fa-a838-aedb33c7e60f-webhook-cert\") pod \"network-node-identity-dq29v\" (UID: \"9d653bfa-7168-49fa-a838-aedb33c7e60f\") " pod="openshift-network-node-identity/network-node-identity-dq29v" Mar 20 08:50:21.507090 master-0 kubenswrapper[27820]: I0320 08:50:21.507031 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9ce482dc-d0ac-40bc-9058-a1cfdc81575e-srv-cert\") pod \"catalog-operator-68f85b4d6c-hdw98\" (UID: \"9ce482dc-d0ac-40bc-9058-a1cfdc81575e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hdw98" Mar 20 08:50:21.507090 master-0 kubenswrapper[27820]: I0320 08:50:21.507043 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-data-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 20 08:50:21.507090 master-0 kubenswrapper[27820]: I0320 08:50:21.507082 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swxwt\" (UniqueName: \"kubernetes.io/projected/9817d1ec-3d7c-49fb-8e41-26f5727ef9e8-kube-api-access-swxwt\") pod \"network-operator-7bd846bfc4-x4w25\" (UID: \"9817d1ec-3d7c-49fb-8e41-26f5727ef9e8\") " pod="openshift-network-operator/network-operator-7bd846bfc4-x4w25" Mar 20 08:50:21.507286 master-0 kubenswrapper[27820]: I0320 08:50:21.507244 27820 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072-service-ca-bundle\") pod \"authentication-operator-5885bfd7f4-tdpfq\" (UID: \"8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-tdpfq" Mar 20 08:50:21.507476 master-0 kubenswrapper[27820]: I0320 08:50:21.507448 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-env-overrides\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:50:21.507513 master-0 kubenswrapper[27820]: I0320 08:50:21.507490 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 20 08:50:21.507544 master-0 kubenswrapper[27820]: I0320 08:50:21.507519 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/22ff82cf-0d7d-4955-9b7c-97757acbc021-system-cni-dir\") pod \"multus-additional-cni-plugins-x7vrg\" (UID: \"22ff82cf-0d7d-4955-9b7c-97757acbc021\") " pod="openshift-multus/multus-additional-cni-plugins-x7vrg" Mar 20 08:50:21.507574 master-0 kubenswrapper[27820]: I0320 08:50:21.507563 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/23003a2f-2053-47cc-8133-23eb886d4da0-marketplace-trusted-ca\") pod \"marketplace-operator-89ccd998f-j84r8\" (UID: \"23003a2f-2053-47cc-8133-23eb886d4da0\") " 
pod="openshift-marketplace/marketplace-operator-89ccd998f-j84r8" Mar 20 08:50:21.507606 master-0 kubenswrapper[27820]: I0320 08:50:21.507591 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2d125bc5-08ce-434a-bde7-0ba8fc0169ea-auth-proxy-config\") pod \"cluster-autoscaler-operator-866dc4744-626qm\" (UID: \"2d125bc5-08ce-434a-bde7-0ba8fc0169ea\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-626qm" Mar 20 08:50:21.507644 master-0 kubenswrapper[27820]: I0320 08:50:21.507614 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/44bc88d8-9e01-4521-a704-85d9ca095baa-metrics-client-ca\") pod \"kube-state-metrics-7bbc969446-28l2x\" (UID: \"44bc88d8-9e01-4521-a704-85d9ca095baa\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-28l2x" Mar 20 08:50:21.507689 master-0 kubenswrapper[27820]: I0320 08:50:21.507640 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a86af6a2-55a9-4c4e-8caf-1f51fedb23f5-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6f97756bc8-tkwh6\" (UID: \"a86af6a2-55a9-4c4e-8caf-1f51fedb23f5\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-tkwh6" Mar 20 08:50:21.507689 master-0 kubenswrapper[27820]: I0320 08:50:21.507666 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-run-systemd\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:50:21.507752 master-0 kubenswrapper[27820]: I0320 08:50:21.507693 27820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fec3170d-3f3e-42f5-b20a-da53721c0dac-config\") pod \"etcd-operator-8544cbcf9c-7x9vq\" (UID: \"fec3170d-3f3e-42f5-b20a-da53721c0dac\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-7x9vq" Mar 20 08:50:21.507752 master-0 kubenswrapper[27820]: I0320 08:50:21.507723 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmssd\" (UniqueName: \"kubernetes.io/projected/9635cdae-0983-4c97-b3ed-dc7a785b1bb6-kube-api-access-zmssd\") pod \"redhat-operators-bt7wn\" (UID: \"9635cdae-0983-4c97-b3ed-dc7a785b1bb6\") " pod="openshift-marketplace/redhat-operators-bt7wn" Mar 20 08:50:21.507752 master-0 kubenswrapper[27820]: I0320 08:50:21.507747 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6a6a187d-5b25-4d63-939e-c04e07369371-encryption-config\") pod \"apiserver-5595498c49-hrfrr\" (UID: \"6a6a187d-5b25-4d63-939e-c04e07369371\") " pod="openshift-oauth-apiserver/apiserver-5595498c49-hrfrr" Mar 20 08:50:21.507863 master-0 kubenswrapper[27820]: I0320 08:50:21.507771 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x82xz\" (UniqueName: \"kubernetes.io/projected/a86af6a2-55a9-4c4e-8caf-1f51fedb23f5-kube-api-access-x82xz\") pod \"control-plane-machine-set-operator-6f97756bc8-tkwh6\" (UID: \"a86af6a2-55a9-4c4e-8caf-1f51fedb23f5\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-tkwh6" Mar 20 08:50:21.507863 master-0 kubenswrapper[27820]: I0320 08:50:21.507794 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/08d9196b-b68f-421b-8754-bfbaa4020a97-ca-certs\") pod \"catalogd-controller-manager-6864dc98f7-tf2gj\" (UID: \"08d9196b-b68f-421b-8754-bfbaa4020a97\") " 
pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-tf2gj" Mar 20 08:50:21.507863 master-0 kubenswrapper[27820]: I0320 08:50:21.507818 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0f725c4a-234c-44e9-95f2-73f31d2b0fd3-proxy-tls\") pod \"machine-config-daemon-lxv4d\" (UID: \"0f725c4a-234c-44e9-95f2-73f31d2b0fd3\") " pod="openshift-machine-config-operator/machine-config-daemon-lxv4d" Mar 20 08:50:21.507863 master-0 kubenswrapper[27820]: I0320 08:50:21.507849 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrbnx\" (UniqueName: \"kubernetes.io/projected/56970553-2ac8-4cb5-a12a-b7c1e777c587-kube-api-access-zrbnx\") pod \"cluster-samples-operator-85f7577d78-26fw9\" (UID: \"56970553-2ac8-4cb5-a12a-b7c1e777c587\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-26fw9" Mar 20 08:50:21.507986 master-0 kubenswrapper[27820]: I0320 08:50:21.507871 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09a5682c-4f13-4b8c-8179-3e6dfa8f98db-serving-cert\") pod \"service-ca-operator-b865698dc-zxwrd\" (UID: \"09a5682c-4f13-4b8c-8179-3e6dfa8f98db\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-zxwrd" Mar 20 08:50:21.507986 master-0 kubenswrapper[27820]: I0320 08:50:21.507896 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/123f1ecb-cc03-462b-b76f-7251bf69d3d6-node-exporter-tls\") pod \"node-exporter-rzg98\" (UID: \"123f1ecb-cc03-462b-b76f-7251bf69d3d6\") " pod="openshift-monitoring/node-exporter-rzg98" Mar 20 08:50:21.507986 master-0 kubenswrapper[27820]: I0320 08:50:21.507918 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-host-slash\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:50:21.507986 master-0 kubenswrapper[27820]: I0320 08:50:21.507944 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lnpz\" (UniqueName: \"kubernetes.io/projected/0ad95adc-2e0f-4e95-94e7-66e6d240a930-kube-api-access-5lnpz\") pod \"prometheus-operator-6c8df6d4b-2lwqr\" (UID: \"0ad95adc-2e0f-4e95-94e7-66e6d240a930\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-2lwqr" Mar 20 08:50:21.507986 master-0 kubenswrapper[27820]: I0320 08:50:21.507972 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlgd7\" (UniqueName: \"kubernetes.io/projected/14fe2256-ce3d-4c70-9e6b-0e3c7d4d96dc-kube-api-access-hlgd7\") pod \"machine-approver-5c6485487f-897zl\" (UID: \"14fe2256-ce3d-4c70-9e6b-0e3c7d4d96dc\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-897zl" Mar 20 08:50:21.508124 master-0 kubenswrapper[27820]: I0320 08:50:21.507996 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/5707066a-bd66-41bc-8cea-cff1630ab5ee-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-6vgt6\" (UID: \"5707066a-bd66-41bc-8cea-cff1630ab5ee\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-6vgt6" Mar 20 08:50:21.508124 master-0 kubenswrapper[27820]: I0320 08:50:21.508033 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4c9b8a0c-1ead-49b5-8b05-68e6f25ddefc-cert\") pod \"ingress-canary-vzrlt\" (UID: \"4c9b8a0c-1ead-49b5-8b05-68e6f25ddefc\") " pod="openshift-ingress-canary/ingress-canary-vzrlt" Mar 20 08:50:21.508357 master-0 
kubenswrapper[27820]: I0320 08:50:21.508336 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09a5682c-4f13-4b8c-8179-3e6dfa8f98db-serving-cert\") pod \"service-ca-operator-b865698dc-zxwrd\" (UID: \"09a5682c-4f13-4b8c-8179-3e6dfa8f98db\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-zxwrd"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.508003 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/23003a2f-2053-47cc-8133-23eb886d4da0-marketplace-trusted-ca\") pod \"marketplace-operator-89ccd998f-j84r8\" (UID: \"23003a2f-2053-47cc-8133-23eb886d4da0\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-j84r8"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.509983 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fec3170d-3f3e-42f5-b20a-da53721c0dac-config\") pod \"etcd-operator-8544cbcf9c-7x9vq\" (UID: \"fec3170d-3f3e-42f5-b20a-da53721c0dac\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-7x9vq"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.510459 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/123f1ecb-cc03-462b-b76f-7251bf69d3d6-node-exporter-wtmp\") pod \"node-exporter-rzg98\" (UID: \"123f1ecb-cc03-462b-b76f-7251bf69d3d6\") " pod="openshift-monitoring/node-exporter-rzg98"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.510485 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/5707066a-bd66-41bc-8cea-cff1630ab5ee-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-58845fbb57-6vgt6\" (UID: \"5707066a-bd66-41bc-8cea-cff1630ab5ee\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-6vgt6"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.510523 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-host-cni-netd\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.510645 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmb9v\" (UniqueName: \"kubernetes.io/projected/2d125bc5-08ce-434a-bde7-0ba8fc0169ea-kube-api-access-hmb9v\") pod \"cluster-autoscaler-operator-866dc4744-626qm\" (UID: \"2d125bc5-08ce-434a-bde7-0ba8fc0169ea\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-626qm"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.510759 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.510950 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnk9k\" (UniqueName: \"kubernetes.io/projected/8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072-kube-api-access-hnk9k\") pod \"authentication-operator-5885bfd7f4-tdpfq\" (UID: \"8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-tdpfq"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.511001 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b6610936-e14a-4532-955c-ea1ee4222259-proxy-tls\") pod \"machine-config-operator-84d549f6d5-pkjcg\" (UID: \"b6610936-e14a-4532-955c-ea1ee4222259\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-pkjcg"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.511170 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/f202273a-b111-46ce-b404-7e481d2c7ff9-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-b25f2\" (UID: \"f202273a-b111-46ce-b404-7e481d2c7ff9\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-b25f2"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.511219 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/210dd7f0-d1c0-407a-b89b-f11ef605e5df-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57f769d897-crrdk\" (UID: \"210dd7f0-d1c0-407a-b89b-f11ef605e5df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-crrdk"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.511301 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-var-lib-openvswitch\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.511681 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/210dd7f0-d1c0-407a-b89b-f11ef605e5df-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57f769d897-crrdk\" (UID: \"210dd7f0-d1c0-407a-b89b-f11ef605e5df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-crrdk"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.511741 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-etc-openvswitch\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.511805 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r22fm\" (UniqueName: \"kubernetes.io/projected/0f725c4a-234c-44e9-95f2-73f31d2b0fd3-kube-api-access-r22fm\") pod \"machine-config-daemon-lxv4d\" (UID: \"0f725c4a-234c-44e9-95f2-73f31d2b0fd3\") " pod="openshift-machine-config-operator/machine-config-daemon-lxv4d"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.511903 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777-etc-systemd\") pod \"tuned-zgm52\" (UID: \"97ad1db7-0bf9-4faf-9fa5-0f3df7dab777\") " pod="openshift-cluster-node-tuning-operator/tuned-zgm52"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.511975 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/14fe2256-ce3d-4c70-9e6b-0e3c7d4d96dc-auth-proxy-config\") pod \"machine-approver-5c6485487f-897zl\" (UID: \"14fe2256-ce3d-4c70-9e6b-0e3c7d4d96dc\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-897zl"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.512021 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9635cdae-0983-4c97-b3ed-dc7a785b1bb6-utilities\") pod \"redhat-operators-bt7wn\" (UID: \"9635cdae-0983-4c97-b3ed-dc7a785b1bb6\") " pod="openshift-marketplace/redhat-operators-bt7wn"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.512196 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ca56e37d-80ea-432b-a6d9-f4e904a40e10-etcd-serving-ca\") pod \"apiserver-64b65cddf5-gx7h7\" (UID: \"ca56e37d-80ea-432b-a6d9-f4e904a40e10\") " pod="openshift-apiserver/apiserver-64b65cddf5-gx7h7"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.512238 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-resource-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.512302 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777-etc-sysctl-d\") pod \"tuned-zgm52\" (UID: \"97ad1db7-0bf9-4faf-9fa5-0f3df7dab777\") " pod="openshift-cluster-node-tuning-operator/tuned-zgm52"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.512349 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sglvd\" (UniqueName: \"kubernetes.io/projected/22ff82cf-0d7d-4955-9b7c-97757acbc021-kube-api-access-sglvd\") pod \"multus-additional-cni-plugins-x7vrg\" (UID: \"22ff82cf-0d7d-4955-9b7c-97757acbc021\") " pod="openshift-multus/multus-additional-cni-plugins-x7vrg"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.512388 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/123f1ecb-cc03-462b-b76f-7251bf69d3d6-node-exporter-textfile\") pod \"node-exporter-rzg98\" (UID: \"123f1ecb-cc03-462b-b76f-7251bf69d3d6\") " pod="openshift-monitoring/node-exporter-rzg98"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.512430 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncztx\" (UniqueName: \"kubernetes.io/projected/06bf3fa7-4a9c-4e7f-aa6b-4d4f614ea047-kube-api-access-ncztx\") pod \"network-check-source-b4bf74f6-nnjv9\" (UID: \"06bf3fa7-4a9c-4e7f-aa6b-4d4f614ea047\") " pod="openshift-network-diagnostics/network-check-source-b4bf74f6-nnjv9"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.512465 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/0ad95adc-2e0f-4e95-94e7-66e6d240a930-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-6c8df6d4b-2lwqr\" (UID: \"0ad95adc-2e0f-4e95-94e7-66e6d240a930\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-2lwqr"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.512510 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/22ff82cf-0d7d-4955-9b7c-97757acbc021-cnibin\") pod \"multus-additional-cni-plugins-x7vrg\" (UID: \"22ff82cf-0d7d-4955-9b7c-97757acbc021\") " pod="openshift-multus/multus-additional-cni-plugins-x7vrg"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.512569 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/581a8be2-d16c-4fd8-b051-214bd60a2a91-cco-trusted-ca\") pod \"cloud-credential-operator-744f9dbf77-6mrwl\" (UID: \"581a8be2-d16c-4fd8-b051-214bd60a2a91\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-6mrwl"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.512614 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/22f85e98-eb36-46b2-ab5d-7c21e060cba5-bound-sa-token\") pod \"ingress-operator-66b84d69b-dknxr\" (UID: \"22f85e98-eb36-46b2-ab5d-7c21e060cba5\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-dknxr"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.512647 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-os-release\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.512693 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/80ddf0a4-e853-4de0-b540-81144dfdd31d-machine-api-operator-tls\") pod \"machine-api-operator-6fbb6cf6f9-lr7tb\" (UID: \"80ddf0a4-e853-4de0-b540-81144dfdd31d\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-lr7tb"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.512738 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f92mb\" (UniqueName: \"kubernetes.io/projected/74bebf0b-6727-4959-8239-a9389e630524-kube-api-access-f92mb\") pod \"multus-admission-controller-5dbbb8b86f-vhrdf\" (UID: \"74bebf0b-6727-4959-8239-a9389e630524\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-vhrdf"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.512782 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/e9425526-9f51-4302-a19d-a8107f56c582-operand-assets\") pod \"cluster-olm-operator-67dcd4998-gwtrl\" (UID: \"e9425526-9f51-4302-a19d-a8107f56c582\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-gwtrl"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.512851 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.512889 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f6c819a-5074-4d29-84c8-e187528ad757-utilities\") pod \"certified-operators-clrp2\" (UID: \"4f6c819a-5074-4d29-84c8-e187528ad757\") " pod="openshift-marketplace/certified-operators-clrp2"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.512939 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e89571b2-098c-495b-9b53-c4ebd95296ab-service-ca-bundle\") pod \"router-default-7dcf5569b5-kvmtp\" (UID: \"e89571b2-098c-495b-9b53-c4ebd95296ab\") " pod="openshift-ingress/router-default-7dcf5569b5-kvmtp"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.512983 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/22ff82cf-0d7d-4955-9b7c-97757acbc021-cni-binary-copy\") pod \"multus-additional-cni-plugins-x7vrg\" (UID: \"22ff82cf-0d7d-4955-9b7c-97757acbc021\") " pod="openshift-multus/multus-additional-cni-plugins-x7vrg"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.513026 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f202273a-b111-46ce-b404-7e481d2c7ff9-config\") pod \"cluster-baremetal-operator-6f69995874-b25f2\" (UID: \"f202273a-b111-46ce-b404-7e481d2c7ff9\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-b25f2"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.513066 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5kbh\" (UniqueName: \"kubernetes.io/projected/e9425526-9f51-4302-a19d-a8107f56c582-kube-api-access-z5kbh\") pod \"cluster-olm-operator-67dcd4998-gwtrl\" (UID: \"e9425526-9f51-4302-a19d-a8107f56c582\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-gwtrl"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.513109 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/22ff82cf-0d7d-4955-9b7c-97757acbc021-os-release\") pod \"multus-additional-cni-plugins-x7vrg\" (UID: \"22ff82cf-0d7d-4955-9b7c-97757acbc021\") " pod="openshift-multus/multus-additional-cni-plugins-x7vrg"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.513153 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssmph\" (UniqueName: \"kubernetes.io/projected/581a8be2-d16c-4fd8-b051-214bd60a2a91-kube-api-access-ssmph\") pod \"cloud-credential-operator-744f9dbf77-6mrwl\" (UID: \"581a8be2-d16c-4fd8-b051-214bd60a2a91\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-6mrwl"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.513197 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ns97v\" (UniqueName: \"kubernetes.io/projected/e9c0293a-5340-4ebe-bc8f-43e78ba9f280-kube-api-access-ns97v\") pod \"cluster-storage-operator-7d87854d6-848gc\" (UID: \"e9c0293a-5340-4ebe-bc8f-43e78ba9f280\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-848gc"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.513239 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6a6a187d-5b25-4d63-939e-c04e07369371-audit-dir\") pod \"apiserver-5595498c49-hrfrr\" (UID: \"6a6a187d-5b25-4d63-939e-c04e07369371\") " pod="openshift-oauth-apiserver/apiserver-5595498c49-hrfrr"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.513309 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/bb7b640f-22be-41a9-8ab2-e7ae817e2eb0-etc-docker\") pod \"operator-controller-controller-manager-57777556ff-rnnfz\" (UID: \"bb7b640f-22be-41a9-8ab2-e7ae817e2eb0\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-rnnfz"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.513359 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nf5kc\" (UniqueName: \"kubernetes.io/projected/ca6e644f-c53b-41dd-a16f-9fb9997533dd-kube-api-access-nf5kc\") pod \"network-check-target-j9jjm\" (UID: \"ca6e644f-c53b-41dd-a16f-9fb9997533dd\") " pod="openshift-network-diagnostics/network-check-target-j9jjm"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.513406 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzprw\" (UniqueName: \"kubernetes.io/projected/61ab4d32-c732-4be5-aa85-a2e1dd21cb60-kube-api-access-lzprw\") pod \"openshift-controller-manager-operator-8c94f4649-w8c24\" (UID: \"61ab4d32-c732-4be5-aa85-a2e1dd21cb60\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-w8c24"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.513448 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmk45\" (UniqueName: \"kubernetes.io/projected/4c9b8a0c-1ead-49b5-8b05-68e6f25ddefc-kube-api-access-mmk45\") pod \"ingress-canary-vzrlt\" (UID: \"4c9b8a0c-1ead-49b5-8b05-68e6f25ddefc\") " pod="openshift-ingress-canary/ingress-canary-vzrlt"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.513483 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6d26f719-43b9-4c1c-9a54-ff800177db68-trusted-ca\") pod \"cluster-node-tuning-operator-598fbc5f8f-zxgdk\" (UID: \"6d26f719-43b9-4c1c-9a54-ff800177db68\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zxgdk"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.513527 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/08d9196b-b68f-421b-8754-bfbaa4020a97-catalogserver-certs\") pod \"catalogd-controller-manager-6864dc98f7-tf2gj\" (UID: \"08d9196b-b68f-421b-8754-bfbaa4020a97\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-tf2gj"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.513620 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d62448d-55f1-4bdc-85aa-09e7bdf766cc-service-ca-bundle\") pod \"insights-operator-68bf6ff9d6-c7zf4\" (UID: \"6d62448d-55f1-4bdc-85aa-09e7bdf766cc\") " pod="openshift-insights/insights-operator-68bf6ff9d6-c7zf4"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.513698 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/22ff82cf-0d7d-4955-9b7c-97757acbc021-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-x7vrg\" (UID: \"22ff82cf-0d7d-4955-9b7c-97757acbc021\") " pod="openshift-multus/multus-additional-cni-plugins-x7vrg"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.513788 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-cni-binary-copy\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.513867 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-log-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.513954 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqgkl\" (UniqueName: \"kubernetes.io/projected/a2a3df6e-e327-4e97-b8f0-f2d6cdd1e5f9-kube-api-access-rqgkl\") pod \"csi-snapshot-controller-64854d9cff-gng67\" (UID: \"a2a3df6e-e327-4e97-b8f0-f2d6cdd1e5f9\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-gng67"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.514026 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/36f4a012744c6465102d09cc67ac63e6-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"36f4a012744c6465102d09cc67ac63e6\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.514071 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.514114 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/6d26f719-43b9-4c1c-9a54-ff800177db68-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-zxgdk\" (UID: \"6d26f719-43b9-4c1c-9a54-ff800177db68\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zxgdk"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.514157 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777-tmp\") pod \"tuned-zgm52\" (UID: \"97ad1db7-0bf9-4faf-9fa5-0f3df7dab777\") " pod="openshift-cluster-node-tuning-operator/tuned-zgm52"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.514202 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xv94\" (UniqueName: \"kubernetes.io/projected/09a5682c-4f13-4b8c-8179-3e6dfa8f98db-kube-api-access-8xv94\") pod \"service-ca-operator-b865698dc-zxwrd\" (UID: \"09a5682c-4f13-4b8c-8179-3e6dfa8f98db\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-zxwrd"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.514245 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5hsj\" (UniqueName: \"kubernetes.io/projected/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-kube-api-access-j5hsj\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.514368 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/41253bde-5d09-4ff0-8e7c-4a21fe2b7106-config-volume\") pod \"dns-default-gskz6\" (UID: \"41253bde-5d09-4ff0-8e7c-4a21fe2b7106\") " pod="openshift-dns/dns-default-gskz6"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.514408 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/e9425526-9f51-4302-a19d-a8107f56c582-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-67dcd4998-gwtrl\" (UID: \"e9425526-9f51-4302-a19d-a8107f56c582\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-gwtrl"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.514453 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9817d1ec-3d7c-49fb-8e41-26f5727ef9e8-metrics-tls\") pod \"network-operator-7bd846bfc4-x4w25\" (UID: \"9817d1ec-3d7c-49fb-8e41-26f5727ef9e8\") " pod="openshift-network-operator/network-operator-7bd846bfc4-x4w25"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.514496 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/e89571b2-098c-495b-9b53-c4ebd95296ab-default-certificate\") pod \"router-default-7dcf5569b5-kvmtp\" (UID: \"e89571b2-098c-495b-9b53-c4ebd95296ab\") " pod="openshift-ingress/router-default-7dcf5569b5-kvmtp"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.514539 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-multus-daemon-config\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.514576 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f6a6e991-c861-48f5-bfde-78762a037343-mcc-auth-proxy-config\") pod \"machine-config-controller-b4f87c5b9-9fl6v\" (UID: \"f6a6e991-c861-48f5-bfde-78762a037343\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-9fl6v"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.514620 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2j6m\" (UniqueName: \"kubernetes.io/projected/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-kube-api-access-s2j6m\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.514666 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/22ff82cf-0d7d-4955-9b7c-97757acbc021-tuning-conf-dir\") pod \"multus-additional-cni-plugins-x7vrg\" (UID: \"22ff82cf-0d7d-4955-9b7c-97757acbc021\") " pod="openshift-multus/multus-additional-cni-plugins-x7vrg"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.514706 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/bca4cc7c-839d-4877-b0aa-c07607fea404-etc-ssl-certs\") pod \"cluster-version-operator-7d58488df-bzstx\" (UID: \"bca4cc7c-839d-4877-b0aa-c07607fea404\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-bzstx"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.514754 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/fec3170d-3f3e-42f5-b20a-da53721c0dac-etcd-ca\") pod \"etcd-operator-8544cbcf9c-7x9vq\" (UID: \"fec3170d-3f3e-42f5-b20a-da53721c0dac\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-7x9vq"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.514792 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5v7l\" (UniqueName: \"kubernetes.io/projected/20ff930f-ec0d-40ed-a879-1546691f685d-kube-api-access-d5v7l\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-kxxrs\" (UID: \"20ff930f-ec0d-40ed-a879-1546691f685d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-kxxrs"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.514832 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ca56e37d-80ea-432b-a6d9-f4e904a40e10-audit-dir\") pod \"apiserver-64b65cddf5-gx7h7\" (UID: \"ca56e37d-80ea-432b-a6d9-f4e904a40e10\") " pod="openshift-apiserver/apiserver-64b65cddf5-gx7h7"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.514874 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/210dd7f0-d1c0-407a-b89b-f11ef605e5df-env-overrides\") pod \"ovnkube-control-plane-57f769d897-crrdk\" (UID: \"210dd7f0-d1c0-407a-b89b-f11ef605e5df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-crrdk"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.514911 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-host-cni-bin\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.514947 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bca4cc7c-839d-4877-b0aa-c07607fea404-service-ca\") pod \"cluster-version-operator-7d58488df-bzstx\" (UID: \"bca4cc7c-839d-4877-b0aa-c07607fea404\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-bzstx"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.515022 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b43744de-5bc3-4f1d-91a4-c54e2a3a7ffc-utilities\") pod \"community-operators-chfj7\" (UID: \"b43744de-5bc3-4f1d-91a4-c54e2a3a7ffc\") " pod="openshift-marketplace/community-operators-chfj7"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.515107 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsj2w\" (UniqueName: \"kubernetes.io/projected/ff2dfe9d-2834-43cb-b093-0831b2b87131-kube-api-access-zsj2w\") pod \"dns-operator-9c5679d8f-xfns6\" (UID: \"ff2dfe9d-2834-43cb-b093-0831b2b87131\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-xfns6"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.515157 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpr8b\" (UniqueName: \"kubernetes.io/projected/3065e4b4-4493-41ce-b9d2-89315475f74f-kube-api-access-wpr8b\") pod \"openshift-config-operator-95bf4f4d-25jrp\" (UID: \"3065e4b4-4493-41ce-b9d2-89315475f74f\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-25jrp"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.515230 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-run-ovn\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.515330 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0cb6d987-4b59-4fd9-889a-3250c12a726c-apiservice-cert\") pod \"packageserver-6f5545c99f-6sl9d\" (UID: \"0cb6d987-4b59-4fd9-889a-3250c12a726c\") " pod="openshift-operator-lifecycle-manager/packageserver-6f5545c99f-6sl9d"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.515425 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bca4cc7c-839d-4877-b0aa-c07607fea404-kube-api-access\") pod \"cluster-version-operator-7d58488df-bzstx\" (UID: \"bca4cc7c-839d-4877-b0aa-c07607fea404\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-bzstx"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.515507 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64d09f81-5fb6-462a-a736-5649779a6b1a-utilities\") pod \"redhat-marketplace-hj5tl\" (UID: \"64d09f81-5fb6-462a-a736-5649779a6b1a\") " pod="openshift-marketplace/redhat-marketplace-hj5tl"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.515586 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ca56e37d-80ea-432b-a6d9-f4e904a40e10-image-import-ca\") pod \"apiserver-64b65cddf5-gx7h7\" (UID: \"ca56e37d-80ea-432b-a6d9-f4e904a40e10\") " pod="openshift-apiserver/apiserver-64b65cddf5-gx7h7"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.515628 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9635cdae-0983-4c97-b3ed-dc7a785b1bb6-utilities\") pod \"redhat-operators-bt7wn\" (UID: \"9635cdae-0983-4c97-b3ed-dc7a785b1bb6\") " pod="openshift-marketplace/redhat-operators-bt7wn"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.515663 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.515793 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dkqm\" (UniqueName: \"kubernetes.io/projected/1746482a-d1a3-4eac-8bc9-643b6af75163-kube-api-access-2dkqm\") pod \"service-ca-79bc6b8d76-trbxh\" (UID: \"1746482a-d1a3-4eac-8bc9-643b6af75163\") " pod="openshift-service-ca/service-ca-79bc6b8d76-trbxh"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.515901 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdsv9\" (UniqueName: \"kubernetes.io/projected/0e79950f-50a5-46ec-b836-7a35dcce2851-kube-api-access-rdsv9\") pod \"package-server-manager-7b95f86987-cgc9q\" (UID: \"0e79950f-50a5-46ec-b836-7a35dcce2851\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-cgc9q"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.515982 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/f202273a-b111-46ce-b404-7e481d2c7ff9-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-6f69995874-b25f2\" (UID: \"f202273a-b111-46ce-b404-7e481d2c7ff9\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-b25f2"
Mar 20 08:50:21.518305 master-0 kubenswrapper[27820]: I0320 08:50:21.515986 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/bca4cc7c-839d-4877-b0aa-c07607fea404-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7d58488df-bzstx\" (UID: \"bca4cc7c-839d-4877-b0aa-c07607fea404\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-bzstx"
Mar 20 08:50:21.522756 master-0 kubenswrapper[27820]: I0320 08:50:21.518494 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-cnibin\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj"
Mar 20 08:50:21.522756 master-0 kubenswrapper[27820]: I0320 08:50:21.519001 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/e9425526-9f51-4302-a19d-a8107f56c582-operand-assets\") pod \"cluster-olm-operator-67dcd4998-gwtrl\" (UID: \"e9425526-9f51-4302-a19d-a8107f56c582\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-gwtrl"
Mar 20 08:50:21.522756 master-0 kubenswrapper[27820]: I0320 08:50:21.519381 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6d26f719-43b9-4c1c-9a54-ff800177db68-trusted-ca\") pod \"cluster-node-tuning-operator-598fbc5f8f-zxgdk\" (UID: \"6d26f719-43b9-4c1c-9a54-ff800177db68\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zxgdk"
Mar 20 08:50:21.522756 master-0 kubenswrapper[27820]: I0320 08:50:21.519427 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f6c819a-5074-4d29-84c8-e187528ad757-utilities\") pod \"certified-operators-clrp2\" (UID: \"4f6c819a-5074-4d29-84c8-e187528ad757\") " pod="openshift-marketplace/certified-operators-clrp2"
Mar 20 08:50:21.522756 master-0 kubenswrapper[27820]: I0320 08:50:21.519790 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/22ff82cf-0d7d-4955-9b7c-97757acbc021-cni-binary-copy\") pod \"multus-additional-cni-plugins-x7vrg\" (UID: \"22ff82cf-0d7d-4955-9b7c-97757acbc021\") " pod="openshift-multus/multus-additional-cni-plugins-x7vrg"
Mar 20 08:50:21.522756 master-0 kubenswrapper[27820]: I0320 08:50:21.520147 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-cni-binary-copy\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj"
Mar 20 08:50:21.522756 master-0 kubenswrapper[27820]: I0320 08:50:21.520777 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9817d1ec-3d7c-49fb-8e41-26f5727ef9e8-metrics-tls\") pod 
\"network-operator-7bd846bfc4-x4w25\" (UID: \"9817d1ec-3d7c-49fb-8e41-26f5727ef9e8\") " pod="openshift-network-operator/network-operator-7bd846bfc4-x4w25" Mar 20 08:50:21.522756 master-0 kubenswrapper[27820]: I0320 08:50:21.520864 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/e9425526-9f51-4302-a19d-a8107f56c582-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-67dcd4998-gwtrl\" (UID: \"e9425526-9f51-4302-a19d-a8107f56c582\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-gwtrl" Mar 20 08:50:21.522756 master-0 kubenswrapper[27820]: I0320 08:50:21.520916 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64d09f81-5fb6-462a-a736-5649779a6b1a-utilities\") pod \"redhat-marketplace-hj5tl\" (UID: \"64d09f81-5fb6-462a-a736-5649779a6b1a\") " pod="openshift-marketplace/redhat-marketplace-hj5tl" Mar 20 08:50:21.522756 master-0 kubenswrapper[27820]: E0320 08:50:21.521055 27820 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0\" already exists" pod="openshift-etcd/etcd-master-0" Mar 20 08:50:21.522756 master-0 kubenswrapper[27820]: I0320 08:50:21.521202 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/22ff82cf-0d7d-4955-9b7c-97757acbc021-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-x7vrg\" (UID: \"22ff82cf-0d7d-4955-9b7c-97757acbc021\") " pod="openshift-multus/multus-additional-cni-plugins-x7vrg" Mar 20 08:50:21.522756 master-0 kubenswrapper[27820]: I0320 08:50:21.521652 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b43744de-5bc3-4f1d-91a4-c54e2a3a7ffc-utilities\") pod \"community-operators-chfj7\" (UID: \"b43744de-5bc3-4f1d-91a4-c54e2a3a7ffc\") " 
pod="openshift-marketplace/community-operators-chfj7" Mar 20 08:50:21.522756 master-0 kubenswrapper[27820]: I0320 08:50:21.521693 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/210dd7f0-d1c0-407a-b89b-f11ef605e5df-env-overrides\") pod \"ovnkube-control-plane-57f769d897-crrdk\" (UID: \"210dd7f0-d1c0-407a-b89b-f11ef605e5df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-crrdk" Mar 20 08:50:21.522756 master-0 kubenswrapper[27820]: I0320 08:50:21.521863 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777-tmp\") pod \"tuned-zgm52\" (UID: \"97ad1db7-0bf9-4faf-9fa5-0f3df7dab777\") " pod="openshift-cluster-node-tuning-operator/tuned-zgm52" Mar 20 08:50:21.522756 master-0 kubenswrapper[27820]: I0320 08:50:21.522433 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/6d26f719-43b9-4c1c-9a54-ff800177db68-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-598fbc5f8f-zxgdk\" (UID: \"6d26f719-43b9-4c1c-9a54-ff800177db68\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zxgdk" Mar 20 08:50:21.522756 master-0 kubenswrapper[27820]: I0320 08:50:21.522519 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f202273a-b111-46ce-b404-7e481d2c7ff9-config\") pod \"cluster-baremetal-operator-6f69995874-b25f2\" (UID: \"f202273a-b111-46ce-b404-7e481d2c7ff9\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-b25f2" Mar 20 08:50:21.522756 master-0 kubenswrapper[27820]: I0320 08:50:21.522662 27820 scope.go:117] "RemoveContainer" containerID="11cabf5eb98c82a93ca52ddd09d57840189a4d526c1e14f354cfdcf56ad250fe" Mar 20 08:50:21.523482 master-0 kubenswrapper[27820]: I0320 
08:50:21.522806 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/123f1ecb-cc03-462b-b76f-7251bf69d3d6-node-exporter-textfile\") pod \"node-exporter-rzg98\" (UID: \"123f1ecb-cc03-462b-b76f-7251bf69d3d6\") " pod="openshift-monitoring/node-exporter-rzg98" Mar 20 08:50:21.523482 master-0 kubenswrapper[27820]: I0320 08:50:21.523085 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-multus-daemon-config\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj" Mar 20 08:50:21.523482 master-0 kubenswrapper[27820]: E0320 08:50:21.523228 27820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11cabf5eb98c82a93ca52ddd09d57840189a4d526c1e14f354cfdcf56ad250fe\": container with ID starting with 11cabf5eb98c82a93ca52ddd09d57840189a4d526c1e14f354cfdcf56ad250fe not found: ID does not exist" containerID="11cabf5eb98c82a93ca52ddd09d57840189a4d526c1e14f354cfdcf56ad250fe" Mar 20 08:50:21.523682 master-0 kubenswrapper[27820]: I0320 08:50:21.523296 27820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11cabf5eb98c82a93ca52ddd09d57840189a4d526c1e14f354cfdcf56ad250fe"} err="failed to get container status \"11cabf5eb98c82a93ca52ddd09d57840189a4d526c1e14f354cfdcf56ad250fe\": rpc error: code = NotFound desc = could not find container \"11cabf5eb98c82a93ca52ddd09d57840189a4d526c1e14f354cfdcf56ad250fe\": container with ID starting with 11cabf5eb98c82a93ca52ddd09d57840189a4d526c1e14f354cfdcf56ad250fe not found: ID does not exist" Mar 20 08:50:21.524302 master-0 kubenswrapper[27820]: I0320 08:50:21.523766 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: 
\"kubernetes.io/configmap/fec3170d-3f3e-42f5-b20a-da53721c0dac-etcd-ca\") pod \"etcd-operator-8544cbcf9c-7x9vq\" (UID: \"fec3170d-3f3e-42f5-b20a-da53721c0dac\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-7x9vq" Mar 20 08:50:21.528219 master-0 kubenswrapper[27820]: I0320 08:50:21.526020 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-log-socket\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:50:21.528219 master-0 kubenswrapper[27820]: I0320 08:50:21.526105 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/14fe2256-ce3d-4c70-9e6b-0e3c7d4d96dc-machine-approver-tls\") pod \"machine-approver-5c6485487f-897zl\" (UID: \"14fe2256-ce3d-4c70-9e6b-0e3c7d4d96dc\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-897zl" Mar 20 08:50:21.528219 master-0 kubenswrapper[27820]: I0320 08:50:21.526131 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072-trusted-ca-bundle\") pod \"authentication-operator-5885bfd7f4-tdpfq\" (UID: \"8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-tdpfq" Mar 20 08:50:21.528219 master-0 kubenswrapper[27820]: I0320 08:50:21.526176 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5wnd\" (UniqueName: \"kubernetes.io/projected/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777-kube-api-access-w5wnd\") pod \"tuned-zgm52\" (UID: \"97ad1db7-0bf9-4faf-9fa5-0f3df7dab777\") " pod="openshift-cluster-node-tuning-operator/tuned-zgm52" Mar 20 08:50:21.528219 master-0 
kubenswrapper[27820]: I0320 08:50:21.526205 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dx99f\" (UniqueName: \"kubernetes.io/projected/b097596e-79e1-44d1-be8a-96340042a041-kube-api-access-dx99f\") pod \"iptables-alerter-9xlf2\" (UID: \"b097596e-79e1-44d1-be8a-96340042a041\") " pod="openshift-network-operator/iptables-alerter-9xlf2" Mar 20 08:50:21.528219 master-0 kubenswrapper[27820]: I0320 08:50:21.526238 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-multus-socket-dir-parent\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj" Mar 20 08:50:21.528219 master-0 kubenswrapper[27820]: I0320 08:50:21.526279 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/a79bf8fb-19fb-4881-b9e3-b5b21fde0e1d-certs\") pod \"machine-config-server-6bd59\" (UID: \"a79bf8fb-19fb-4881-b9e3-b5b21fde0e1d\") " pod="openshift-machine-config-operator/machine-config-server-6bd59" Mar 20 08:50:21.528219 master-0 kubenswrapper[27820]: I0320 08:50:21.526311 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/14ef046f-b284-457f-ad7a-b7958cb82dd5-tls-certificates\") pod \"prometheus-operator-admission-webhook-69c6b55594-kh8bg\" (UID: \"14ef046f-b284-457f-ad7a-b7958cb82dd5\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-kh8bg" Mar 20 08:50:21.528219 master-0 kubenswrapper[27820]: I0320 08:50:21.526351 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/22f85e98-eb36-46b2-ab5d-7c21e060cba5-trusted-ca\") pod \"ingress-operator-66b84d69b-dknxr\" (UID: 
\"22f85e98-eb36-46b2-ab5d-7c21e060cba5\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-dknxr" Mar 20 08:50:21.528219 master-0 kubenswrapper[27820]: I0320 08:50:21.526388 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fec3170d-3f3e-42f5-b20a-da53721c0dac-serving-cert\") pod \"etcd-operator-8544cbcf9c-7x9vq\" (UID: \"fec3170d-3f3e-42f5-b20a-da53721c0dac\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-7x9vq" Mar 20 08:50:21.528219 master-0 kubenswrapper[27820]: I0320 08:50:21.526420 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/65157a9b-3df7-4cc1-a85a-a5dfa59921ad-kube-api-access\") pod \"openshift-kube-scheduler-operator-dddff6458-vmwqt\" (UID: \"65157a9b-3df7-4cc1-a85a-a5dfa59921ad\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-vmwqt" Mar 20 08:50:21.528219 master-0 kubenswrapper[27820]: I0320 08:50:21.526441 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-run-openvswitch\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:50:21.528219 master-0 kubenswrapper[27820]: I0320 08:50:21.526468 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777-etc-sysctl-conf\") pod \"tuned-zgm52\" (UID: \"97ad1db7-0bf9-4faf-9fa5-0f3df7dab777\") " pod="openshift-cluster-node-tuning-operator/tuned-zgm52" Mar 20 08:50:21.528219 master-0 kubenswrapper[27820]: I0320 08:50:21.526495 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/08d9196b-b68f-421b-8754-bfbaa4020a97-etc-containers\") pod \"catalogd-controller-manager-6864dc98f7-tf2gj\" (UID: \"08d9196b-b68f-421b-8754-bfbaa4020a97\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-tf2gj" Mar 20 08:50:21.528219 master-0 kubenswrapper[27820]: I0320 08:50:21.526519 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8413125cf444e5c95f023c5dd9c6151e-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8413125cf444e5c95f023c5dd9c6151e\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 20 08:50:21.528219 master-0 kubenswrapper[27820]: I0320 08:50:21.526545 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/80ddf0a4-e853-4de0-b540-81144dfdd31d-images\") pod \"machine-api-operator-6fbb6cf6f9-lr7tb\" (UID: \"80ddf0a4-e853-4de0-b540-81144dfdd31d\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-lr7tb" Mar 20 08:50:21.528219 master-0 kubenswrapper[27820]: I0320 08:50:21.526571 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f202273a-b111-46ce-b404-7e481d2c7ff9-images\") pod \"cluster-baremetal-operator-6f69995874-b25f2\" (UID: \"f202273a-b111-46ce-b404-7e481d2c7ff9\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-b25f2" Mar 20 08:50:21.528219 master-0 kubenswrapper[27820]: I0320 08:50:21.526599 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/0e79950f-50a5-46ec-b836-7a35dcce2851-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-cgc9q\" (UID: \"0e79950f-50a5-46ec-b836-7a35dcce2851\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-cgc9q" Mar 20 08:50:21.528219 master-0 kubenswrapper[27820]: I0320 08:50:21.526631 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/581a8be2-d16c-4fd8-b051-214bd60a2a91-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-744f9dbf77-6mrwl\" (UID: \"581a8be2-d16c-4fd8-b051-214bd60a2a91\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-6mrwl" Mar 20 08:50:21.528219 master-0 kubenswrapper[27820]: I0320 08:50:21.527132 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-host-var-lib-kubelet\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj" Mar 20 08:50:21.528219 master-0 kubenswrapper[27820]: I0320 08:50:21.527242 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/e9c0293a-5340-4ebe-bc8f-43e78ba9f280-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-7d87854d6-848gc\" (UID: \"e9c0293a-5340-4ebe-bc8f-43e78ba9f280\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-848gc" Mar 20 08:50:21.528219 master-0 kubenswrapper[27820]: I0320 08:50:21.527318 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/123f1ecb-cc03-462b-b76f-7251bf69d3d6-root\") pod \"node-exporter-rzg98\" (UID: \"123f1ecb-cc03-462b-b76f-7251bf69d3d6\") " pod="openshift-monitoring/node-exporter-rzg98" Mar 20 08:50:21.528219 master-0 kubenswrapper[27820]: I0320 08:50:21.527385 27820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-8jmlf\" (UniqueName: \"kubernetes.io/projected/9d653bfa-7168-49fa-a838-aedb33c7e60f-kube-api-access-8jmlf\") pod \"network-node-identity-dq29v\" (UID: \"9d653bfa-7168-49fa-a838-aedb33c7e60f\") " pod="openshift-network-node-identity/network-node-identity-dq29v" Mar 20 08:50:21.528219 master-0 kubenswrapper[27820]: I0320 08:50:21.527432 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/22f85e98-eb36-46b2-ab5d-7c21e060cba5-metrics-tls\") pod \"ingress-operator-66b84d69b-dknxr\" (UID: \"22f85e98-eb36-46b2-ab5d-7c21e060cba5\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-dknxr" Mar 20 08:50:21.528219 master-0 kubenswrapper[27820]: I0320 08:50:21.527452 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072-trusted-ca-bundle\") pod \"authentication-operator-5885bfd7f4-tdpfq\" (UID: \"8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-tdpfq" Mar 20 08:50:21.528219 master-0 kubenswrapper[27820]: I0320 08:50:21.527483 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6163bd4b-dc83-4e83-8590-5ac4753bda1c-images\") pod \"cluster-cloud-controller-manager-operator-7dff898856-vk98n\" (UID: \"6163bd4b-dc83-4e83-8590-5ac4753bda1c\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vk98n" Mar 20 08:50:21.528219 master-0 kubenswrapper[27820]: I0320 08:50:21.527502 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fec3170d-3f3e-42f5-b20a-da53721c0dac-serving-cert\") pod \"etcd-operator-8544cbcf9c-7x9vq\" (UID: 
\"fec3170d-3f3e-42f5-b20a-da53721c0dac\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-7x9vq" Mar 20 08:50:21.528219 master-0 kubenswrapper[27820]: I0320 08:50:21.527530 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-node-log\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:50:21.528219 master-0 kubenswrapper[27820]: I0320 08:50:21.527583 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f6a6e991-c861-48f5-bfde-78762a037343-proxy-tls\") pod \"machine-config-controller-b4f87c5b9-9fl6v\" (UID: \"f6a6e991-c861-48f5-bfde-78762a037343\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-9fl6v" Mar 20 08:50:21.528219 master-0 kubenswrapper[27820]: I0320 08:50:21.527637 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ca56e37d-80ea-432b-a6d9-f4e904a40e10-encryption-config\") pod \"apiserver-64b65cddf5-gx7h7\" (UID: \"ca56e37d-80ea-432b-a6d9-f4e904a40e10\") " pod="openshift-apiserver/apiserver-64b65cddf5-gx7h7" Mar 20 08:50:21.528219 master-0 kubenswrapper[27820]: I0320 08:50:21.527994 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 20 08:50:21.529962 master-0 kubenswrapper[27820]: I0320 08:50:21.528528 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/0e79950f-50a5-46ec-b836-7a35dcce2851-package-server-manager-serving-cert\") pod \"package-server-manager-7b95f86987-cgc9q\" (UID: \"0e79950f-50a5-46ec-b836-7a35dcce2851\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-cgc9q" Mar 20 08:50:21.529962 master-0 kubenswrapper[27820]: I0320 08:50:21.528574 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777-etc-modprobe-d\") pod \"tuned-zgm52\" (UID: \"97ad1db7-0bf9-4faf-9fa5-0f3df7dab777\") " pod="openshift-cluster-node-tuning-operator/tuned-zgm52" Mar 20 08:50:21.529962 master-0 kubenswrapper[27820]: I0320 08:50:21.528598 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b6610936-e14a-4532-955c-ea1ee4222259-auth-proxy-config\") pod \"machine-config-operator-84d549f6d5-pkjcg\" (UID: \"b6610936-e14a-4532-955c-ea1ee4222259\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-pkjcg" Mar 20 08:50:21.529962 master-0 kubenswrapper[27820]: I0320 08:50:21.528627 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/44bc88d8-9e01-4521-a704-85d9ca095baa-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7bbc969446-28l2x\" (UID: \"44bc88d8-9e01-4521-a704-85d9ca095baa\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-28l2x" Mar 20 08:50:21.529962 master-0 kubenswrapper[27820]: I0320 08:50:21.528694 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6a6a187d-5b25-4d63-939e-c04e07369371-audit-policies\") pod \"apiserver-5595498c49-hrfrr\" (UID: \"6a6a187d-5b25-4d63-939e-c04e07369371\") " pod="openshift-oauth-apiserver/apiserver-5595498c49-hrfrr" Mar 20 08:50:21.529962 master-0 kubenswrapper[27820]: I0320 08:50:21.528722 27820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-mm9l9\" (UniqueName: \"kubernetes.io/projected/4f6c819a-5074-4d29-84c8-e187528ad757-kube-api-access-mm9l9\") pod \"certified-operators-clrp2\" (UID: \"4f6c819a-5074-4d29-84c8-e187528ad757\") " pod="openshift-marketplace/certified-operators-clrp2" Mar 20 08:50:21.529962 master-0 kubenswrapper[27820]: I0320 08:50:21.528745 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2faf85a2-29bb-4275-a12b-0ef1663a4f0d-serving-cert\") pod \"kube-apiserver-operator-8b68b9d9b-xwkzx\" (UID: \"2faf85a2-29bb-4275-a12b-0ef1663a4f0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-xwkzx" Mar 20 08:50:21.529962 master-0 kubenswrapper[27820]: I0320 08:50:21.528770 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgffp\" (UniqueName: \"kubernetes.io/projected/80ddf0a4-e853-4de0-b540-81144dfdd31d-kube-api-access-pgffp\") pod \"machine-api-operator-6fbb6cf6f9-lr7tb\" (UID: \"80ddf0a4-e853-4de0-b540-81144dfdd31d\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-lr7tb" Mar 20 08:50:21.529962 master-0 kubenswrapper[27820]: I0320 08:50:21.528795 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/3065e4b4-4493-41ce-b9d2-89315475f74f-available-featuregates\") pod \"openshift-config-operator-95bf4f4d-25jrp\" (UID: \"3065e4b4-4493-41ce-b9d2-89315475f74f\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-25jrp" Mar 20 08:50:21.529962 master-0 kubenswrapper[27820]: I0320 08:50:21.528925 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/210dd7f0-d1c0-407a-b89b-f11ef605e5df-ovnkube-config\") pod \"ovnkube-control-plane-57f769d897-crrdk\" (UID: 
\"210dd7f0-d1c0-407a-b89b-f11ef605e5df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-crrdk" Mar 20 08:50:21.529962 master-0 kubenswrapper[27820]: I0320 08:50:21.528952 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777-lib-modules\") pod \"tuned-zgm52\" (UID: \"97ad1db7-0bf9-4faf-9fa5-0f3df7dab777\") " pod="openshift-cluster-node-tuning-operator/tuned-zgm52" Mar 20 08:50:21.529962 master-0 kubenswrapper[27820]: I0320 08:50:21.529314 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bca4cc7c-839d-4877-b0aa-c07607fea404-serving-cert\") pod \"cluster-version-operator-7d58488df-bzstx\" (UID: \"bca4cc7c-839d-4877-b0aa-c07607fea404\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-bzstx" Mar 20 08:50:21.529962 master-0 kubenswrapper[27820]: I0320 08:50:21.529349 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqmzh\" (UniqueName: \"kubernetes.io/projected/fec3170d-3f3e-42f5-b20a-da53721c0dac-kube-api-access-tqmzh\") pod \"etcd-operator-8544cbcf9c-7x9vq\" (UID: \"fec3170d-3f3e-42f5-b20a-da53721c0dac\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-7x9vq" Mar 20 08:50:21.529962 master-0 kubenswrapper[27820]: I0320 08:50:21.529376 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/57189f7c-5987-457d-a299-0a6b9bcb3e24-trusted-ca\") pod \"cluster-image-registry-operator-5549dc66cb-cg8qr\" (UID: \"57189f7c-5987-457d-a299-0a6b9bcb3e24\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-cg8qr" Mar 20 08:50:21.529962 master-0 kubenswrapper[27820]: I0320 08:50:21.529411 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-q7gdm\" (UniqueName: \"kubernetes.io/projected/23003a2f-2053-47cc-8133-23eb886d4da0-kube-api-access-q7gdm\") pod \"marketplace-operator-89ccd998f-j84r8\" (UID: \"23003a2f-2053-47cc-8133-23eb886d4da0\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-j84r8" Mar 20 08:50:21.529962 master-0 kubenswrapper[27820]: I0320 08:50:21.529445 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-ovnkube-script-lib\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:50:21.529962 master-0 kubenswrapper[27820]: I0320 08:50:21.529468 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0f725c4a-234c-44e9-95f2-73f31d2b0fd3-mcd-auth-proxy-config\") pod \"machine-config-daemon-lxv4d\" (UID: \"0f725c4a-234c-44e9-95f2-73f31d2b0fd3\") " pod="openshift-machine-config-operator/machine-config-daemon-lxv4d" Mar 20 08:50:21.529962 master-0 kubenswrapper[27820]: I0320 08:50:21.529502 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b6610936-e14a-4532-955c-ea1ee4222259-images\") pod \"machine-config-operator-84d549f6d5-pkjcg\" (UID: \"b6610936-e14a-4532-955c-ea1ee4222259\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-pkjcg" Mar 20 08:50:21.529962 master-0 kubenswrapper[27820]: I0320 08:50:21.529524 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ca56e37d-80ea-432b-a6d9-f4e904a40e10-trusted-ca-bundle\") pod \"apiserver-64b65cddf5-gx7h7\" (UID: \"ca56e37d-80ea-432b-a6d9-f4e904a40e10\") " pod="openshift-apiserver/apiserver-64b65cddf5-gx7h7" Mar 20 
08:50:21.529962 master-0 kubenswrapper[27820]: I0320 08:50:21.529546 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vm9c\" (UniqueName: \"kubernetes.io/projected/b43744de-5bc3-4f1d-91a4-c54e2a3a7ffc-kube-api-access-4vm9c\") pod \"community-operators-chfj7\" (UID: \"b43744de-5bc3-4f1d-91a4-c54e2a3a7ffc\") " pod="openshift-marketplace/community-operators-chfj7" Mar 20 08:50:21.529962 master-0 kubenswrapper[27820]: I0320 08:50:21.529569 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7ab32efc-7cc5-4e36-9c1c-05efb19914e2-srv-cert\") pod \"olm-operator-5c9796789-t926t\" (UID: \"7ab32efc-7cc5-4e36-9c1c-05efb19914e2\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-t926t" Mar 20 08:50:21.529962 master-0 kubenswrapper[27820]: I0320 08:50:21.529589 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/6d62448d-55f1-4bdc-85aa-09e7bdf766cc-snapshots\") pod \"insights-operator-68bf6ff9d6-c7zf4\" (UID: \"6d62448d-55f1-4bdc-85aa-09e7bdf766cc\") " pod="openshift-insights/insights-operator-68bf6ff9d6-c7zf4" Mar 20 08:50:21.529962 master-0 kubenswrapper[27820]: I0320 08:50:21.529614 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-systemd-units\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:50:21.532705 master-0 kubenswrapper[27820]: I0320 08:50:21.530091 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b097596e-79e1-44d1-be8a-96340042a041-host-slash\") pod \"iptables-alerter-9xlf2\" (UID: 
\"b097596e-79e1-44d1-be8a-96340042a041\") " pod="openshift-network-operator/iptables-alerter-9xlf2" Mar 20 08:50:21.532705 master-0 kubenswrapper[27820]: I0320 08:50:21.530623 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/f202273a-b111-46ce-b404-7e481d2c7ff9-images\") pod \"cluster-baremetal-operator-6f69995874-b25f2\" (UID: \"f202273a-b111-46ce-b404-7e481d2c7ff9\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-b25f2" Mar 20 08:50:21.532705 master-0 kubenswrapper[27820]: I0320 08:50:21.530948 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/210dd7f0-d1c0-407a-b89b-f11ef605e5df-ovnkube-config\") pod \"ovnkube-control-plane-57f769d897-crrdk\" (UID: \"210dd7f0-d1c0-407a-b89b-f11ef605e5df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-crrdk" Mar 20 08:50:21.532705 master-0 kubenswrapper[27820]: I0320 08:50:21.531118 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 20 08:50:21.532705 master-0 kubenswrapper[27820]: I0320 08:50:21.531720 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/22f85e98-eb36-46b2-ab5d-7c21e060cba5-trusted-ca\") pod \"ingress-operator-66b84d69b-dknxr\" (UID: \"22f85e98-eb36-46b2-ab5d-7c21e060cba5\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-dknxr" Mar 20 08:50:21.532705 master-0 kubenswrapper[27820]: I0320 08:50:21.531986 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/7ab32efc-7cc5-4e36-9c1c-05efb19914e2-srv-cert\") pod \"olm-operator-5c9796789-t926t\" (UID: \"7ab32efc-7cc5-4e36-9c1c-05efb19914e2\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-t926t" Mar 20 08:50:21.532705 master-0 kubenswrapper[27820]: I0320 08:50:21.532254 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/6d62448d-55f1-4bdc-85aa-09e7bdf766cc-snapshots\") pod \"insights-operator-68bf6ff9d6-c7zf4\" (UID: \"6d62448d-55f1-4bdc-85aa-09e7bdf766cc\") " pod="openshift-insights/insights-operator-68bf6ff9d6-c7zf4" Mar 20 08:50:21.532705 master-0 kubenswrapper[27820]: I0320 08:50:21.532560 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/22f85e98-eb36-46b2-ab5d-7c21e060cba5-metrics-tls\") pod \"ingress-operator-66b84d69b-dknxr\" (UID: \"22f85e98-eb36-46b2-ab5d-7c21e060cba5\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-dknxr" Mar 20 08:50:21.532705 master-0 kubenswrapper[27820]: I0320 08:50:21.532578 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/57189f7c-5987-457d-a299-0a6b9bcb3e24-trusted-ca\") pod \"cluster-image-registry-operator-5549dc66cb-cg8qr\" (UID: \"57189f7c-5987-457d-a299-0a6b9bcb3e24\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-cg8qr" Mar 20 08:50:21.534325 master-0 kubenswrapper[27820]: I0320 08:50:21.532746 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2faf85a2-29bb-4275-a12b-0ef1663a4f0d-serving-cert\") pod \"kube-apiserver-operator-8b68b9d9b-xwkzx\" (UID: \"2faf85a2-29bb-4275-a12b-0ef1663a4f0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-xwkzx" Mar 20 08:50:21.534325 master-0 kubenswrapper[27820]: I0320 08:50:21.532840 
27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/3065e4b4-4493-41ce-b9d2-89315475f74f-available-featuregates\") pod \"openshift-config-operator-95bf4f4d-25jrp\" (UID: \"3065e4b4-4493-41ce-b9d2-89315475f74f\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-25jrp" Mar 20 08:50:21.534325 master-0 kubenswrapper[27820]: I0320 08:50:21.533048 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtt44\" (UniqueName: \"kubernetes.io/projected/123f1ecb-cc03-462b-b76f-7251bf69d3d6-kube-api-access-dtt44\") pod \"node-exporter-rzg98\" (UID: \"123f1ecb-cc03-462b-b76f-7251bf69d3d6\") " pod="openshift-monitoring/node-exporter-rzg98" Mar 20 08:50:21.534325 master-0 kubenswrapper[27820]: I0320 08:50:21.533295 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-host-kubelet\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:50:21.534325 master-0 kubenswrapper[27820]: I0320 08:50:21.533364 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777-sys\") pod \"tuned-zgm52\" (UID: \"97ad1db7-0bf9-4faf-9fa5-0f3df7dab777\") " pod="openshift-cluster-node-tuning-operator/tuned-zgm52" Mar 20 08:50:21.534325 master-0 kubenswrapper[27820]: I0320 08:50:21.533878 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/1ae4d0d7-67e6-4e0c-9265-8e48ac2d4cbf-hosts-file\") pod \"node-resolver-j7ngf\" (UID: \"1ae4d0d7-67e6-4e0c-9265-8e48ac2d4cbf\") " pod="openshift-dns/node-resolver-j7ngf" Mar 20 
08:50:21.534325 master-0 kubenswrapper[27820]: I0320 08:50:21.533971 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61ab4d32-c732-4be5-aa85-a2e1dd21cb60-config\") pod \"openshift-controller-manager-operator-8c94f4649-w8c24\" (UID: \"61ab4d32-c732-4be5-aa85-a2e1dd21cb60\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-w8c24" Mar 20 08:50:21.534325 master-0 kubenswrapper[27820]: I0320 08:50:21.534050 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ff2dfe9d-2834-43cb-b093-0831b2b87131-metrics-tls\") pod \"dns-operator-9c5679d8f-xfns6\" (UID: \"ff2dfe9d-2834-43cb-b093-0831b2b87131\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-xfns6" Mar 20 08:50:21.534325 master-0 kubenswrapper[27820]: I0320 08:50:21.534160 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072-serving-cert\") pod \"authentication-operator-5885bfd7f4-tdpfq\" (UID: \"8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-tdpfq" Mar 20 08:50:21.534325 master-0 kubenswrapper[27820]: I0320 08:50:21.534244 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20ff930f-ec0d-40ed-a879-1546691f685d-serving-cert\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-kxxrs\" (UID: \"20ff930f-ec0d-40ed-a879-1546691f685d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-kxxrs" Mar 20 08:50:21.535761 master-0 kubenswrapper[27820]: I0320 08:50:21.534403 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/2ef691ec-d1f0-4c97-97e4-4aa7a6c0a86c-serving-cert\") pod \"openshift-apiserver-operator-d65958b8-ntdqc\" (UID: \"2ef691ec-d1f0-4c97-97e4-4aa7a6c0a86c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-ntdqc" Mar 20 08:50:21.535761 master-0 kubenswrapper[27820]: I0320 08:50:21.534461 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-host-run-ovn-kubernetes\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:50:21.535761 master-0 kubenswrapper[27820]: I0320 08:50:21.534483 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072-serving-cert\") pod \"authentication-operator-5885bfd7f4-tdpfq\" (UID: \"8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-tdpfq" Mar 20 08:50:21.535761 master-0 kubenswrapper[27820]: I0320 08:50:21.534508 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-static-pod-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 20 08:50:21.535761 master-0 kubenswrapper[27820]: I0320 08:50:21.534857 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61ab4d32-c732-4be5-aa85-a2e1dd21cb60-config\") pod \"openshift-controller-manager-operator-8c94f4649-w8c24\" (UID: \"61ab4d32-c732-4be5-aa85-a2e1dd21cb60\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-w8c24" Mar 20 08:50:21.535761 
master-0 kubenswrapper[27820]: I0320 08:50:21.535427 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ef691ec-d1f0-4c97-97e4-4aa7a6c0a86c-serving-cert\") pod \"openshift-apiserver-operator-d65958b8-ntdqc\" (UID: \"2ef691ec-d1f0-4c97-97e4-4aa7a6c0a86c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-ntdqc" Mar 20 08:50:21.535761 master-0 kubenswrapper[27820]: I0320 08:50:21.535584 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/123f1ecb-cc03-462b-b76f-7251bf69d3d6-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-rzg98\" (UID: \"123f1ecb-cc03-462b-b76f-7251bf69d3d6\") " pod="openshift-monitoring/node-exporter-rzg98" Mar 20 08:50:21.535761 master-0 kubenswrapper[27820]: I0320 08:50:21.535672 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ff2dfe9d-2834-43cb-b093-0831b2b87131-metrics-tls\") pod \"dns-operator-9c5679d8f-xfns6\" (UID: \"ff2dfe9d-2834-43cb-b093-0831b2b87131\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-xfns6" Mar 20 08:50:21.536019 master-0 kubenswrapper[27820]: I0320 08:50:21.535897 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20ff930f-ec0d-40ed-a879-1546691f685d-serving-cert\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-kxxrs\" (UID: \"20ff930f-ec0d-40ed-a879-1546691f685d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-kxxrs" Mar 20 08:50:21.536019 master-0 kubenswrapper[27820]: I0320 08:50:21.535904 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65157a9b-3df7-4cc1-a85a-a5dfa59921ad-config\") pod 
\"openshift-kube-scheduler-operator-dddff6458-vmwqt\" (UID: \"65157a9b-3df7-4cc1-a85a-a5dfa59921ad\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-vmwqt" Mar 20 08:50:21.536438 master-0 kubenswrapper[27820]: I0320 08:50:21.536413 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/0cb6d987-4b59-4fd9-889a-3250c12a726c-tmpfs\") pod \"packageserver-6f5545c99f-6sl9d\" (UID: \"0cb6d987-4b59-4fd9-889a-3250c12a726c\") " pod="openshift-operator-lifecycle-manager/packageserver-6f5545c99f-6sl9d" Mar 20 08:50:21.536503 master-0 kubenswrapper[27820]: I0320 08:50:21.536454 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09a5682c-4f13-4b8c-8179-3e6dfa8f98db-config\") pod \"service-ca-operator-b865698dc-zxwrd\" (UID: \"09a5682c-4f13-4b8c-8179-3e6dfa8f98db\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-zxwrd" Mar 20 08:50:21.540620 master-0 kubenswrapper[27820]: I0320 08:50:21.538346 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d62448d-55f1-4bdc-85aa-09e7bdf766cc-trusted-ca-bundle\") pod \"insights-operator-68bf6ff9d6-c7zf4\" (UID: \"6d62448d-55f1-4bdc-85aa-09e7bdf766cc\") " pod="openshift-insights/insights-operator-68bf6ff9d6-c7zf4" Mar 20 08:50:21.540620 master-0 kubenswrapper[27820]: I0320 08:50:21.538655 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09a5682c-4f13-4b8c-8179-3e6dfa8f98db-config\") pod \"service-ca-operator-b865698dc-zxwrd\" (UID: \"09a5682c-4f13-4b8c-8179-3e6dfa8f98db\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-zxwrd" Mar 20 08:50:21.540620 master-0 kubenswrapper[27820]: I0320 08:50:21.538849 27820 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65157a9b-3df7-4cc1-a85a-a5dfa59921ad-config\") pod \"openshift-kube-scheduler-operator-dddff6458-vmwqt\" (UID: \"65157a9b-3df7-4cc1-a85a-a5dfa59921ad\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-vmwqt" Mar 20 08:50:21.540620 master-0 kubenswrapper[27820]: I0320 08:50:21.538931 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/0cb6d987-4b59-4fd9-889a-3250c12a726c-tmpfs\") pod \"packageserver-6f5545c99f-6sl9d\" (UID: \"0cb6d987-4b59-4fd9-889a-3250c12a726c\") " pod="openshift-operator-lifecycle-manager/packageserver-6f5545c99f-6sl9d" Mar 20 08:50:21.540620 master-0 kubenswrapper[27820]: I0320 08:50:21.538400 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777-etc-sysconfig\") pod \"tuned-zgm52\" (UID: \"97ad1db7-0bf9-4faf-9fa5-0f3df7dab777\") " pod="openshift-cluster-node-tuning-operator/tuned-zgm52" Mar 20 08:50:21.540620 master-0 kubenswrapper[27820]: I0320 08:50:21.538998 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/08d9196b-b68f-421b-8754-bfbaa4020a97-etc-docker\") pod \"catalogd-controller-manager-6864dc98f7-tf2gj\" (UID: \"08d9196b-b68f-421b-8754-bfbaa4020a97\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-tf2gj" Mar 20 08:50:21.540620 master-0 kubenswrapper[27820]: I0320 08:50:21.539031 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/57189f7c-5987-457d-a299-0a6b9bcb3e24-bound-sa-token\") pod \"cluster-image-registry-operator-5549dc66cb-cg8qr\" (UID: \"57189f7c-5987-457d-a299-0a6b9bcb3e24\") " 
pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-cg8qr" Mar 20 08:50:21.540620 master-0 kubenswrapper[27820]: I0320 08:50:21.539550 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-cert-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 20 08:50:21.540620 master-0 kubenswrapper[27820]: I0320 08:50:21.539595 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b67hn\" (UniqueName: \"kubernetes.io/projected/00350ac7-b40a-4459-b94c-a37d7b613645-kube-api-access-b67hn\") pod \"network-metrics-daemon-nfrth\" (UID: \"00350ac7-b40a-4459-b94c-a37d7b613645\") " pod="openshift-multus/network-metrics-daemon-nfrth" Mar 20 08:50:21.540620 master-0 kubenswrapper[27820]: I0320 08:50:21.539628 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/0ad95adc-2e0f-4e95-94e7-66e6d240a930-metrics-client-ca\") pod \"prometheus-operator-6c8df6d4b-2lwqr\" (UID: \"0ad95adc-2e0f-4e95-94e7-66e6d240a930\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-2lwqr" Mar 20 08:50:21.540620 master-0 kubenswrapper[27820]: I0320 08:50:21.539667 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/fec3170d-3f3e-42f5-b20a-da53721c0dac-etcd-service-ca\") pod \"etcd-operator-8544cbcf9c-7x9vq\" (UID: \"fec3170d-3f3e-42f5-b20a-da53721c0dac\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-7x9vq" Mar 20 08:50:21.540620 master-0 kubenswrapper[27820]: I0320 08:50:21.540163 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71ca96e8-5108-455c-bb3c-17977d38e912-config\") pod 
\"kube-controller-manager-operator-ff989d6cc-wbfrm\" (UID: \"71ca96e8-5108-455c-bb3c-17977d38e912\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-wbfrm" Mar 20 08:50:21.540620 master-0 kubenswrapper[27820]: I0320 08:50:21.540344 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/fec3170d-3f3e-42f5-b20a-da53721c0dac-etcd-service-ca\") pod \"etcd-operator-8544cbcf9c-7x9vq\" (UID: \"fec3170d-3f3e-42f5-b20a-da53721c0dac\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-7x9vq" Mar 20 08:50:21.542986 master-0 kubenswrapper[27820]: I0320 08:50:21.540779 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20ff930f-ec0d-40ed-a879-1546691f685d-config\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-kxxrs\" (UID: \"20ff930f-ec0d-40ed-a879-1546691f685d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-kxxrs" Mar 20 08:50:21.542986 master-0 kubenswrapper[27820]: I0320 08:50:21.540618 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71ca96e8-5108-455c-bb3c-17977d38e912-config\") pod \"kube-controller-manager-operator-ff989d6cc-wbfrm\" (UID: \"71ca96e8-5108-455c-bb3c-17977d38e912\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-wbfrm" Mar 20 08:50:21.546942 master-0 kubenswrapper[27820]: I0320 08:50:21.541460 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0cb6d987-4b59-4fd9-889a-3250c12a726c-webhook-cert\") pod \"packageserver-6f5545c99f-6sl9d\" (UID: \"0cb6d987-4b59-4fd9-889a-3250c12a726c\") " pod="openshift-operator-lifecycle-manager/packageserver-6f5545c99f-6sl9d" Mar 20 08:50:21.546942 
master-0 kubenswrapper[27820]: I0320 08:50:21.541574 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20ff930f-ec0d-40ed-a879-1546691f685d-config\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-kxxrs\" (UID: \"20ff930f-ec0d-40ed-a879-1546691f685d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-kxxrs" Mar 20 08:50:21.546942 master-0 kubenswrapper[27820]: I0320 08:50:21.546898 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v29ws\" (UniqueName: \"kubernetes.io/projected/0cb6d987-4b59-4fd9-889a-3250c12a726c-kube-api-access-v29ws\") pod \"packageserver-6f5545c99f-6sl9d\" (UID: \"0cb6d987-4b59-4fd9-889a-3250c12a726c\") " pod="openshift-operator-lifecycle-manager/packageserver-6f5545c99f-6sl9d" Mar 20 08:50:21.546942 master-0 kubenswrapper[27820]: I0320 08:50:21.546942 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca56e37d-80ea-432b-a6d9-f4e904a40e10-serving-cert\") pod \"apiserver-64b65cddf5-gx7h7\" (UID: \"ca56e37d-80ea-432b-a6d9-f4e904a40e10\") " pod="openshift-apiserver/apiserver-64b65cddf5-gx7h7" Mar 20 08:50:21.547172 master-0 kubenswrapper[27820]: I0320 08:50:21.546974 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55l9j\" (UniqueName: \"kubernetes.io/projected/7ab32efc-7cc5-4e36-9c1c-05efb19914e2-kube-api-access-55l9j\") pod \"olm-operator-5c9796789-t926t\" (UID: \"7ab32efc-7cc5-4e36-9c1c-05efb19914e2\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-t926t" Mar 20 08:50:21.547172 master-0 kubenswrapper[27820]: I0320 08:50:21.547003 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/41253bde-5d09-4ff0-8e7c-4a21fe2b7106-metrics-tls\") pod \"dns-default-gskz6\" (UID: \"41253bde-5d09-4ff0-8e7c-4a21fe2b7106\") " pod="openshift-dns/dns-default-gskz6" Mar 20 08:50:21.547172 master-0 kubenswrapper[27820]: I0320 08:50:21.547030 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d62448d-55f1-4bdc-85aa-09e7bdf766cc-serving-cert\") pod \"insights-operator-68bf6ff9d6-c7zf4\" (UID: \"6d62448d-55f1-4bdc-85aa-09e7bdf766cc\") " pod="openshift-insights/insights-operator-68bf6ff9d6-c7zf4" Mar 20 08:50:21.547172 master-0 kubenswrapper[27820]: I0320 08:50:21.547058 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777-var-lib-kubelet\") pod \"tuned-zgm52\" (UID: \"97ad1db7-0bf9-4faf-9fa5-0f3df7dab777\") " pod="openshift-cluster-node-tuning-operator/tuned-zgm52" Mar 20 08:50:21.547172 master-0 kubenswrapper[27820]: I0320 08:50:21.547086 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3065e4b4-4493-41ce-b9d2-89315475f74f-serving-cert\") pod \"openshift-config-operator-95bf4f4d-25jrp\" (UID: \"3065e4b4-4493-41ce-b9d2-89315475f74f\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-25jrp" Mar 20 08:50:21.547172 master-0 kubenswrapper[27820]: I0320 08:50:21.547094 27820 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Mar 20 08:50:21.548176 master-0 kubenswrapper[27820]: I0320 08:50:21.547853 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3065e4b4-4493-41ce-b9d2-89315475f74f-serving-cert\") pod \"openshift-config-operator-95bf4f4d-25jrp\" (UID: \"3065e4b4-4493-41ce-b9d2-89315475f74f\") " 
pod="openshift-config-operator/openshift-config-operator-95bf4f4d-25jrp" Mar 20 08:50:21.548176 master-0 kubenswrapper[27820]: I0320 08:50:21.547112 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbtnq\" (UniqueName: \"kubernetes.io/projected/41253bde-5d09-4ff0-8e7c-4a21fe2b7106-kube-api-access-dbtnq\") pod \"dns-default-gskz6\" (UID: \"41253bde-5d09-4ff0-8e7c-4a21fe2b7106\") " pod="openshift-dns/dns-default-gskz6" Mar 20 08:50:21.548176 master-0 kubenswrapper[27820]: I0320 08:50:21.547933 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-host-run-netns\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:50:21.548176 master-0 kubenswrapper[27820]: I0320 08:50:21.547954 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-hostroot\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj" Mar 20 08:50:21.548176 master-0 kubenswrapper[27820]: I0320 08:50:21.547988 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxqp4\" (UniqueName: \"kubernetes.io/projected/ca56e37d-80ea-432b-a6d9-f4e904a40e10-kube-api-access-jxqp4\") pod \"apiserver-64b65cddf5-gx7h7\" (UID: \"ca56e37d-80ea-432b-a6d9-f4e904a40e10\") " pod="openshift-apiserver/apiserver-64b65cddf5-gx7h7" Mar 20 08:50:21.548176 master-0 kubenswrapper[27820]: I0320 08:50:21.548011 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/6163bd4b-dc83-4e83-8590-5ac4753bda1c-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7dff898856-vk98n\" (UID: \"6163bd4b-dc83-4e83-8590-5ac4753bda1c\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vk98n" Mar 20 08:50:21.548176 master-0 kubenswrapper[27820]: I0320 08:50:21.548034 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbzl9\" (UniqueName: \"kubernetes.io/projected/6163bd4b-dc83-4e83-8590-5ac4753bda1c-kube-api-access-zbzl9\") pod \"cluster-cloud-controller-manager-operator-7dff898856-vk98n\" (UID: \"6163bd4b-dc83-4e83-8590-5ac4753bda1c\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vk98n" Mar 20 08:50:21.548439 master-0 kubenswrapper[27820]: I0320 08:50:21.548219 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80ddf0a4-e853-4de0-b540-81144dfdd31d-config\") pod \"machine-api-operator-6fbb6cf6f9-lr7tb\" (UID: \"80ddf0a4-e853-4de0-b540-81144dfdd31d\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-lr7tb" Mar 20 08:50:21.548439 master-0 kubenswrapper[27820]: I0320 08:50:21.548280 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/bb7b640f-22be-41a9-8ab2-e7ae817e2eb0-etc-containers\") pod \"operator-controller-controller-manager-57777556ff-rnnfz\" (UID: \"bb7b640f-22be-41a9-8ab2-e7ae817e2eb0\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-rnnfz" Mar 20 08:50:21.548439 master-0 kubenswrapper[27820]: I0320 08:50:21.548309 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: 
\"kubernetes.io/configmap/b097596e-79e1-44d1-be8a-96340042a041-iptables-alerter-script\") pod \"iptables-alerter-9xlf2\" (UID: \"b097596e-79e1-44d1-be8a-96340042a041\") " pod="openshift-network-operator/iptables-alerter-9xlf2" Mar 20 08:50:21.548439 master-0 kubenswrapper[27820]: I0320 08:50:21.548331 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/00350ac7-b40a-4459-b94c-a37d7b613645-metrics-certs\") pod \"network-metrics-daemon-nfrth\" (UID: \"00350ac7-b40a-4459-b94c-a37d7b613645\") " pod="openshift-multus/network-metrics-daemon-nfrth" Mar 20 08:50:21.548439 master-0 kubenswrapper[27820]: I0320 08:50:21.548356 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pw6sv\" (UniqueName: \"kubernetes.io/projected/e89571b2-098c-495b-9b53-c4ebd95296ab-kube-api-access-pw6sv\") pod \"router-default-7dcf5569b5-kvmtp\" (UID: \"e89571b2-098c-495b-9b53-c4ebd95296ab\") " pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" Mar 20 08:50:21.548439 master-0 kubenswrapper[27820]: I0320 08:50:21.548376 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2faf85a2-29bb-4275-a12b-0ef1663a4f0d-kube-api-access\") pod \"kube-apiserver-operator-8b68b9d9b-xwkzx\" (UID: \"2faf85a2-29bb-4275-a12b-0ef1663a4f0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-xwkzx" Mar 20 08:50:21.548727 master-0 kubenswrapper[27820]: I0320 08:50:21.548687 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/00350ac7-b40a-4459-b94c-a37d7b613645-metrics-certs\") pod \"network-metrics-daemon-nfrth\" (UID: \"00350ac7-b40a-4459-b94c-a37d7b613645\") " pod="openshift-multus/network-metrics-daemon-nfrth" Mar 20 08:50:21.549938 master-0 kubenswrapper[27820]: I0320 08:50:21.549902 27820 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 20 08:50:21.550248 master-0 kubenswrapper[27820]: I0320 08:50:21.550169 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9d653bfa-7168-49fa-a838-aedb33c7e60f-env-overrides\") pod \"network-node-identity-dq29v\" (UID: \"9d653bfa-7168-49fa-a838-aedb33c7e60f\") " pod="openshift-network-node-identity/network-node-identity-dq29v" Mar 20 08:50:21.565970 master-0 kubenswrapper[27820]: I0320 08:50:21.565586 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 20 08:50:21.568431 master-0 kubenswrapper[27820]: I0320 08:50:21.568396 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9d653bfa-7168-49fa-a838-aedb33c7e60f-webhook-cert\") pod \"network-node-identity-dq29v\" (UID: \"9d653bfa-7168-49fa-a838-aedb33c7e60f\") " pod="openshift-network-node-identity/network-node-identity-dq29v" Mar 20 08:50:21.570507 master-0 kubenswrapper[27820]: I0320 08:50:21.570389 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-xj8x6_45b3c788-eb83-448a-bc60-90b8ace28382/kube-multus-additional-cni-plugins/0.log" Mar 20 08:50:21.570586 master-0 kubenswrapper[27820]: I0320 08:50:21.570488 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-xj8x6" event={"ID":"45b3c788-eb83-448a-bc60-90b8ace28382","Type":"ContainerDied","Data":"86c6cd594ea2c7db973b52489f7bf76530d2045045df7dd60fb29d21f2a61ca6"} Mar 20 08:50:21.570586 master-0 kubenswrapper[27820]: I0320 08:50:21.570565 27820 scope.go:117] "RemoveContainer" containerID="4bc3fd775c8ec67b9c69f57ed6fb56798f5748c1c5c893e65ef38d518581c15e" Mar 20 08:50:21.570777 master-0 kubenswrapper[27820]: I0320 
08:50:21.570757 27820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-xj8x6" Mar 20 08:50:21.572952 master-0 kubenswrapper[27820]: I0320 08:50:21.572923 27820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Mar 20 08:50:21.573027 master-0 kubenswrapper[27820]: I0320 08:50:21.572990 27820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Mar 20 08:50:21.586833 master-0 kubenswrapper[27820]: I0320 08:50:21.586790 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 20 08:50:21.592007 master-0 kubenswrapper[27820]: I0320 08:50:21.587583 27820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Mar 20 08:50:21.599390 master-0 kubenswrapper[27820]: I0320 08:50:21.597057 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/b097596e-79e1-44d1-be8a-96340042a041-iptables-alerter-script\") pod \"iptables-alerter-9xlf2\" (UID: \"b097596e-79e1-44d1-be8a-96340042a041\") " pod="openshift-network-operator/iptables-alerter-9xlf2" Mar 20 08:50:21.607597 master-0 kubenswrapper[27820]: I0320 08:50:21.607175 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 20 08:50:21.608593 master-0 kubenswrapper[27820]: I0320 08:50:21.608467 27820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Mar 20 08:50:21.614136 master-0 kubenswrapper[27820]: I0320 08:50:21.614065 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-ovn-node-metrics-cert\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:50:21.628830 master-0 kubenswrapper[27820]: I0320 08:50:21.628752 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 20 08:50:21.632822 master-0 kubenswrapper[27820]: I0320 08:50:21.632701 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-ovnkube-script-lib\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:50:21.646189 master-0 kubenswrapper[27820]: I0320 08:50:21.646109 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Mar 20 08:50:21.648764 master-0 kubenswrapper[27820]: I0320 08:50:21.648702 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7pcbj\" (UniqueName: \"kubernetes.io/projected/45b3c788-eb83-448a-bc60-90b8ace28382-kube-api-access-7pcbj\") pod \"45b3c788-eb83-448a-bc60-90b8ace28382\" (UID: \"45b3c788-eb83-448a-bc60-90b8ace28382\") " Mar 20 08:50:21.648875 master-0 kubenswrapper[27820]: I0320 08:50:21.648847 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/45b3c788-eb83-448a-bc60-90b8ace28382-cni-sysctl-allowlist\") pod \"45b3c788-eb83-448a-bc60-90b8ace28382\" 
(UID: \"45b3c788-eb83-448a-bc60-90b8ace28382\") " Mar 20 08:50:21.648909 master-0 kubenswrapper[27820]: I0320 08:50:21.648884 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/45b3c788-eb83-448a-bc60-90b8ace28382-ready\") pod \"45b3c788-eb83-448a-bc60-90b8ace28382\" (UID: \"45b3c788-eb83-448a-bc60-90b8ace28382\") " Mar 20 08:50:21.649667 master-0 kubenswrapper[27820]: I0320 08:50:21.649635 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-run-ovn\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:50:21.649746 master-0 kubenswrapper[27820]: I0320 08:50:21.649672 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/bca4cc7c-839d-4877-b0aa-c07607fea404-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7d58488df-bzstx\" (UID: \"bca4cc7c-839d-4877-b0aa-c07607fea404\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-bzstx" Mar 20 08:50:21.649746 master-0 kubenswrapper[27820]: I0320 08:50:21.649703 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 20 08:50:21.649746 master-0 kubenswrapper[27820]: I0320 08:50:21.649689 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45b3c788-eb83-448a-bc60-90b8ace28382-ready" (OuterVolumeSpecName: "ready") pod "45b3c788-eb83-448a-bc60-90b8ace28382" (UID: "45b3c788-eb83-448a-bc60-90b8ace28382"). 
InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:50:21.649849 master-0 kubenswrapper[27820]: I0320 08:50:21.649735 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-cnibin\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj" Mar 20 08:50:21.649849 master-0 kubenswrapper[27820]: I0320 08:50:21.649793 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-run-ovn\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:50:21.649849 master-0 kubenswrapper[27820]: I0320 08:50:21.649778 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-cnibin\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj" Mar 20 08:50:21.649849 master-0 kubenswrapper[27820]: I0320 08:50:21.649840 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/bca4cc7c-839d-4877-b0aa-c07607fea404-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7d58488df-bzstx\" (UID: \"bca4cc7c-839d-4877-b0aa-c07607fea404\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-bzstx" Mar 20 08:50:21.650071 master-0 kubenswrapper[27820]: I0320 08:50:21.649910 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-log-socket\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:50:21.650071 master-0 kubenswrapper[27820]: I0320 08:50:21.649927 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 20 08:50:21.650071 master-0 kubenswrapper[27820]: I0320 08:50:21.649992 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/d5b579a3-74b5-4cd2-ae99-5c1d1480c2f0-openshift-state-metrics-tls\") pod \"openshift-state-metrics-5dc6c74576-qclrg\" (UID: \"d5b579a3-74b5-4cd2-ae99-5c1d1480c2f0\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-qclrg" Mar 20 08:50:21.650177 master-0 kubenswrapper[27820]: I0320 08:50:21.650084 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-log-socket\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:50:21.650177 master-0 kubenswrapper[27820]: I0320 08:50:21.650160 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-multus-socket-dir-parent\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj" Mar 20 08:50:21.650306 master-0 kubenswrapper[27820]: I0320 08:50:21.650237 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8413125cf444e5c95f023c5dd9c6151e-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: 
\"8413125cf444e5c95f023c5dd9c6151e\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 20 08:50:21.650360 master-0 kubenswrapper[27820]: I0320 08:50:21.650313 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-run-openvswitch\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:50:21.650360 master-0 kubenswrapper[27820]: I0320 08:50:21.650342 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777-etc-sysctl-conf\") pod \"tuned-zgm52\" (UID: \"97ad1db7-0bf9-4faf-9fa5-0f3df7dab777\") " pod="openshift-cluster-node-tuning-operator/tuned-zgm52" Mar 20 08:50:21.650933 master-0 kubenswrapper[27820]: I0320 08:50:21.650370 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8413125cf444e5c95f023c5dd9c6151e-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8413125cf444e5c95f023c5dd9c6151e\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 20 08:50:21.650933 master-0 kubenswrapper[27820]: I0320 08:50:21.650399 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/08d9196b-b68f-421b-8754-bfbaa4020a97-etc-containers\") pod \"catalogd-controller-manager-6864dc98f7-tf2gj\" (UID: \"08d9196b-b68f-421b-8754-bfbaa4020a97\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-tf2gj" Mar 20 08:50:21.650933 master-0 kubenswrapper[27820]: I0320 08:50:21.650493 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: 
\"kubernetes.io/host-path/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777-etc-sysctl-conf\") pod \"tuned-zgm52\" (UID: \"97ad1db7-0bf9-4faf-9fa5-0f3df7dab777\") " pod="openshift-cluster-node-tuning-operator/tuned-zgm52" Mar 20 08:50:21.650933 master-0 kubenswrapper[27820]: I0320 08:50:21.650348 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-multus-socket-dir-parent\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj" Mar 20 08:50:21.650933 master-0 kubenswrapper[27820]: I0320 08:50:21.650490 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-host-var-lib-kubelet\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj" Mar 20 08:50:21.650933 master-0 kubenswrapper[27820]: I0320 08:50:21.650608 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-run-openvswitch\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:50:21.650933 master-0 kubenswrapper[27820]: I0320 08:50:21.650630 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/08d9196b-b68f-421b-8754-bfbaa4020a97-etc-containers\") pod \"catalogd-controller-manager-6864dc98f7-tf2gj\" (UID: \"08d9196b-b68f-421b-8754-bfbaa4020a97\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-tf2gj" Mar 20 08:50:21.650933 master-0 kubenswrapper[27820]: I0320 08:50:21.650682 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: 
\"kubernetes.io/host-path/123f1ecb-cc03-462b-b76f-7251bf69d3d6-root\") pod \"node-exporter-rzg98\" (UID: \"123f1ecb-cc03-462b-b76f-7251bf69d3d6\") " pod="openshift-monitoring/node-exporter-rzg98" Mar 20 08:50:21.650933 master-0 kubenswrapper[27820]: I0320 08:50:21.650715 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-host-var-lib-kubelet\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj" Mar 20 08:50:21.650933 master-0 kubenswrapper[27820]: I0320 08:50:21.650722 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/123f1ecb-cc03-462b-b76f-7251bf69d3d6-root\") pod \"node-exporter-rzg98\" (UID: \"123f1ecb-cc03-462b-b76f-7251bf69d3d6\") " pod="openshift-monitoring/node-exporter-rzg98" Mar 20 08:50:21.650933 master-0 kubenswrapper[27820]: I0320 08:50:21.650797 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-node-log\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:50:21.650933 master-0 kubenswrapper[27820]: I0320 08:50:21.650868 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777-etc-modprobe-d\") pod \"tuned-zgm52\" (UID: \"97ad1db7-0bf9-4faf-9fa5-0f3df7dab777\") " pod="openshift-cluster-node-tuning-operator/tuned-zgm52" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.650972 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: 
\"kubernetes.io/configmap/d5b579a3-74b5-4cd2-ae99-5c1d1480c2f0-metrics-client-ca\") pod \"openshift-state-metrics-5dc6c74576-qclrg\" (UID: \"d5b579a3-74b5-4cd2-ae99-5c1d1480c2f0\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-qclrg" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.651049 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777-lib-modules\") pod \"tuned-zgm52\" (UID: \"97ad1db7-0bf9-4faf-9fa5-0f3df7dab777\") " pod="openshift-cluster-node-tuning-operator/tuned-zgm52" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.651117 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-systemd-units\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.651256 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/240ba61a-e439-4f94-b9b3-7903b9b1bc05-client-ca\") pod \"route-controller-manager-7d86cb9b59-smbxv\" (UID: \"240ba61a-e439-4f94-b9b3-7903b9b1bc05\") " pod="openshift-route-controller-manager/route-controller-manager-7d86cb9b59-smbxv" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.651301 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/1ae4d0d7-67e6-4e0c-9265-8e48ac2d4cbf-hosts-file\") pod \"node-resolver-j7ngf\" (UID: \"1ae4d0d7-67e6-4e0c-9265-8e48ac2d4cbf\") " pod="openshift-dns/node-resolver-j7ngf" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.651350 27820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b097596e-79e1-44d1-be8a-96340042a041-host-slash\") pod \"iptables-alerter-9xlf2\" (UID: \"b097596e-79e1-44d1-be8a-96340042a041\") " pod="openshift-network-operator/iptables-alerter-9xlf2" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.651374 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.651408 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-host-kubelet\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.651429 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777-sys\") pod \"tuned-zgm52\" (UID: \"97ad1db7-0bf9-4faf-9fa5-0f3df7dab777\") " pod="openshift-cluster-node-tuning-operator/tuned-zgm52" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.651456 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zf6h\" (UniqueName: \"kubernetes.io/projected/a88b1c81-02b5-4c85-9660-5f84c900a946-kube-api-access-5zf6h\") pod \"multus-admission-controller-58c9f8fc64-kr9hd\" (UID: \"a88b1c81-02b5-4c85-9660-5f84c900a946\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-kr9hd" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.651510 27820 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-host-run-ovn-kubernetes\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.651536 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-static-pod-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.651578 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777-etc-sysconfig\") pod \"tuned-zgm52\" (UID: \"97ad1db7-0bf9-4faf-9fa5-0f3df7dab777\") " pod="openshift-cluster-node-tuning-operator/tuned-zgm52" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.651667 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/08d9196b-b68f-421b-8754-bfbaa4020a97-etc-docker\") pod \"catalogd-controller-manager-6864dc98f7-tf2gj\" (UID: \"08d9196b-b68f-421b-8754-bfbaa4020a97\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-tf2gj" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.651684 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777-sys\") pod \"tuned-zgm52\" (UID: \"97ad1db7-0bf9-4faf-9fa5-0f3df7dab777\") " pod="openshift-cluster-node-tuning-operator/tuned-zgm52" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.651721 27820 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/b097596e-79e1-44d1-be8a-96340042a041-host-slash\") pod \"iptables-alerter-9xlf2\" (UID: \"b097596e-79e1-44d1-be8a-96340042a041\") " pod="openshift-network-operator/iptables-alerter-9xlf2" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.651755 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/75cef5aa-93e6-4b8b-9ab1-06809e85883a-kubelet-dir\") pod \"installer-3-retry-1-master-0\" (UID: \"75cef5aa-93e6-4b8b-9ab1-06809e85883a\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.651799 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/41ac891d-b41d-43c4-be46-35f39671477a-proxy-ca-bundles\") pod \"controller-manager-bc85986b9-8p79x\" (UID: \"41ac891d-b41d-43c4-be46-35f39671477a\") " pod="openshift-controller-manager/controller-manager-bc85986b9-8p79x" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.651820 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-cert-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.651855 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-node-log\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.651849 
27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45b3c788-eb83-448a-bc60-90b8ace28382-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "45b3c788-eb83-448a-bc60-90b8ace28382" (UID: "45b3c788-eb83-448a-bc60-90b8ace28382"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.651915 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777-etc-sysconfig\") pod \"tuned-zgm52\" (UID: \"97ad1db7-0bf9-4faf-9fa5-0f3df7dab777\") " pod="openshift-cluster-node-tuning-operator/tuned-zgm52" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.651937 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777-etc-modprobe-d\") pod \"tuned-zgm52\" (UID: \"97ad1db7-0bf9-4faf-9fa5-0f3df7dab777\") " pod="openshift-cluster-node-tuning-operator/tuned-zgm52" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.651953 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/08d9196b-b68f-421b-8754-bfbaa4020a97-etc-docker\") pod \"catalogd-controller-manager-6864dc98f7-tf2gj\" (UID: \"08d9196b-b68f-421b-8754-bfbaa4020a97\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-tf2gj" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.651952 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-static-pod-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 20 08:50:21.657771 master-0 
kubenswrapper[27820]: I0320 08:50:21.651871 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-host-run-ovn-kubernetes\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.652004 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.652008 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-cert-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.652087 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777-lib-modules\") pod \"tuned-zgm52\" (UID: \"97ad1db7-0bf9-4faf-9fa5-0f3df7dab777\") " pod="openshift-cluster-node-tuning-operator/tuned-zgm52" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.652090 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/1ae4d0d7-67e6-4e0c-9265-8e48ac2d4cbf-hosts-file\") pod \"node-resolver-j7ngf\" (UID: \"1ae4d0d7-67e6-4e0c-9265-8e48ac2d4cbf\") " pod="openshift-dns/node-resolver-j7ngf" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.652146 27820 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-host-kubelet\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.652218 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777-var-lib-kubelet\") pod \"tuned-zgm52\" (UID: \"97ad1db7-0bf9-4faf-9fa5-0f3df7dab777\") " pod="openshift-cluster-node-tuning-operator/tuned-zgm52" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.652282 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777-var-lib-kubelet\") pod \"tuned-zgm52\" (UID: \"97ad1db7-0bf9-4faf-9fa5-0f3df7dab777\") " pod="openshift-cluster-node-tuning-operator/tuned-zgm52" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.652218 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-systemd-units\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.652312 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-host-run-netns\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.652329 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-host-run-netns\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.652342 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-hostroot\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.652399 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/04466971-127b-403e-af45-dad97b6e0c87-metrics-server-audit-profiles\") pod \"metrics-server-55d84d7794-56n4c\" (UID: \"04466971-127b-403e-af45-dad97b6e0c87\") " pod="openshift-monitoring/metrics-server-55d84d7794-56n4c" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.652422 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a88b1c81-02b5-4c85-9660-5f84c900a946-webhook-certs\") pod \"multus-admission-controller-58c9f8fc64-kr9hd\" (UID: \"a88b1c81-02b5-4c85-9660-5f84c900a946\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-kr9hd" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.652434 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-hostroot\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.652448 27820 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/d5b579a3-74b5-4cd2-ae99-5c1d1480c2f0-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-5dc6c74576-qclrg\" (UID: \"d5b579a3-74b5-4cd2-ae99-5c1d1480c2f0\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-qclrg" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.652478 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/bb7b640f-22be-41a9-8ab2-e7ae817e2eb0-etc-containers\") pod \"operator-controller-controller-manager-57777556ff-rnnfz\" (UID: \"bb7b640f-22be-41a9-8ab2-e7ae817e2eb0\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-rnnfz" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.652508 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9775cc27-53b9-4d21-a98b-84b39ada32ee-kube-api-access\") pod \"installer-3-master-0\" (UID: \"9775cc27-53b9-4d21-a98b-84b39ada32ee\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.652535 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/04466971-127b-403e-af45-dad97b6e0c87-secret-metrics-client-certs\") pod \"metrics-server-55d84d7794-56n4c\" (UID: \"04466971-127b-403e-af45-dad97b6e0c87\") " pod="openshift-monitoring/metrics-server-55d84d7794-56n4c" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.652542 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/bb7b640f-22be-41a9-8ab2-e7ae817e2eb0-etc-containers\") pod 
\"operator-controller-controller-manager-57777556ff-rnnfz\" (UID: \"bb7b640f-22be-41a9-8ab2-e7ae817e2eb0\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-rnnfz" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.652578 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04466971-127b-403e-af45-dad97b6e0c87-client-ca-bundle\") pod \"metrics-server-55d84d7794-56n4c\" (UID: \"04466971-127b-403e-af45-dad97b6e0c87\") " pod="openshift-monitoring/metrics-server-55d84d7794-56n4c" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.652633 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9775cc27-53b9-4d21-a98b-84b39ada32ee-var-lock\") pod \"installer-3-master-0\" (UID: \"9775cc27-53b9-4d21-a98b-84b39ada32ee\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.652823 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/04466971-127b-403e-af45-dad97b6e0c87-audit-log\") pod \"metrics-server-55d84d7794-56n4c\" (UID: \"04466971-127b-403e-af45-dad97b6e0c87\") " pod="openshift-monitoring/metrics-server-55d84d7794-56n4c" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.652850 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/36f4a012744c6465102d09cc67ac63e6-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"36f4a012744c6465102d09cc67ac63e6\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.652982 27820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9775cc27-53b9-4d21-a98b-84b39ada32ee-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"9775cc27-53b9-4d21-a98b-84b39ada32ee\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.653008 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-system-cni-dir\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.653052 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-usr-local-bin\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.653084 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/75cef5aa-93e6-4b8b-9ab1-06809e85883a-var-lock\") pod \"installer-3-retry-1-master-0\" (UID: \"75cef5aa-93e6-4b8b-9ab1-06809e85883a\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.653108 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/75cef5aa-93e6-4b8b-9ab1-06809e85883a-kube-api-access\") pod \"installer-3-retry-1-master-0\" (UID: \"75cef5aa-93e6-4b8b-9ab1-06809e85883a\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 
08:50:21.653131 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8413125cf444e5c95f023c5dd9c6151e-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8413125cf444e5c95f023c5dd9c6151e\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.653165 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/04466971-127b-403e-af45-dad97b6e0c87-secret-metrics-server-tls\") pod \"metrics-server-55d84d7794-56n4c\" (UID: \"04466971-127b-403e-af45-dad97b6e0c87\") " pod="openshift-monitoring/metrics-server-55d84d7794-56n4c" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.653208 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-system-cni-dir\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.653249 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-usr-local-bin\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.653257 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkh2f\" (UniqueName: \"kubernetes.io/projected/04466971-127b-403e-af45-dad97b6e0c87-kube-api-access-wkh2f\") pod \"metrics-server-55d84d7794-56n4c\" (UID: \"04466971-127b-403e-af45-dad97b6e0c87\") " pod="openshift-monitoring/metrics-server-55d84d7794-56n4c" Mar 20 08:50:21.657771 master-0 
kubenswrapper[27820]: I0320 08:50:21.653297 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.653330 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8413125cf444e5c95f023c5dd9c6151e-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"8413125cf444e5c95f023c5dd9c6151e\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.653376 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/36f4a012744c6465102d09cc67ac63e6-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"36f4a012744c6465102d09cc67ac63e6\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.653385 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/123f1ecb-cc03-462b-b76f-7251bf69d3d6-sys\") pod \"node-exporter-rzg98\" (UID: \"123f1ecb-cc03-462b-b76f-7251bf69d3d6\") " pod="openshift-monitoring/node-exporter-rzg98" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.653408 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/123f1ecb-cc03-462b-b76f-7251bf69d3d6-sys\") pod \"node-exporter-rzg98\" (UID: \"123f1ecb-cc03-462b-b76f-7251bf69d3d6\") " pod="openshift-monitoring/node-exporter-rzg98" Mar 20 08:50:21.657771 
master-0 kubenswrapper[27820]: I0320 08:50:21.653333 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.653479 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-multus-cni-dir\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.653505 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-etc-kubernetes\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.653528 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/04466971-127b-403e-af45-dad97b6e0c87-audit-log\") pod \"metrics-server-55d84d7794-56n4c\" (UID: \"04466971-127b-403e-af45-dad97b6e0c87\") " pod="openshift-monitoring/metrics-server-55d84d7794-56n4c" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.653556 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ca56e37d-80ea-432b-a6d9-f4e904a40e10-node-pullsecrets\") pod \"apiserver-64b65cddf5-gx7h7\" (UID: \"ca56e37d-80ea-432b-a6d9-f4e904a40e10\") " pod="openshift-apiserver/apiserver-64b65cddf5-gx7h7" Mar 20 
08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.653582 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-multus-cni-dir\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.653630 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ca56e37d-80ea-432b-a6d9-f4e904a40e10-node-pullsecrets\") pod \"apiserver-64b65cddf5-gx7h7\" (UID: \"ca56e37d-80ea-432b-a6d9-f4e904a40e10\") " pod="openshift-apiserver/apiserver-64b65cddf5-gx7h7" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.653659 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777-etc-kubernetes\") pod \"tuned-zgm52\" (UID: \"97ad1db7-0bf9-4faf-9fa5-0f3df7dab777\") " pod="openshift-cluster-node-tuning-operator/tuned-zgm52" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.653683 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-etc-kubernetes\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.653684 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777-host\") pod \"tuned-zgm52\" (UID: \"97ad1db7-0bf9-4faf-9fa5-0f3df7dab777\") " pod="openshift-cluster-node-tuning-operator/tuned-zgm52" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.653710 27820 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777-host\") pod \"tuned-zgm52\" (UID: \"97ad1db7-0bf9-4faf-9fa5-0f3df7dab777\") " pod="openshift-cluster-node-tuning-operator/tuned-zgm52" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.653711 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-host-run-k8s-cni-cncf-io\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.653727 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-host-run-k8s-cni-cncf-io\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.653746 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-host-run-netns\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.653761 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777-etc-kubernetes\") pod \"tuned-zgm52\" (UID: \"97ad1db7-0bf9-4faf-9fa5-0f3df7dab777\") " pod="openshift-cluster-node-tuning-operator/tuned-zgm52" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.653779 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/9817d1ec-3d7c-49fb-8e41-26f5727ef9e8-host-etc-kube\") pod \"network-operator-7bd846bfc4-x4w25\" (UID: \"9817d1ec-3d7c-49fb-8e41-26f5727ef9e8\") " pod="openshift-network-operator/network-operator-7bd846bfc4-x4w25" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.653792 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-host-run-netns\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.653814 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-host-run-multus-certs\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.653836 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/9817d1ec-3d7c-49fb-8e41-26f5727ef9e8-host-etc-kube\") pod \"network-operator-7bd846bfc4-x4w25\" (UID: \"9817d1ec-3d7c-49fb-8e41-26f5727ef9e8\") " pod="openshift-network-operator/network-operator-7bd846bfc4-x4w25" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.653854 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.653886 27820 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-host-run-multus-certs\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.653900 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777-run\") pod \"tuned-zgm52\" (UID: \"97ad1db7-0bf9-4faf-9fa5-0f3df7dab777\") " pod="openshift-cluster-node-tuning-operator/tuned-zgm52" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.653917 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.653926 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/240ba61a-e439-4f94-b9b3-7903b9b1bc05-config\") pod \"route-controller-manager-7d86cb9b59-smbxv\" (UID: \"240ba61a-e439-4f94-b9b3-7903b9b1bc05\") " pod="openshift-route-controller-manager/route-controller-manager-7d86cb9b59-smbxv" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.653951 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777-run\") pod \"tuned-zgm52\" (UID: \"97ad1db7-0bf9-4faf-9fa5-0f3df7dab777\") " pod="openshift-cluster-node-tuning-operator/tuned-zgm52" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.653952 27820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/6163bd4b-dc83-4e83-8590-5ac4753bda1c-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7dff898856-vk98n\" (UID: \"6163bd4b-dc83-4e83-8590-5ac4753bda1c\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vk98n" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.654004 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.654034 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-host-var-lib-cni-bin\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.654023 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/6163bd4b-dc83-4e83-8590-5ac4753bda1c-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7dff898856-vk98n\" (UID: \"6163bd4b-dc83-4e83-8590-5ac4753bda1c\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vk98n" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.654058 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-host-var-lib-cni-multus\") pod 
\"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.654084 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-host-var-lib-cni-multus\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.654090 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.654109 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.654139 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.654156 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-multus-conf-dir\") pod \"multus-pxqwj\" (UID: 
\"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.654166 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-host-var-lib-cni-bin\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.654193 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-multus-conf-dir\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.654204 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.654223 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/0f725c4a-234c-44e9-95f2-73f31d2b0fd3-rootfs\") pod \"machine-config-daemon-lxv4d\" (UID: \"0f725c4a-234c-44e9-95f2-73f31d2b0fd3\") " pod="openshift-machine-config-operator/machine-config-daemon-lxv4d" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.654242 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: 
\"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.654253 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-data-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.654285 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-data-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.654308 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/0f725c4a-234c-44e9-95f2-73f31d2b0fd3-rootfs\") pod \"machine-config-daemon-lxv4d\" (UID: \"0f725c4a-234c-44e9-95f2-73f31d2b0fd3\") " pod="openshift-machine-config-operator/machine-config-daemon-lxv4d" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.654318 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.653661 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45b3c788-eb83-448a-bc60-90b8ace28382-kube-api-access-7pcbj" (OuterVolumeSpecName: "kube-api-access-7pcbj") pod "45b3c788-eb83-448a-bc60-90b8ace28382" (UID: 
"45b3c788-eb83-448a-bc60-90b8ace28382"). InnerVolumeSpecName "kube-api-access-7pcbj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.654344 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/22ff82cf-0d7d-4955-9b7c-97757acbc021-system-cni-dir\") pod \"multus-additional-cni-plugins-x7vrg\" (UID: \"22ff82cf-0d7d-4955-9b7c-97757acbc021\") " pod="openshift-multus/multus-additional-cni-plugins-x7vrg" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.654408 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/22ff82cf-0d7d-4955-9b7c-97757acbc021-system-cni-dir\") pod \"multus-additional-cni-plugins-x7vrg\" (UID: \"22ff82cf-0d7d-4955-9b7c-97757acbc021\") " pod="openshift-multus/multus-additional-cni-plugins-x7vrg" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.654432 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.654467 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-run-systemd\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.654476 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-run-systemd\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.654543 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-host-slash\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.654576 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/240ba61a-e439-4f94-b9b3-7903b9b1bc05-serving-cert\") pod \"route-controller-manager-7d86cb9b59-smbxv\" (UID: \"240ba61a-e439-4f94-b9b3-7903b9b1bc05\") " pod="openshift-route-controller-manager/route-controller-manager-7d86cb9b59-smbxv" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.654607 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/41ac891d-b41d-43c4-be46-35f39671477a-client-ca\") pod \"controller-manager-bc85986b9-8p79x\" (UID: \"41ac891d-b41d-43c4-be46-35f39671477a\") " pod="openshift-controller-manager/controller-manager-bc85986b9-8p79x" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.654637 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/123f1ecb-cc03-462b-b76f-7251bf69d3d6-node-exporter-wtmp\") pod \"node-exporter-rzg98\" (UID: \"123f1ecb-cc03-462b-b76f-7251bf69d3d6\") " pod="openshift-monitoring/node-exporter-rzg98" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.654666 27820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-host-cni-netd\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.654708 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.654760 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41ac891d-b41d-43c4-be46-35f39671477a-serving-cert\") pod \"controller-manager-bc85986b9-8p79x\" (UID: \"41ac891d-b41d-43c4-be46-35f39671477a\") " pod="openshift-controller-manager/controller-manager-bc85986b9-8p79x" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.654784 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-var-lib-openvswitch\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.654808 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-etc-openvswitch\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: 
I0320 08:50:21.654831 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777-etc-systemd\") pod \"tuned-zgm52\" (UID: \"97ad1db7-0bf9-4faf-9fa5-0f3df7dab777\") " pod="openshift-cluster-node-tuning-operator/tuned-zgm52" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.654862 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41ac891d-b41d-43c4-be46-35f39671477a-config\") pod \"controller-manager-bc85986b9-8p79x\" (UID: \"41ac891d-b41d-43c4-be46-35f39671477a\") " pod="openshift-controller-manager/controller-manager-bc85986b9-8p79x" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.654890 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-resource-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.654915 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777-etc-sysctl-d\") pod \"tuned-zgm52\" (UID: \"97ad1db7-0bf9-4faf-9fa5-0f3df7dab777\") " pod="openshift-cluster-node-tuning-operator/tuned-zgm52" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.654957 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92pwh\" (UniqueName: \"kubernetes.io/projected/240ba61a-e439-4f94-b9b3-7903b9b1bc05-kube-api-access-92pwh\") pod \"route-controller-manager-7d86cb9b59-smbxv\" (UID: \"240ba61a-e439-4f94-b9b3-7903b9b1bc05\") " pod="openshift-route-controller-manager/route-controller-manager-7d86cb9b59-smbxv" Mar 20 
08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.655000 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/22ff82cf-0d7d-4955-9b7c-97757acbc021-cnibin\") pod \"multus-additional-cni-plugins-x7vrg\" (UID: \"22ff82cf-0d7d-4955-9b7c-97757acbc021\") " pod="openshift-multus/multus-additional-cni-plugins-x7vrg" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.655023 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.655057 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04466971-127b-403e-af45-dad97b6e0c87-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-55d84d7794-56n4c\" (UID: \"04466971-127b-403e-af45-dad97b6e0c87\") " pod="openshift-monitoring/metrics-server-55d84d7794-56n4c" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.655080 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-os-release\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.655138 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/22ff82cf-0d7d-4955-9b7c-97757acbc021-os-release\") pod \"multus-additional-cni-plugins-x7vrg\" (UID: 
\"22ff82cf-0d7d-4955-9b7c-97757acbc021\") " pod="openshift-multus/multus-additional-cni-plugins-x7vrg" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.655178 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6a6a187d-5b25-4d63-939e-c04e07369371-audit-dir\") pod \"apiserver-5595498c49-hrfrr\" (UID: \"6a6a187d-5b25-4d63-939e-c04e07369371\") " pod="openshift-oauth-apiserver/apiserver-5595498c49-hrfrr" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.655202 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/bb7b640f-22be-41a9-8ab2-e7ae817e2eb0-etc-docker\") pod \"operator-controller-controller-manager-57777556ff-rnnfz\" (UID: \"bb7b640f-22be-41a9-8ab2-e7ae817e2eb0\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-rnnfz" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.655282 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.655316 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-log-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 20 08:50:21.657771 master-0 kubenswrapper[27820]: I0320 08:50:21.655350 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/36f4a012744c6465102d09cc67ac63e6-resource-dir\") pod 
\"kube-controller-manager-master-0\" (UID: \"36f4a012744c6465102d09cc67ac63e6\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 20 08:50:21.667434 master-0 kubenswrapper[27820]: I0320 08:50:21.655429 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfgfz\" (UniqueName: \"kubernetes.io/projected/d5b579a3-74b5-4cd2-ae99-5c1d1480c2f0-kube-api-access-tfgfz\") pod \"openshift-state-metrics-5dc6c74576-qclrg\" (UID: \"d5b579a3-74b5-4cd2-ae99-5c1d1480c2f0\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-qclrg" Mar 20 08:50:21.667434 master-0 kubenswrapper[27820]: I0320 08:50:21.655460 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/22ff82cf-0d7d-4955-9b7c-97757acbc021-tuning-conf-dir\") pod \"multus-additional-cni-plugins-x7vrg\" (UID: \"22ff82cf-0d7d-4955-9b7c-97757acbc021\") " pod="openshift-multus/multus-additional-cni-plugins-x7vrg" Mar 20 08:50:21.667434 master-0 kubenswrapper[27820]: I0320 08:50:21.655484 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/bca4cc7c-839d-4877-b0aa-c07607fea404-etc-ssl-certs\") pod \"cluster-version-operator-7d58488df-bzstx\" (UID: \"bca4cc7c-839d-4877-b0aa-c07607fea404\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-bzstx" Mar 20 08:50:21.667434 master-0 kubenswrapper[27820]: I0320 08:50:21.655508 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpksq\" (UniqueName: \"kubernetes.io/projected/41ac891d-b41d-43c4-be46-35f39671477a-kube-api-access-zpksq\") pod \"controller-manager-bc85986b9-8p79x\" (UID: \"41ac891d-b41d-43c4-be46-35f39671477a\") " pod="openshift-controller-manager/controller-manager-bc85986b9-8p79x" Mar 20 08:50:21.667434 master-0 kubenswrapper[27820]: I0320 08:50:21.655533 
27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ca56e37d-80ea-432b-a6d9-f4e904a40e10-audit-dir\") pod \"apiserver-64b65cddf5-gx7h7\" (UID: \"ca56e37d-80ea-432b-a6d9-f4e904a40e10\") " pod="openshift-apiserver/apiserver-64b65cddf5-gx7h7" Mar 20 08:50:21.667434 master-0 kubenswrapper[27820]: I0320 08:50:21.655559 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-host-cni-bin\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:50:21.667434 master-0 kubenswrapper[27820]: I0320 08:50:21.655603 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/22ff82cf-0d7d-4955-9b7c-97757acbc021-cnibin\") pod \"multus-additional-cni-plugins-x7vrg\" (UID: \"22ff82cf-0d7d-4955-9b7c-97757acbc021\") " pod="openshift-multus/multus-additional-cni-plugins-x7vrg" Mar 20 08:50:21.667434 master-0 kubenswrapper[27820]: I0320 08:50:21.655627 27820 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/45b3c788-eb83-448a-bc60-90b8ace28382-cni-sysctl-allowlist\") on node \"master-0\" DevicePath \"\"" Mar 20 08:50:21.667434 master-0 kubenswrapper[27820]: I0320 08:50:21.655649 27820 reconciler_common.go:293] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/45b3c788-eb83-448a-bc60-90b8ace28382-ready\") on node \"master-0\" DevicePath \"\"" Mar 20 08:50:21.667434 master-0 kubenswrapper[27820]: I0320 08:50:21.655669 27820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7pcbj\" (UniqueName: \"kubernetes.io/projected/45b3c788-eb83-448a-bc60-90b8ace28382-kube-api-access-7pcbj\") on node \"master-0\" DevicePath \"\"" Mar 20 
08:50:21.667434 master-0 kubenswrapper[27820]: I0320 08:50:21.655630 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-host-slash\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:50:21.667434 master-0 kubenswrapper[27820]: I0320 08:50:21.655715 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/123f1ecb-cc03-462b-b76f-7251bf69d3d6-node-exporter-wtmp\") pod \"node-exporter-rzg98\" (UID: \"123f1ecb-cc03-462b-b76f-7251bf69d3d6\") " pod="openshift-monitoring/node-exporter-rzg98" Mar 20 08:50:21.667434 master-0 kubenswrapper[27820]: I0320 08:50:21.655721 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 20 08:50:21.667434 master-0 kubenswrapper[27820]: I0320 08:50:21.655744 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-host-cni-netd\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:50:21.667434 master-0 kubenswrapper[27820]: I0320 08:50:21.655771 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/1249822f86f23526277d165c0d5d3c19-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"1249822f86f23526277d165c0d5d3c19\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 20 08:50:21.667434 master-0 kubenswrapper[27820]: I0320 08:50:21.655806 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-os-release\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj" Mar 20 08:50:21.667434 master-0 kubenswrapper[27820]: I0320 08:50:21.655837 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-var-lib-openvswitch\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:50:21.667434 master-0 kubenswrapper[27820]: I0320 08:50:21.655847 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777-etc-systemd\") pod \"tuned-zgm52\" (UID: \"97ad1db7-0bf9-4faf-9fa5-0f3df7dab777\") " pod="openshift-cluster-node-tuning-operator/tuned-zgm52" Mar 20 08:50:21.667434 master-0 kubenswrapper[27820]: I0320 08:50:21.655861 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-etc-openvswitch\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:50:21.667434 master-0 kubenswrapper[27820]: I0320 08:50:21.655885 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-resource-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 20 08:50:21.667434 master-0 
kubenswrapper[27820]: I0320 08:50:21.655922 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777-etc-sysctl-d\") pod \"tuned-zgm52\" (UID: \"97ad1db7-0bf9-4faf-9fa5-0f3df7dab777\") " pod="openshift-cluster-node-tuning-operator/tuned-zgm52" Mar 20 08:50:21.667434 master-0 kubenswrapper[27820]: I0320 08:50:21.655928 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/22ff82cf-0d7d-4955-9b7c-97757acbc021-os-release\") pod \"multus-additional-cni-plugins-x7vrg\" (UID: \"22ff82cf-0d7d-4955-9b7c-97757acbc021\") " pod="openshift-multus/multus-additional-cni-plugins-x7vrg" Mar 20 08:50:21.667434 master-0 kubenswrapper[27820]: I0320 08:50:21.656000 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 20 08:50:21.667434 master-0 kubenswrapper[27820]: I0320 08:50:21.655998 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/094204df314fe45bd5af12ca1b4622bb-log-dir\") pod \"etcd-master-0\" (UID: \"094204df314fe45bd5af12ca1b4622bb\") " pod="openshift-etcd/etcd-master-0" Mar 20 08:50:21.667434 master-0 kubenswrapper[27820]: I0320 08:50:21.656019 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/bb7b640f-22be-41a9-8ab2-e7ae817e2eb0-etc-docker\") pod \"operator-controller-controller-manager-57777556ff-rnnfz\" (UID: \"bb7b640f-22be-41a9-8ab2-e7ae817e2eb0\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-rnnfz" Mar 20 08:50:21.667434 master-0 
kubenswrapper[27820]: I0320 08:50:21.656037 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6a6a187d-5b25-4d63-939e-c04e07369371-audit-dir\") pod \"apiserver-5595498c49-hrfrr\" (UID: \"6a6a187d-5b25-4d63-939e-c04e07369371\") " pod="openshift-oauth-apiserver/apiserver-5595498c49-hrfrr" Mar 20 08:50:21.667434 master-0 kubenswrapper[27820]: I0320 08:50:21.656073 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ca56e37d-80ea-432b-a6d9-f4e904a40e10-audit-dir\") pod \"apiserver-64b65cddf5-gx7h7\" (UID: \"ca56e37d-80ea-432b-a6d9-f4e904a40e10\") " pod="openshift-apiserver/apiserver-64b65cddf5-gx7h7" Mar 20 08:50:21.667434 master-0 kubenswrapper[27820]: I0320 08:50:21.656111 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-host-cni-bin\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:50:21.667434 master-0 kubenswrapper[27820]: I0320 08:50:21.656116 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/bca4cc7c-839d-4877-b0aa-c07607fea404-etc-ssl-certs\") pod \"cluster-version-operator-7d58488df-bzstx\" (UID: \"bca4cc7c-839d-4877-b0aa-c07607fea404\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-bzstx" Mar 20 08:50:21.667434 master-0 kubenswrapper[27820]: I0320 08:50:21.656143 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/36f4a012744c6465102d09cc67ac63e6-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"36f4a012744c6465102d09cc67ac63e6\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 20 
08:50:21.667434 master-0 kubenswrapper[27820]: I0320 08:50:21.656149 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/22ff82cf-0d7d-4955-9b7c-97757acbc021-tuning-conf-dir\") pod \"multus-additional-cni-plugins-x7vrg\" (UID: \"22ff82cf-0d7d-4955-9b7c-97757acbc021\") " pod="openshift-multus/multus-additional-cni-plugins-x7vrg" Mar 20 08:50:21.667434 master-0 kubenswrapper[27820]: I0320 08:50:21.665587 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Mar 20 08:50:21.685780 master-0 kubenswrapper[27820]: I0320 08:50:21.685737 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Mar 20 08:50:21.706760 master-0 kubenswrapper[27820]: I0320 08:50:21.706708 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Mar 20 08:50:21.712055 master-0 kubenswrapper[27820]: I0320 08:50:21.712012 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ca56e37d-80ea-432b-a6d9-f4e904a40e10-audit\") pod \"apiserver-64b65cddf5-gx7h7\" (UID: \"ca56e37d-80ea-432b-a6d9-f4e904a40e10\") " pod="openshift-apiserver/apiserver-64b65cddf5-gx7h7" Mar 20 08:50:21.726153 master-0 kubenswrapper[27820]: I0320 08:50:21.726083 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Mar 20 08:50:21.745022 master-0 kubenswrapper[27820]: I0320 08:50:21.744977 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 20 08:50:21.749985 master-0 kubenswrapper[27820]: I0320 08:50:21.749937 27820 scope.go:117] "RemoveContainer" containerID="4af0d14e2080acfab9b4be1c21f5c397bb2b57510a3ab1d14b3ae883125de902" Mar 20 08:50:21.750660 master-0 
kubenswrapper[27820]: I0320 08:50:21.750435 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 20 08:50:21.750660 master-0 kubenswrapper[27820]: I0320 08:50:21.750525 27820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 20 08:50:21.750660 master-0 kubenswrapper[27820]: I0320 08:50:21.750537 27820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 20 08:50:21.750660 master-0 kubenswrapper[27820]: I0320 08:50:21.750548 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 20 08:50:21.751168 master-0 kubenswrapper[27820]: I0320 08:50:21.750988 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 20 08:50:21.752286 master-0 kubenswrapper[27820]: I0320 08:50:21.752235 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-5dbbb8b86f-vhrdf_74bebf0b-6727-4959-8239-a9389e630524/multus-admission-controller/0.log" Mar 20 08:50:21.752371 master-0 kubenswrapper[27820]: I0320 08:50:21.752335 27820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-vhrdf" Mar 20 08:50:21.756891 master-0 kubenswrapper[27820]: I0320 08:50:21.756850 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/75cef5aa-93e6-4b8b-9ab1-06809e85883a-kubelet-dir\") pod \"installer-3-retry-1-master-0\" (UID: \"75cef5aa-93e6-4b8b-9ab1-06809e85883a\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Mar 20 08:50:21.757130 master-0 kubenswrapper[27820]: I0320 08:50:21.757069 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9775cc27-53b9-4d21-a98b-84b39ada32ee-kube-api-access\") pod \"installer-3-master-0\" (UID: \"9775cc27-53b9-4d21-a98b-84b39ada32ee\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 20 08:50:21.757246 master-0 kubenswrapper[27820]: I0320 08:50:21.757175 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9775cc27-53b9-4d21-a98b-84b39ada32ee-var-lock\") pod \"installer-3-master-0\" (UID: \"9775cc27-53b9-4d21-a98b-84b39ada32ee\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 20 08:50:21.757246 master-0 kubenswrapper[27820]: I0320 08:50:21.757232 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9775cc27-53b9-4d21-a98b-84b39ada32ee-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"9775cc27-53b9-4d21-a98b-84b39ada32ee\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 20 08:50:21.757897 master-0 kubenswrapper[27820]: I0320 08:50:21.757280 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/75cef5aa-93e6-4b8b-9ab1-06809e85883a-var-lock\") pod \"installer-3-retry-1-master-0\" 
(UID: \"75cef5aa-93e6-4b8b-9ab1-06809e85883a\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Mar 20 08:50:21.757897 master-0 kubenswrapper[27820]: I0320 08:50:21.757305 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/75cef5aa-93e6-4b8b-9ab1-06809e85883a-kube-api-access\") pod \"installer-3-retry-1-master-0\" (UID: \"75cef5aa-93e6-4b8b-9ab1-06809e85883a\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Mar 20 08:50:21.759208 master-0 kubenswrapper[27820]: I0320 08:50:21.757890 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9775cc27-53b9-4d21-a98b-84b39ada32ee-var-lock\") pod \"installer-3-master-0\" (UID: \"9775cc27-53b9-4d21-a98b-84b39ada32ee\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 20 08:50:21.759208 master-0 kubenswrapper[27820]: I0320 08:50:21.757951 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/75cef5aa-93e6-4b8b-9ab1-06809e85883a-kubelet-dir\") pod \"installer-3-retry-1-master-0\" (UID: \"75cef5aa-93e6-4b8b-9ab1-06809e85883a\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Mar 20 08:50:21.759208 master-0 kubenswrapper[27820]: I0320 08:50:21.758168 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9775cc27-53b9-4d21-a98b-84b39ada32ee-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"9775cc27-53b9-4d21-a98b-84b39ada32ee\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 20 08:50:21.759208 master-0 kubenswrapper[27820]: I0320 08:50:21.758411 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/75cef5aa-93e6-4b8b-9ab1-06809e85883a-var-lock\") pod 
\"installer-3-retry-1-master-0\" (UID: \"75cef5aa-93e6-4b8b-9ab1-06809e85883a\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Mar 20 08:50:21.759208 master-0 kubenswrapper[27820]: I0320 08:50:21.759154 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 20 08:50:21.765228 master-0 kubenswrapper[27820]: I0320 08:50:21.765180 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Mar 20 08:50:21.770961 master-0 kubenswrapper[27820]: I0320 08:50:21.770902 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/1746482a-d1a3-4eac-8bc9-643b6af75163-signing-key\") pod \"service-ca-79bc6b8d76-trbxh\" (UID: \"1746482a-d1a3-4eac-8bc9-643b6af75163\") " pod="openshift-service-ca/service-ca-79bc6b8d76-trbxh" Mar 20 08:50:21.798631 master-0 kubenswrapper[27820]: I0320 08:50:21.787052 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 20 08:50:21.799645 master-0 kubenswrapper[27820]: I0320 08:50:21.799580 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/1746482a-d1a3-4eac-8bc9-643b6af75163-signing-cabundle\") pod \"service-ca-79bc6b8d76-trbxh\" (UID: \"1746482a-d1a3-4eac-8bc9-643b6af75163\") " pod="openshift-service-ca/service-ca-79bc6b8d76-trbxh" Mar 20 08:50:21.813359 master-0 kubenswrapper[27820]: I0320 08:50:21.813307 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Mar 20 08:50:21.822539 master-0 kubenswrapper[27820]: I0320 08:50:21.822163 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0" Mar 20 08:50:21.822848 master-0 kubenswrapper[27820]: I0320 08:50:21.822756 27820 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0" Mar 20 08:50:21.824895 master-0 kubenswrapper[27820]: I0320 08:50:21.824851 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Mar 20 08:50:21.875126 master-0 kubenswrapper[27820]: I0320 08:50:21.874867 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/75cef5aa-93e6-4b8b-9ab1-06809e85883a-kubelet-dir\") pod \"75cef5aa-93e6-4b8b-9ab1-06809e85883a\" (UID: \"75cef5aa-93e6-4b8b-9ab1-06809e85883a\") " Mar 20 08:50:21.875126 master-0 kubenswrapper[27820]: I0320 08:50:21.874946 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/75cef5aa-93e6-4b8b-9ab1-06809e85883a-var-lock\") pod \"75cef5aa-93e6-4b8b-9ab1-06809e85883a\" (UID: \"75cef5aa-93e6-4b8b-9ab1-06809e85883a\") " Mar 20 08:50:21.875126 master-0 kubenswrapper[27820]: I0320 08:50:21.874992 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9775cc27-53b9-4d21-a98b-84b39ada32ee-kubelet-dir\") pod \"9775cc27-53b9-4d21-a98b-84b39ada32ee\" (UID: \"9775cc27-53b9-4d21-a98b-84b39ada32ee\") " Mar 20 08:50:21.875126 master-0 kubenswrapper[27820]: I0320 08:50:21.875040 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9775cc27-53b9-4d21-a98b-84b39ada32ee-var-lock\") pod \"9775cc27-53b9-4d21-a98b-84b39ada32ee\" (UID: \"9775cc27-53b9-4d21-a98b-84b39ada32ee\") " Mar 20 08:50:21.875616 master-0 kubenswrapper[27820]: I0320 08:50:21.875502 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9775cc27-53b9-4d21-a98b-84b39ada32ee-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod 
"9775cc27-53b9-4d21-a98b-84b39ada32ee" (UID: "9775cc27-53b9-4d21-a98b-84b39ada32ee"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:50:21.875616 master-0 kubenswrapper[27820]: I0320 08:50:21.875567 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75cef5aa-93e6-4b8b-9ab1-06809e85883a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "75cef5aa-93e6-4b8b-9ab1-06809e85883a" (UID: "75cef5aa-93e6-4b8b-9ab1-06809e85883a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:50:21.875616 master-0 kubenswrapper[27820]: I0320 08:50:21.875575 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9775cc27-53b9-4d21-a98b-84b39ada32ee-var-lock" (OuterVolumeSpecName: "var-lock") pod "9775cc27-53b9-4d21-a98b-84b39ada32ee" (UID: "9775cc27-53b9-4d21-a98b-84b39ada32ee"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:50:21.875764 master-0 kubenswrapper[27820]: I0320 08:50:21.875501 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/75cef5aa-93e6-4b8b-9ab1-06809e85883a-var-lock" (OuterVolumeSpecName: "var-lock") pod "75cef5aa-93e6-4b8b-9ab1-06809e85883a" (UID: "75cef5aa-93e6-4b8b-9ab1-06809e85883a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:50:21.878669 master-0 kubenswrapper[27820]: I0320 08:50:21.877819 27820 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9775cc27-53b9-4d21-a98b-84b39ada32ee-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 20 08:50:21.878669 master-0 kubenswrapper[27820]: I0320 08:50:21.877858 27820 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/75cef5aa-93e6-4b8b-9ab1-06809e85883a-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 20 08:50:21.878669 master-0 kubenswrapper[27820]: I0320 08:50:21.877873 27820 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/75cef5aa-93e6-4b8b-9ab1-06809e85883a-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 20 08:50:21.878669 master-0 kubenswrapper[27820]: I0320 08:50:21.877889 27820 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9775cc27-53b9-4d21-a98b-84b39ada32ee-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 20 08:50:21.879872 master-0 kubenswrapper[27820]: I0320 08:50:21.879829 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 20 08:50:21.880216 master-0 kubenswrapper[27820]: I0320 08:50:21.880184 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 20 08:50:21.880701 master-0 kubenswrapper[27820]: I0320 08:50:21.880590 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/41253bde-5d09-4ff0-8e7c-4a21fe2b7106-config-volume\") pod \"dns-default-gskz6\" (UID: \"41253bde-5d09-4ff0-8e7c-4a21fe2b7106\") " pod="openshift-dns/dns-default-gskz6" Mar 20 08:50:21.882768 master-0 kubenswrapper[27820]: I0320 08:50:21.882498 27820 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0"
Mar 20 08:50:21.885326 master-0 kubenswrapper[27820]: I0320 08:50:21.885176 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Mar 20 08:50:21.887878 master-0 kubenswrapper[27820]: I0320 08:50:21.887835 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/41253bde-5d09-4ff0-8e7c-4a21fe2b7106-metrics-tls\") pod \"dns-default-gskz6\" (UID: \"41253bde-5d09-4ff0-8e7c-4a21fe2b7106\") " pod="openshift-dns/dns-default-gskz6"
Mar 20 08:50:21.907217 master-0 kubenswrapper[27820]: I0320 08:50:21.907142 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Mar 20 08:50:21.915800 master-0 kubenswrapper[27820]: I0320 08:50:21.915742 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/e89571b2-098c-495b-9b53-c4ebd95296ab-default-certificate\") pod \"router-default-7dcf5569b5-kvmtp\" (UID: \"e89571b2-098c-495b-9b53-c4ebd95296ab\") " pod="openshift-ingress/router-default-7dcf5569b5-kvmtp"
Mar 20 08:50:21.926007 master-0 kubenswrapper[27820]: I0320 08:50:21.925710 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Mar 20 08:50:21.933870 master-0 kubenswrapper[27820]: I0320 08:50:21.933832 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/e89571b2-098c-495b-9b53-c4ebd95296ab-stats-auth\") pod \"router-default-7dcf5569b5-kvmtp\" (UID: \"e89571b2-098c-495b-9b53-c4ebd95296ab\") " pod="openshift-ingress/router-default-7dcf5569b5-kvmtp"
Mar 20 08:50:21.947727 master-0 kubenswrapper[27820]: I0320 08:50:21.946798 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Mar 20 08:50:21.948712 master-0 kubenswrapper[27820]: I0320 08:50:21.948654 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e89571b2-098c-495b-9b53-c4ebd95296ab-metrics-certs\") pod \"router-default-7dcf5569b5-kvmtp\" (UID: \"e89571b2-098c-495b-9b53-c4ebd95296ab\") " pod="openshift-ingress/router-default-7dcf5569b5-kvmtp"
Mar 20 08:50:21.965499 master-0 kubenswrapper[27820]: I0320 08:50:21.965448 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Mar 20 08:50:21.974427 master-0 kubenswrapper[27820]: I0320 08:50:21.970847 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e89571b2-098c-495b-9b53-c4ebd95296ab-service-ca-bundle\") pod \"router-default-7dcf5569b5-kvmtp\" (UID: \"e89571b2-098c-495b-9b53-c4ebd95296ab\") " pod="openshift-ingress/router-default-7dcf5569b5-kvmtp"
Mar 20 08:50:21.984925 master-0 kubenswrapper[27820]: I0320 08:50:21.984884 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Mar 20 08:50:22.007236 master-0 kubenswrapper[27820]: I0320 08:50:22.007185 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt"
Mar 20 08:50:22.030297 master-0 kubenswrapper[27820]: I0320 08:50:22.024868 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Mar 20 08:50:22.030297 master-0 kubenswrapper[27820]: I0320 08:50:22.028658 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6a6a187d-5b25-4d63-939e-c04e07369371-encryption-config\") pod \"apiserver-5595498c49-hrfrr\" (UID: \"6a6a187d-5b25-4d63-939e-c04e07369371\") " pod="openshift-oauth-apiserver/apiserver-5595498c49-hrfrr"
Mar 20 08:50:22.061228 master-0 kubenswrapper[27820]: I0320 08:50:22.061147 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle"
Mar 20 08:50:22.065542 master-0 kubenswrapper[27820]: I0320 08:50:22.065499 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Mar 20 08:50:22.066689 master-0 kubenswrapper[27820]: I0320 08:50:22.066648 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ca56e37d-80ea-432b-a6d9-f4e904a40e10-etcd-client\") pod \"apiserver-64b65cddf5-gx7h7\" (UID: \"ca56e37d-80ea-432b-a6d9-f4e904a40e10\") " pod="openshift-apiserver/apiserver-64b65cddf5-gx7h7"
Mar 20 08:50:22.085594 master-0 kubenswrapper[27820]: I0320 08:50:22.085291 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt"
Mar 20 08:50:22.085928 master-0 kubenswrapper[27820]: I0320 08:50:22.085893 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/bb7b640f-22be-41a9-8ab2-e7ae817e2eb0-ca-certs\") pod \"operator-controller-controller-manager-57777556ff-rnnfz\" (UID: \"bb7b640f-22be-41a9-8ab2-e7ae817e2eb0\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-rnnfz"
Mar 20 08:50:22.087599 master-0 kubenswrapper[27820]: I0320 08:50:22.087554 27820 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID=""
Mar 20 08:50:22.105556 master-0 kubenswrapper[27820]: I0320 08:50:22.105501 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Mar 20 08:50:22.108675 master-0 kubenswrapper[27820]: I0320 08:50:22.108627 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca56e37d-80ea-432b-a6d9-f4e904a40e10-serving-cert\") pod \"apiserver-64b65cddf5-gx7h7\" (UID: \"ca56e37d-80ea-432b-a6d9-f4e904a40e10\") " pod="openshift-apiserver/apiserver-64b65cddf5-gx7h7"
Mar 20 08:50:22.125420 master-0 kubenswrapper[27820]: I0320 08:50:22.125368 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Mar 20 08:50:22.130099 master-0 kubenswrapper[27820]: I0320 08:50:22.130044 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ca56e37d-80ea-432b-a6d9-f4e904a40e10-encryption-config\") pod \"apiserver-64b65cddf5-gx7h7\" (UID: \"ca56e37d-80ea-432b-a6d9-f4e904a40e10\") " pod="openshift-apiserver/apiserver-64b65cddf5-gx7h7"
Mar 20 08:50:22.146786 master-0 kubenswrapper[27820]: I0320 08:50:22.146667 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Mar 20 08:50:22.164772 master-0 kubenswrapper[27820]: I0320 08:50:22.164705 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Mar 20 08:50:22.186001 master-0 kubenswrapper[27820]: I0320 08:50:22.185946 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Mar 20 08:50:22.191652 master-0 kubenswrapper[27820]: I0320 08:50:22.191609 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6a6a187d-5b25-4d63-939e-c04e07369371-audit-policies\") pod \"apiserver-5595498c49-hrfrr\" (UID: \"6a6a187d-5b25-4d63-939e-c04e07369371\") " pod="openshift-oauth-apiserver/apiserver-5595498c49-hrfrr"
Mar 20 08:50:22.204671 master-0 kubenswrapper[27820]: I0320 08:50:22.204626 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Mar 20 08:50:22.212237 master-0 kubenswrapper[27820]: I0320 08:50:22.212189 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ca56e37d-80ea-432b-a6d9-f4e904a40e10-etcd-serving-ca\") pod \"apiserver-64b65cddf5-gx7h7\" (UID: \"ca56e37d-80ea-432b-a6d9-f4e904a40e10\") " pod="openshift-apiserver/apiserver-64b65cddf5-gx7h7"
Mar 20 08:50:22.225088 master-0 kubenswrapper[27820]: I0320 08:50:22.225030 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Mar 20 08:50:22.231847 master-0 kubenswrapper[27820]: I0320 08:50:22.231815 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6a6a187d-5b25-4d63-939e-c04e07369371-etcd-client\") pod \"apiserver-5595498c49-hrfrr\" (UID: \"6a6a187d-5b25-4d63-939e-c04e07369371\") " pod="openshift-oauth-apiserver/apiserver-5595498c49-hrfrr"
Mar 20 08:50:22.246104 master-0 kubenswrapper[27820]: I0320 08:50:22.246053 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Mar 20 08:50:22.252994 master-0 kubenswrapper[27820]: I0320 08:50:22.252953 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca56e37d-80ea-432b-a6d9-f4e904a40e10-config\") pod \"apiserver-64b65cddf5-gx7h7\" (UID: \"ca56e37d-80ea-432b-a6d9-f4e904a40e10\") " pod="openshift-apiserver/apiserver-64b65cddf5-gx7h7"
Mar 20 08:50:22.265494 master-0 kubenswrapper[27820]: I0320 08:50:22.265467 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Mar 20 08:50:22.273320 master-0 kubenswrapper[27820]: I0320 08:50:22.273290 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ca56e37d-80ea-432b-a6d9-f4e904a40e10-image-import-ca\") pod \"apiserver-64b65cddf5-gx7h7\" (UID: \"ca56e37d-80ea-432b-a6d9-f4e904a40e10\") " pod="openshift-apiserver/apiserver-64b65cddf5-gx7h7"
Mar 20 08:50:22.285633 master-0 kubenswrapper[27820]: I0320 08:50:22.285581 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Mar 20 08:50:22.305974 master-0 kubenswrapper[27820]: I0320 08:50:22.305901 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Mar 20 08:50:22.310508 master-0 kubenswrapper[27820]: I0320 08:50:22.310472 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a6a187d-5b25-4d63-939e-c04e07369371-serving-cert\") pod \"apiserver-5595498c49-hrfrr\" (UID: \"6a6a187d-5b25-4d63-939e-c04e07369371\") " pod="openshift-oauth-apiserver/apiserver-5595498c49-hrfrr"
Mar 20 08:50:22.334285 master-0 kubenswrapper[27820]: I0320 08:50:22.334205 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Mar 20 08:50:22.342157 master-0 kubenswrapper[27820]: I0320 08:50:22.342080 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ca56e37d-80ea-432b-a6d9-f4e904a40e10-trusted-ca-bundle\") pod \"apiserver-64b65cddf5-gx7h7\" (UID: \"ca56e37d-80ea-432b-a6d9-f4e904a40e10\") " pod="openshift-apiserver/apiserver-64b65cddf5-gx7h7"
Mar 20 08:50:22.345178 master-0 kubenswrapper[27820]: I0320 08:50:22.345135 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Mar 20 08:50:22.352250 master-0 kubenswrapper[27820]: I0320 08:50:22.352192 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0cb6d987-4b59-4fd9-889a-3250c12a726c-webhook-cert\") pod \"packageserver-6f5545c99f-6sl9d\" (UID: \"0cb6d987-4b59-4fd9-889a-3250c12a726c\") " pod="openshift-operator-lifecycle-manager/packageserver-6f5545c99f-6sl9d"
Mar 20 08:50:22.352250 master-0 kubenswrapper[27820]: I0320 08:50:22.352192 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0cb6d987-4b59-4fd9-889a-3250c12a726c-apiservice-cert\") pod \"packageserver-6f5545c99f-6sl9d\" (UID: \"0cb6d987-4b59-4fd9-889a-3250c12a726c\") " pod="openshift-operator-lifecycle-manager/packageserver-6f5545c99f-6sl9d"
Mar 20 08:50:22.367449 master-0 kubenswrapper[27820]: I0320 08:50:22.367354 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Mar 20 08:50:22.377895 master-0 kubenswrapper[27820]: I0320 08:50:22.377822 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6a6a187d-5b25-4d63-939e-c04e07369371-etcd-serving-ca\") pod \"apiserver-5595498c49-hrfrr\" (UID: \"6a6a187d-5b25-4d63-939e-c04e07369371\") " pod="openshift-oauth-apiserver/apiserver-5595498c49-hrfrr"
Mar 20 08:50:22.385108 master-0 kubenswrapper[27820]: I0320 08:50:22.385053 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Mar 20 08:50:22.388443 master-0 kubenswrapper[27820]: I0320 08:50:22.388403 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a6a187d-5b25-4d63-939e-c04e07369371-trusted-ca-bundle\") pod \"apiserver-5595498c49-hrfrr\" (UID: \"6a6a187d-5b25-4d63-939e-c04e07369371\") " pod="openshift-oauth-apiserver/apiserver-5595498c49-hrfrr"
Mar 20 08:50:22.403648 master-0 kubenswrapper[27820]: I0320 08:50:22.403459 27820 request.go:700] Waited for 1.003695548s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-marketplace/secrets?fieldSelector=metadata.name%3Dcommunity-operators-dockercfg-wsbtn&limit=500&resourceVersion=0
Mar 20 08:50:22.405580 master-0 kubenswrapper[27820]: I0320 08:50:22.405543 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-wsbtn"
Mar 20 08:50:22.425859 master-0 kubenswrapper[27820]: I0320 08:50:22.425789 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Mar 20 08:50:22.433128 master-0 kubenswrapper[27820]: I0320 08:50:22.433076 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bca4cc7c-839d-4877-b0aa-c07607fea404-serving-cert\") pod \"cluster-version-operator-7d58488df-bzstx\" (UID: \"bca4cc7c-839d-4877-b0aa-c07607fea404\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-bzstx"
Mar 20 08:50:22.446434 master-0 kubenswrapper[27820]: I0320 08:50:22.446349 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Mar 20 08:50:22.451165 master-0 kubenswrapper[27820]: I0320 08:50:22.451095 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bca4cc7c-839d-4877-b0aa-c07607fea404-service-ca\") pod \"cluster-version-operator-7d58488df-bzstx\" (UID: \"bca4cc7c-839d-4877-b0aa-c07607fea404\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-bzstx"
Mar 20 08:50:22.465366 master-0 kubenswrapper[27820]: I0320 08:50:22.465317 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Mar 20 08:50:22.485880 master-0 kubenswrapper[27820]: I0320 08:50:22.485821 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-qgtl7"
Mar 20 08:50:22.500882 master-0 kubenswrapper[27820]: E0320 08:50:22.500826 27820 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: failed to sync secret cache: timed out waiting for the condition
Mar 20 08:50:22.501062 master-0 kubenswrapper[27820]: E0320 08:50:22.500926 27820 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 20 08:50:22.501062 master-0 kubenswrapper[27820]: E0320 08:50:22.500959 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56970553-2ac8-4cb5-a12a-b7c1e777c587-samples-operator-tls podName:56970553-2ac8-4cb5-a12a-b7c1e777c587 nodeName:}" failed. No retries permitted until 2026-03-20 08:50:23.000933509 +0000 UTC m=+33.096142843 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/56970553-2ac8-4cb5-a12a-b7c1e777c587-samples-operator-tls") pod "cluster-samples-operator-85f7577d78-26fw9" (UID: "56970553-2ac8-4cb5-a12a-b7c1e777c587") : failed to sync secret cache: timed out waiting for the condition
Mar 20 08:50:22.501062 master-0 kubenswrapper[27820]: E0320 08:50:22.501045 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/123f1ecb-cc03-462b-b76f-7251bf69d3d6-metrics-client-ca podName:123f1ecb-cc03-462b-b76f-7251bf69d3d6 nodeName:}" failed. No retries permitted until 2026-03-20 08:50:23.001022591 +0000 UTC m=+33.096231745 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/123f1ecb-cc03-462b-b76f-7251bf69d3d6-metrics-client-ca") pod "node-exporter-rzg98" (UID: "123f1ecb-cc03-462b-b76f-7251bf69d3d6") : failed to sync configmap cache: timed out waiting for the condition
Mar 20 08:50:22.502086 master-0 kubenswrapper[27820]: E0320 08:50:22.502050 27820 configmap.go:193] Couldn't get configMap openshift-cloud-controller-manager-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Mar 20 08:50:22.502139 master-0 kubenswrapper[27820]: E0320 08:50:22.502050 27820 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: failed to sync secret cache: timed out waiting for the condition
Mar 20 08:50:22.502336 master-0 kubenswrapper[27820]: E0320 08:50:22.502108 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6163bd4b-dc83-4e83-8590-5ac4753bda1c-auth-proxy-config podName:6163bd4b-dc83-4e83-8590-5ac4753bda1c nodeName:}" failed. No retries permitted until 2026-03-20 08:50:23.002095701 +0000 UTC m=+33.097304845 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/6163bd4b-dc83-4e83-8590-5ac4753bda1c-auth-proxy-config") pod "cluster-cloud-controller-manager-operator-7dff898856-vk98n" (UID: "6163bd4b-dc83-4e83-8590-5ac4753bda1c") : failed to sync configmap cache: timed out waiting for the condition
Mar 20 08:50:22.502336 master-0 kubenswrapper[27820]: E0320 08:50:22.502308 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44bc88d8-9e01-4521-a704-85d9ca095baa-kube-state-metrics-tls podName:44bc88d8-9e01-4521-a704-85d9ca095baa nodeName:}" failed. No retries permitted until 2026-03-20 08:50:23.002239024 +0000 UTC m=+33.097448358 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/44bc88d8-9e01-4521-a704-85d9ca095baa-kube-state-metrics-tls") pod "kube-state-metrics-7bbc969446-28l2x" (UID: "44bc88d8-9e01-4521-a704-85d9ca095baa") : failed to sync secret cache: timed out waiting for the condition
Mar 20 08:50:22.503325 master-0 kubenswrapper[27820]: E0320 08:50:22.503282 27820 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: failed to sync secret cache: timed out waiting for the condition
Mar 20 08:50:22.503325 master-0 kubenswrapper[27820]: E0320 08:50:22.503305 27820 secret.go:189] Couldn't get secret openshift-monitoring/kube-state-metrics-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition
Mar 20 08:50:22.503410 master-0 kubenswrapper[27820]: E0320 08:50:22.503340 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74bebf0b-6727-4959-8239-a9389e630524-webhook-certs podName:74bebf0b-6727-4959-8239-a9389e630524 nodeName:}" failed. No retries permitted until 2026-03-20 08:50:23.003326825 +0000 UTC m=+33.098536169 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/74bebf0b-6727-4959-8239-a9389e630524-webhook-certs") pod "multus-admission-controller-5dbbb8b86f-vhrdf" (UID: "74bebf0b-6727-4959-8239-a9389e630524") : failed to sync secret cache: timed out waiting for the condition
Mar 20 08:50:22.503410 master-0 kubenswrapper[27820]: E0320 08:50:22.503373 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/44bc88d8-9e01-4521-a704-85d9ca095baa-kube-state-metrics-kube-rbac-proxy-config podName:44bc88d8-9e01-4521-a704-85d9ca095baa nodeName:}" failed. No retries permitted until 2026-03-20 08:50:23.003354696 +0000 UTC m=+33.098564030 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/44bc88d8-9e01-4521-a704-85d9ca095baa-kube-state-metrics-kube-rbac-proxy-config") pod "kube-state-metrics-7bbc969446-28l2x" (UID: "44bc88d8-9e01-4521-a704-85d9ca095baa") : failed to sync secret cache: timed out waiting for the condition
Mar 20 08:50:22.503581 master-0 kubenswrapper[27820]: E0320 08:50:22.503548 27820 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: failed to sync secret cache: timed out waiting for the condition
Mar 20 08:50:22.503643 master-0 kubenswrapper[27820]: E0320 08:50:22.503610 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0ad95adc-2e0f-4e95-94e7-66e6d240a930-prometheus-operator-tls podName:0ad95adc-2e0f-4e95-94e7-66e6d240a930 nodeName:}" failed. No retries permitted until 2026-03-20 08:50:23.003594993 +0000 UTC m=+33.098804147 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/0ad95adc-2e0f-4e95-94e7-66e6d240a930-prometheus-operator-tls") pod "prometheus-operator-6c8df6d4b-2lwqr" (UID: "0ad95adc-2e0f-4e95-94e7-66e6d240a930") : failed to sync secret cache: timed out waiting for the condition
Mar 20 08:50:22.504693 master-0 kubenswrapper[27820]: E0320 08:50:22.504662 27820 secret.go:189] Couldn't get secret openshift-machine-config-operator/node-bootstrapper-token: failed to sync secret cache: timed out waiting for the condition
Mar 20 08:50:22.504755 master-0 kubenswrapper[27820]: E0320 08:50:22.504733 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a79bf8fb-19fb-4881-b9e3-b5b21fde0e1d-node-bootstrap-token podName:a79bf8fb-19fb-4881-b9e3-b5b21fde0e1d nodeName:}" failed. No retries permitted until 2026-03-20 08:50:23.004720054 +0000 UTC m=+33.099929218 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-bootstrap-token" (UniqueName: "kubernetes.io/secret/a79bf8fb-19fb-4881-b9e3-b5b21fde0e1d-node-bootstrap-token") pod "machine-config-server-6bd59" (UID: "a79bf8fb-19fb-4881-b9e3-b5b21fde0e1d") : failed to sync secret cache: timed out waiting for the condition
Mar 20 08:50:22.505392 master-0 kubenswrapper[27820]: I0320 08:50:22.505361 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-2vc5h"
Mar 20 08:50:22.506757 master-0 kubenswrapper[27820]: E0320 08:50:22.506730 27820 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: failed to sync secret cache: timed out waiting for the condition
Mar 20 08:50:22.506801 master-0 kubenswrapper[27820]: E0320 08:50:22.506731 27820 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/machine-approver-config: failed to sync configmap cache: timed out waiting for the condition
Mar 20 08:50:22.506832 master-0 kubenswrapper[27820]: E0320 08:50:22.506789 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d125bc5-08ce-434a-bde7-0ba8fc0169ea-cert podName:2d125bc5-08ce-434a-bde7-0ba8fc0169ea nodeName:}" failed. No retries permitted until 2026-03-20 08:50:23.006773311 +0000 UTC m=+33.101982465 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2d125bc5-08ce-434a-bde7-0ba8fc0169ea-cert") pod "cluster-autoscaler-operator-866dc4744-626qm" (UID: "2d125bc5-08ce-434a-bde7-0ba8fc0169ea") : failed to sync secret cache: timed out waiting for the condition
Mar 20 08:50:22.506832 master-0 kubenswrapper[27820]: E0320 08:50:22.506822 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/14fe2256-ce3d-4c70-9e6b-0e3c7d4d96dc-config podName:14fe2256-ce3d-4c70-9e6b-0e3c7d4d96dc nodeName:}" failed. No retries permitted until 2026-03-20 08:50:23.006810782 +0000 UTC m=+33.102019926 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/14fe2256-ce3d-4c70-9e6b-0e3c7d4d96dc-config") pod "machine-approver-5c6485487f-897zl" (UID: "14fe2256-ce3d-4c70-9e6b-0e3c7d4d96dc") : failed to sync configmap cache: timed out waiting for the condition
Mar 20 08:50:22.508007 master-0 kubenswrapper[27820]: E0320 08:50:22.507958 27820 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 20 08:50:22.508107 master-0 kubenswrapper[27820]: E0320 08:50:22.508086 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/44bc88d8-9e01-4521-a704-85d9ca095baa-metrics-client-ca podName:44bc88d8-9e01-4521-a704-85d9ca095baa nodeName:}" failed. No retries permitted until 2026-03-20 08:50:23.008061576 +0000 UTC m=+33.103270730 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/44bc88d8-9e01-4521-a704-85d9ca095baa-metrics-client-ca") pod "kube-state-metrics-7bbc969446-28l2x" (UID: "44bc88d8-9e01-4521-a704-85d9ca095baa") : failed to sync configmap cache: timed out waiting for the condition
Mar 20 08:50:22.508142 master-0 kubenswrapper[27820]: E0320 08:50:22.508129 27820 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy-cluster-autoscaler-operator: failed to sync configmap cache: timed out waiting for the condition
Mar 20 08:50:22.508187 master-0 kubenswrapper[27820]: E0320 08:50:22.508172 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2d125bc5-08ce-434a-bde7-0ba8fc0169ea-auth-proxy-config podName:2d125bc5-08ce-434a-bde7-0ba8fc0169ea nodeName:}" failed. No retries permitted until 2026-03-20 08:50:23.008162549 +0000 UTC m=+33.103371693 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/2d125bc5-08ce-434a-bde7-0ba8fc0169ea-auth-proxy-config") pod "cluster-autoscaler-operator-866dc4744-626qm" (UID: "2d125bc5-08ce-434a-bde7-0ba8fc0169ea") : failed to sync configmap cache: timed out waiting for the condition
Mar 20 08:50:22.508294 master-0 kubenswrapper[27820]: E0320 08:50:22.508273 27820 secret.go:189] Couldn't get secret openshift-machine-config-operator/proxy-tls: failed to sync secret cache: timed out waiting for the condition
Mar 20 08:50:22.508347 master-0 kubenswrapper[27820]: E0320 08:50:22.508330 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0f725c4a-234c-44e9-95f2-73f31d2b0fd3-proxy-tls podName:0f725c4a-234c-44e9-95f2-73f31d2b0fd3 nodeName:}" failed. No retries permitted until 2026-03-20 08:50:23.008318673 +0000 UTC m=+33.103527817 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/0f725c4a-234c-44e9-95f2-73f31d2b0fd3-proxy-tls") pod "machine-config-daemon-lxv4d" (UID: "0f725c4a-234c-44e9-95f2-73f31d2b0fd3") : failed to sync secret cache: timed out waiting for the condition
Mar 20 08:50:22.508392 master-0 kubenswrapper[27820]: E0320 08:50:22.508382 27820 projected.go:288] Couldn't get configMap openshift-catalogd/catalogd-trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Mar 20 08:50:22.510270 master-0 kubenswrapper[27820]: E0320 08:50:22.510220 27820 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: failed to sync secret cache: timed out waiting for the condition
Mar 20 08:50:22.510270 master-0 kubenswrapper[27820]: E0320 08:50:22.510231 27820 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: failed to sync secret cache: timed out waiting for the condition
Mar 20 08:50:22.510347 master-0 kubenswrapper[27820]: E0320 08:50:22.510298 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a86af6a2-55a9-4c4e-8caf-1f51fedb23f5-control-plane-machine-set-operator-tls podName:a86af6a2-55a9-4c4e-8caf-1f51fedb23f5 nodeName:}" failed. No retries permitted until 2026-03-20 08:50:23.010287729 +0000 UTC m=+33.105496873 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/a86af6a2-55a9-4c4e-8caf-1f51fedb23f5-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-6f97756bc8-tkwh6" (UID: "a86af6a2-55a9-4c4e-8caf-1f51fedb23f5") : failed to sync secret cache: timed out waiting for the condition
Mar 20 08:50:22.510347 master-0 kubenswrapper[27820]: E0320 08:50:22.510320 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4c9b8a0c-1ead-49b5-8b05-68e6f25ddefc-cert podName:4c9b8a0c-1ead-49b5-8b05-68e6f25ddefc nodeName:}" failed. No retries permitted until 2026-03-20 08:50:23.010312099 +0000 UTC m=+33.105521243 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/4c9b8a0c-1ead-49b5-8b05-68e6f25ddefc-cert") pod "ingress-canary-vzrlt" (UID: "4c9b8a0c-1ead-49b5-8b05-68e6f25ddefc") : failed to sync secret cache: timed out waiting for the condition
Mar 20 08:50:22.510778 master-0 kubenswrapper[27820]: E0320 08:50:22.510758 27820 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-tls: failed to sync secret cache: timed out waiting for the condition
Mar 20 08:50:22.510815 master-0 kubenswrapper[27820]: E0320 08:50:22.510810 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/123f1ecb-cc03-462b-b76f-7251bf69d3d6-node-exporter-tls podName:123f1ecb-cc03-462b-b76f-7251bf69d3d6 nodeName:}" failed. No retries permitted until 2026-03-20 08:50:23.010799703 +0000 UTC m=+33.106008837 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-exporter-tls" (UniqueName: "kubernetes.io/secret/123f1ecb-cc03-462b-b76f-7251bf69d3d6-node-exporter-tls") pod "node-exporter-rzg98" (UID: "123f1ecb-cc03-462b-b76f-7251bf69d3d6") : failed to sync secret cache: timed out waiting for the condition
Mar 20 08:50:22.513348 master-0 kubenswrapper[27820]: E0320 08:50:22.513314 27820 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Mar 20 08:50:22.513440 master-0 kubenswrapper[27820]: E0320 08:50:22.513414 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/14fe2256-ce3d-4c70-9e6b-0e3c7d4d96dc-auth-proxy-config podName:14fe2256-ce3d-4c70-9e6b-0e3c7d4d96dc nodeName:}" failed. No retries permitted until 2026-03-20 08:50:23.013393885 +0000 UTC m=+33.108603199 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/14fe2256-ce3d-4c70-9e6b-0e3c7d4d96dc-auth-proxy-config") pod "machine-approver-5c6485487f-897zl" (UID: "14fe2256-ce3d-4c70-9e6b-0e3c7d4d96dc") : failed to sync configmap cache: timed out waiting for the condition
Mar 20 08:50:22.516637 master-0 kubenswrapper[27820]: E0320 08:50:22.516601 27820 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: failed to sync secret cache: timed out waiting for the condition
Mar 20 08:50:22.516718 master-0 kubenswrapper[27820]: E0320 08:50:22.516695 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b6610936-e14a-4532-955c-ea1ee4222259-proxy-tls podName:b6610936-e14a-4532-955c-ea1ee4222259 nodeName:}" failed. No retries permitted until 2026-03-20 08:50:23.016676126 +0000 UTC m=+33.111885480 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/b6610936-e14a-4532-955c-ea1ee4222259-proxy-tls") pod "machine-config-operator-84d549f6d5-pkjcg" (UID: "b6610936-e14a-4532-955c-ea1ee4222259") : failed to sync secret cache: timed out waiting for the condition
Mar 20 08:50:22.520382 master-0 kubenswrapper[27820]: E0320 08:50:22.520363 27820 secret.go:189] Couldn't get secret openshift-catalogd/catalogserver-cert: failed to sync secret cache: timed out waiting for the condition
Mar 20 08:50:22.520433 master-0 kubenswrapper[27820]: E0320 08:50:22.520411 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/08d9196b-b68f-421b-8754-bfbaa4020a97-catalogserver-certs podName:08d9196b-b68f-421b-8754-bfbaa4020a97 nodeName:}" failed. No retries permitted until 2026-03-20 08:50:23.02040132 +0000 UTC m=+33.115610464 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "catalogserver-certs" (UniqueName: "kubernetes.io/secret/08d9196b-b68f-421b-8754-bfbaa4020a97-catalogserver-certs") pod "catalogd-controller-manager-6864dc98f7-tf2gj" (UID: "08d9196b-b68f-421b-8754-bfbaa4020a97") : failed to sync secret cache: timed out waiting for the condition
Mar 20 08:50:22.520524 master-0 kubenswrapper[27820]: E0320 08:50:22.520475 27820 configmap.go:193] Couldn't get configMap openshift-insights/service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Mar 20 08:50:22.520607 master-0 kubenswrapper[27820]: E0320 08:50:22.520584 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6d62448d-55f1-4bdc-85aa-09e7bdf766cc-service-ca-bundle podName:6d62448d-55f1-4bdc-85aa-09e7bdf766cc nodeName:}" failed. No retries permitted until 2026-03-20 08:50:23.020559485 +0000 UTC m=+33.115768839 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/6d62448d-55f1-4bdc-85aa-09e7bdf766cc-service-ca-bundle") pod "insights-operator-68bf6ff9d6-c7zf4" (UID: "6d62448d-55f1-4bdc-85aa-09e7bdf766cc") : failed to sync configmap cache: timed out waiting for the condition
Mar 20 08:50:22.522684 master-0 kubenswrapper[27820]: E0320 08:50:22.522650 27820 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition
Mar 20 08:50:22.522746 master-0 kubenswrapper[27820]: E0320 08:50:22.522710 27820 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: failed to sync secret cache: timed out waiting for the condition
Mar 20 08:50:22.522777 master-0 kubenswrapper[27820]: E0320 08:50:22.522759 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0ad95adc-2e0f-4e95-94e7-66e6d240a930-prometheus-operator-kube-rbac-proxy-config podName:0ad95adc-2e0f-4e95-94e7-66e6d240a930 nodeName:}" failed. No retries permitted until 2026-03-20 08:50:23.022740676 +0000 UTC m=+33.117949820 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "prometheus-operator-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/0ad95adc-2e0f-4e95-94e7-66e6d240a930-prometheus-operator-kube-rbac-proxy-config") pod "prometheus-operator-6c8df6d4b-2lwqr" (UID: "0ad95adc-2e0f-4e95-94e7-66e6d240a930") : failed to sync secret cache: timed out waiting for the condition
Mar 20 08:50:22.522777 master-0 kubenswrapper[27820]: E0320 08:50:22.522686 27820 configmap.go:193] Couldn't get configMap openshift-cloud-credential-operator/cco-trusted-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 20 08:50:22.522854 master-0 kubenswrapper[27820]: E0320 08:50:22.522809 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80ddf0a4-e853-4de0-b540-81144dfdd31d-machine-api-operator-tls podName:80ddf0a4-e853-4de0-b540-81144dfdd31d nodeName:}" failed. No retries permitted until 2026-03-20 08:50:23.022788987 +0000 UTC m=+33.117998331 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/80ddf0a4-e853-4de0-b540-81144dfdd31d-machine-api-operator-tls") pod "machine-api-operator-6fbb6cf6f9-lr7tb" (UID: "80ddf0a4-e853-4de0-b540-81144dfdd31d") : failed to sync secret cache: timed out waiting for the condition
Mar 20 08:50:22.522854 master-0 kubenswrapper[27820]: E0320 08:50:22.522840 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/581a8be2-d16c-4fd8-b051-214bd60a2a91-cco-trusted-ca podName:581a8be2-d16c-4fd8-b051-214bd60a2a91 nodeName:}" failed. No retries permitted until 2026-03-20 08:50:23.022827858 +0000 UTC m=+33.118037222 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cco-trusted-ca" (UniqueName: "kubernetes.io/configmap/581a8be2-d16c-4fd8-b051-214bd60a2a91-cco-trusted-ca") pod "cloud-credential-operator-744f9dbf77-6mrwl" (UID: "581a8be2-d16c-4fd8-b051-214bd60a2a91") : failed to sync configmap cache: timed out waiting for the condition
Mar 20 08:50:22.525336 master-0 kubenswrapper[27820]: I0320 08:50:22.525313 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls"
Mar 20 08:50:22.527855 master-0 kubenswrapper[27820]: E0320 08:50:22.527817 27820 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: failed to sync secret cache: timed out waiting for the condition
Mar 20 08:50:22.527855 master-0 kubenswrapper[27820]: E0320 08:50:22.527839 27820 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition
Mar 20 08:50:22.527931 master-0 kubenswrapper[27820]: E0320 08:50:22.527875 27820 secret.go:189] Couldn't get secret openshift-machine-config-operator/machine-config-server-tls: failed to sync secret cache: timed out waiting for the condition
Mar 20 08:50:22.527931 master-0 kubenswrapper[27820]: E0320 08:50:22.527907 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/581a8be2-d16c-4fd8-b051-214bd60a2a91-cloud-credential-operator-serving-cert podName:581a8be2-d16c-4fd8-b051-214bd60a2a91 nodeName:}" failed. No retries permitted until 2026-03-20 08:50:23.027889978 +0000 UTC m=+33.123099132 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/581a8be2-d16c-4fd8-b051-214bd60a2a91-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-744f9dbf77-6mrwl" (UID: "581a8be2-d16c-4fd8-b051-214bd60a2a91") : failed to sync secret cache: timed out waiting for the condition Mar 20 08:50:22.527931 master-0 kubenswrapper[27820]: E0320 08:50:22.527926 27820 secret.go:189] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: failed to sync secret cache: timed out waiting for the condition Mar 20 08:50:22.528040 master-0 kubenswrapper[27820]: E0320 08:50:22.527932 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/14fe2256-ce3d-4c70-9e6b-0e3c7d4d96dc-machine-approver-tls podName:14fe2256-ce3d-4c70-9e6b-0e3c7d4d96dc nodeName:}" failed. No retries permitted until 2026-03-20 08:50:23.027922839 +0000 UTC m=+33.123131993 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/14fe2256-ce3d-4c70-9e6b-0e3c7d4d96dc-machine-approver-tls") pod "machine-approver-5c6485487f-897zl" (UID: "14fe2256-ce3d-4c70-9e6b-0e3c7d4d96dc") : failed to sync secret cache: timed out waiting for the condition Mar 20 08:50:22.528040 master-0 kubenswrapper[27820]: E0320 08:50:22.527959 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a79bf8fb-19fb-4881-b9e3-b5b21fde0e1d-certs podName:a79bf8fb-19fb-4881-b9e3-b5b21fde0e1d nodeName:}" failed. No retries permitted until 2026-03-20 08:50:23.02794843 +0000 UTC m=+33.123157584 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "certs" (UniqueName: "kubernetes.io/secret/a79bf8fb-19fb-4881-b9e3-b5b21fde0e1d-certs") pod "machine-config-server-6bd59" (UID: "a79bf8fb-19fb-4881-b9e3-b5b21fde0e1d") : failed to sync secret cache: timed out waiting for the condition Mar 20 08:50:22.528040 master-0 kubenswrapper[27820]: I0320 08:50:22.527953 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/14ef046f-b284-457f-ad7a-b7958cb82dd5-tls-certificates\") pod \"prometheus-operator-admission-webhook-69c6b55594-kh8bg\" (UID: \"14ef046f-b284-457f-ad7a-b7958cb82dd5\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-kh8bg" Mar 20 08:50:22.528040 master-0 kubenswrapper[27820]: E0320 08:50:22.527985 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f6a6e991-c861-48f5-bfde-78762a037343-proxy-tls podName:f6a6e991-c861-48f5-bfde-78762a037343 nodeName:}" failed. No retries permitted until 2026-03-20 08:50:23.02797009 +0000 UTC m=+33.123179234 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/f6a6e991-c861-48f5-bfde-78762a037343-proxy-tls") pod "machine-config-controller-b4f87c5b9-9fl6v" (UID: "f6a6e991-c861-48f5-bfde-78762a037343") : failed to sync secret cache: timed out waiting for the condition Mar 20 08:50:22.528040 master-0 kubenswrapper[27820]: E0320 08:50:22.528001 27820 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: failed to sync configmap cache: timed out waiting for the condition Mar 20 08:50:22.528040 master-0 kubenswrapper[27820]: E0320 08:50:22.528040 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/80ddf0a4-e853-4de0-b540-81144dfdd31d-images podName:80ddf0a4-e853-4de0-b540-81144dfdd31d nodeName:}" failed. 
No retries permitted until 2026-03-20 08:50:23.028028212 +0000 UTC m=+33.123237586 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/80ddf0a4-e853-4de0-b540-81144dfdd31d-images") pod "machine-api-operator-6fbb6cf6f9-lr7tb" (UID: "80ddf0a4-e853-4de0-b540-81144dfdd31d") : failed to sync configmap cache: timed out waiting for the condition Mar 20 08:50:22.529185 master-0 kubenswrapper[27820]: E0320 08:50:22.529157 27820 secret.go:189] Couldn't get secret openshift-cluster-storage-operator/cluster-storage-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Mar 20 08:50:22.529252 master-0 kubenswrapper[27820]: E0320 08:50:22.529231 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e9c0293a-5340-4ebe-bc8f-43e78ba9f280-cluster-storage-operator-serving-cert podName:e9c0293a-5340-4ebe-bc8f-43e78ba9f280 nodeName:}" failed. No retries permitted until 2026-03-20 08:50:23.029215495 +0000 UTC m=+33.124424649 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-storage-operator-serving-cert" (UniqueName: "kubernetes.io/secret/e9c0293a-5340-4ebe-bc8f-43e78ba9f280-cluster-storage-operator-serving-cert") pod "cluster-storage-operator-7d87854d6-848gc" (UID: "e9c0293a-5340-4ebe-bc8f-43e78ba9f280") : failed to sync secret cache: timed out waiting for the condition Mar 20 08:50:22.530335 master-0 kubenswrapper[27820]: E0320 08:50:22.530307 27820 configmap.go:193] Couldn't get configMap openshift-cloud-controller-manager-operator/cloud-controller-manager-images: failed to sync configmap cache: timed out waiting for the condition Mar 20 08:50:22.530383 master-0 kubenswrapper[27820]: E0320 08:50:22.530359 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6163bd4b-dc83-4e83-8590-5ac4753bda1c-images podName:6163bd4b-dc83-4e83-8590-5ac4753bda1c nodeName:}" failed. 
No retries permitted until 2026-03-20 08:50:23.030349607 +0000 UTC m=+33.125558751 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/6163bd4b-dc83-4e83-8590-5ac4753bda1c-images") pod "cluster-cloud-controller-manager-operator-7dff898856-vk98n" (UID: "6163bd4b-dc83-4e83-8590-5ac4753bda1c") : failed to sync configmap cache: timed out waiting for the condition Mar 20 08:50:22.531420 master-0 kubenswrapper[27820]: E0320 08:50:22.531390 27820 configmap.go:193] Couldn't get configMap openshift-monitoring/kube-state-metrics-custom-resource-state-configmap: failed to sync configmap cache: timed out waiting for the condition Mar 20 08:50:22.531477 master-0 kubenswrapper[27820]: E0320 08:50:22.531461 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/44bc88d8-9e01-4521-a704-85d9ca095baa-kube-state-metrics-custom-resource-state-configmap podName:44bc88d8-9e01-4521-a704-85d9ca095baa nodeName:}" failed. No retries permitted until 2026-03-20 08:50:23.031448148 +0000 UTC m=+33.126657302 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-state-metrics-custom-resource-state-configmap" (UniqueName: "kubernetes.io/configmap/44bc88d8-9e01-4521-a704-85d9ca095baa-kube-state-metrics-custom-resource-state-configmap") pod "kube-state-metrics-7bbc969446-28l2x" (UID: "44bc88d8-9e01-4521-a704-85d9ca095baa") : failed to sync configmap cache: timed out waiting for the condition Mar 20 08:50:22.532571 master-0 kubenswrapper[27820]: E0320 08:50:22.532552 27820 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: failed to sync configmap cache: timed out waiting for the condition Mar 20 08:50:22.532638 master-0 kubenswrapper[27820]: E0320 08:50:22.532610 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b6610936-e14a-4532-955c-ea1ee4222259-images podName:b6610936-e14a-4532-955c-ea1ee4222259 nodeName:}" failed. No retries permitted until 2026-03-20 08:50:23.032597239 +0000 UTC m=+33.127806383 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/b6610936-e14a-4532-955c-ea1ee4222259-images") pod "machine-config-operator-84d549f6d5-pkjcg" (UID: "b6610936-e14a-4532-955c-ea1ee4222259") : failed to sync configmap cache: timed out waiting for the condition Mar 20 08:50:22.533640 master-0 kubenswrapper[27820]: E0320 08:50:22.533616 27820 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Mar 20 08:50:22.533693 master-0 kubenswrapper[27820]: E0320 08:50:22.533680 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b6610936-e14a-4532-955c-ea1ee4222259-auth-proxy-config podName:b6610936-e14a-4532-955c-ea1ee4222259 nodeName:}" failed. No retries permitted until 2026-03-20 08:50:23.033667639 +0000 UTC m=+33.128876783 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/b6610936-e14a-4532-955c-ea1ee4222259-auth-proxy-config") pod "machine-config-operator-84d549f6d5-pkjcg" (UID: "b6610936-e14a-4532-955c-ea1ee4222259") : failed to sync configmap cache: timed out waiting for the condition Mar 20 08:50:22.533693 master-0 kubenswrapper[27820]: E0320 08:50:22.533619 27820 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Mar 20 08:50:22.533757 master-0 kubenswrapper[27820]: E0320 08:50:22.533725 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0f725c4a-234c-44e9-95f2-73f31d2b0fd3-mcd-auth-proxy-config podName:0f725c4a-234c-44e9-95f2-73f31d2b0fd3 nodeName:}" failed. No retries permitted until 2026-03-20 08:50:23.033715171 +0000 UTC m=+33.128924565 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "mcd-auth-proxy-config" (UniqueName: "kubernetes.io/configmap/0f725c4a-234c-44e9-95f2-73f31d2b0fd3-mcd-auth-proxy-config") pod "machine-config-daemon-lxv4d" (UID: "0f725c4a-234c-44e9-95f2-73f31d2b0fd3") : failed to sync configmap cache: timed out waiting for the condition Mar 20 08:50:22.534920 master-0 kubenswrapper[27820]: E0320 08:50:22.534895 27820 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Mar 20 08:50:22.534983 master-0 kubenswrapper[27820]: E0320 08:50:22.534961 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f6a6e991-c861-48f5-bfde-78762a037343-mcc-auth-proxy-config podName:f6a6e991-c861-48f5-bfde-78762a037343 nodeName:}" failed. No retries permitted until 2026-03-20 08:50:23.034947595 +0000 UTC m=+33.130156749 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "mcc-auth-proxy-config" (UniqueName: "kubernetes.io/configmap/f6a6e991-c861-48f5-bfde-78762a037343-mcc-auth-proxy-config") pod "machine-config-controller-b4f87c5b9-9fl6v" (UID: "f6a6e991-c861-48f5-bfde-78762a037343") : failed to sync configmap cache: timed out waiting for the condition Mar 20 08:50:22.536097 master-0 kubenswrapper[27820]: E0320 08:50:22.536064 27820 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition Mar 20 08:50:22.536145 master-0 kubenswrapper[27820]: E0320 08:50:22.536133 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/123f1ecb-cc03-462b-b76f-7251bf69d3d6-node-exporter-kube-rbac-proxy-config podName:123f1ecb-cc03-462b-b76f-7251bf69d3d6 nodeName:}" failed. No retries permitted until 2026-03-20 08:50:23.036121017 +0000 UTC m=+33.131330171 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-exporter-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/123f1ecb-cc03-462b-b76f-7251bf69d3d6-node-exporter-kube-rbac-proxy-config") pod "node-exporter-rzg98" (UID: "123f1ecb-cc03-462b-b76f-7251bf69d3d6") : failed to sync secret cache: timed out waiting for the condition Mar 20 08:50:22.539615 master-0 kubenswrapper[27820]: E0320 08:50:22.539560 27820 configmap.go:193] Couldn't get configMap openshift-insights/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Mar 20 08:50:22.539679 master-0 kubenswrapper[27820]: E0320 08:50:22.539665 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6d62448d-55f1-4bdc-85aa-09e7bdf766cc-trusted-ca-bundle podName:6d62448d-55f1-4bdc-85aa-09e7bdf766cc nodeName:}" failed. No retries permitted until 2026-03-20 08:50:23.039649426 +0000 UTC m=+33.134858570 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/6d62448d-55f1-4bdc-85aa-09e7bdf766cc-trusted-ca-bundle") pod "insights-operator-68bf6ff9d6-c7zf4" (UID: "6d62448d-55f1-4bdc-85aa-09e7bdf766cc") : failed to sync configmap cache: timed out waiting for the condition Mar 20 08:50:22.540807 master-0 kubenswrapper[27820]: E0320 08:50:22.540771 27820 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition Mar 20 08:50:22.540898 master-0 kubenswrapper[27820]: E0320 08:50:22.540869 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0ad95adc-2e0f-4e95-94e7-66e6d240a930-metrics-client-ca podName:0ad95adc-2e0f-4e95-94e7-66e6d240a930 nodeName:}" failed. No retries permitted until 2026-03-20 08:50:23.040847749 +0000 UTC m=+33.136057083 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/0ad95adc-2e0f-4e95-94e7-66e6d240a930-metrics-client-ca") pod "prometheus-operator-6c8df6d4b-2lwqr" (UID: "0ad95adc-2e0f-4e95-94e7-66e6d240a930") : failed to sync configmap cache: timed out waiting for the condition Mar 20 08:50:22.545790 master-0 kubenswrapper[27820]: I0320 08:50:22.545741 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-svsqb" Mar 20 08:50:22.547776 master-0 kubenswrapper[27820]: E0320 08:50:22.547741 27820 secret.go:189] Couldn't get secret openshift-insights/openshift-insights-serving-cert: failed to sync secret cache: timed out waiting for the condition Mar 20 08:50:22.547850 master-0 kubenswrapper[27820]: E0320 08:50:22.547829 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d62448d-55f1-4bdc-85aa-09e7bdf766cc-serving-cert podName:6d62448d-55f1-4bdc-85aa-09e7bdf766cc nodeName:}" failed. 
No retries permitted until 2026-03-20 08:50:23.047807803 +0000 UTC m=+33.143016947 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6d62448d-55f1-4bdc-85aa-09e7bdf766cc-serving-cert") pod "insights-operator-68bf6ff9d6-c7zf4" (UID: "6d62448d-55f1-4bdc-85aa-09e7bdf766cc") : failed to sync secret cache: timed out waiting for the condition Mar 20 08:50:22.548947 master-0 kubenswrapper[27820]: E0320 08:50:22.548926 27820 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Mar 20 08:50:22.549009 master-0 kubenswrapper[27820]: E0320 08:50:22.548946 27820 secret.go:189] Couldn't get secret openshift-cloud-controller-manager-operator/cloud-controller-manager-operator-tls: failed to sync secret cache: timed out waiting for the condition Mar 20 08:50:22.549009 master-0 kubenswrapper[27820]: E0320 08:50:22.548974 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/80ddf0a4-e853-4de0-b540-81144dfdd31d-config podName:80ddf0a4-e853-4de0-b540-81144dfdd31d nodeName:}" failed. No retries permitted until 2026-03-20 08:50:23.048964835 +0000 UTC m=+33.144173979 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/80ddf0a4-e853-4de0-b540-81144dfdd31d-config") pod "machine-api-operator-6fbb6cf6f9-lr7tb" (UID: "80ddf0a4-e853-4de0-b540-81144dfdd31d") : failed to sync configmap cache: timed out waiting for the condition Mar 20 08:50:22.549066 master-0 kubenswrapper[27820]: E0320 08:50:22.549046 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6163bd4b-dc83-4e83-8590-5ac4753bda1c-cloud-controller-manager-operator-tls podName:6163bd4b-dc83-4e83-8590-5ac4753bda1c nodeName:}" failed. 
No retries permitted until 2026-03-20 08:50:23.049026787 +0000 UTC m=+33.144235931 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cloud-controller-manager-operator-tls" (UniqueName: "kubernetes.io/secret/6163bd4b-dc83-4e83-8590-5ac4753bda1c-cloud-controller-manager-operator-tls") pod "cluster-cloud-controller-manager-operator-7dff898856-vk98n" (UID: "6163bd4b-dc83-4e83-8590-5ac4753bda1c") : failed to sync secret cache: timed out waiting for the condition Mar 20 08:50:22.572833 master-0 kubenswrapper[27820]: I0320 08:50:22.572776 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle" Mar 20 08:50:22.581197 master-0 kubenswrapper[27820]: I0320 08:50:22.581164 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_b45ea2ef1cf2bc9d1d994d6538ae0a64/kube-apiserver-check-endpoints/0.log" Mar 20 08:50:22.584827 master-0 kubenswrapper[27820]: I0320 08:50:22.584791 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-5dbbb8b86f-vhrdf_74bebf0b-6727-4959-8239-a9389e630524/multus-admission-controller/0.log" Mar 20 08:50:22.585419 master-0 kubenswrapper[27820]: I0320 08:50:22.585386 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert" Mar 20 08:50:22.585419 master-0 kubenswrapper[27820]: I0320 08:50:22.585410 27820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-vhrdf" Mar 20 08:50:22.585774 master-0 kubenswrapper[27820]: I0320 08:50:22.585600 27820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Mar 20 08:50:22.586046 master-0 kubenswrapper[27820]: I0320 08:50:22.586025 27820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Mar 20 08:50:22.587112 master-0 kubenswrapper[27820]: I0320 08:50:22.587085 27820 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="c17bc281-675f-4a69-ba8a-a2476e95c8c8" Mar 20 08:50:22.587112 master-0 kubenswrapper[27820]: I0320 08:50:22.587105 27820 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="c17bc281-675f-4a69-ba8a-a2476e95c8c8" Mar 20 08:50:22.598404 master-0 kubenswrapper[27820]: I0320 08:50:22.598333 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/74bebf0b-6727-4959-8239-a9389e630524-webhook-certs\") pod \"74bebf0b-6727-4959-8239-a9389e630524\" (UID: \"74bebf0b-6727-4959-8239-a9389e630524\") " Mar 20 08:50:22.610882 master-0 kubenswrapper[27820]: I0320 08:50:22.605511 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt" Mar 20 08:50:22.610882 master-0 kubenswrapper[27820]: I0320 08:50:22.605512 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74bebf0b-6727-4959-8239-a9389e630524-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "74bebf0b-6727-4959-8239-a9389e630524" (UID: "74bebf0b-6727-4959-8239-a9389e630524"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:50:22.610882 master-0 kubenswrapper[27820]: E0320 08:50:22.609499 27820 projected.go:194] Error preparing data for projected volume ca-certs for pod openshift-catalogd/catalogd-controller-manager-6864dc98f7-tf2gj: failed to sync configmap cache: timed out waiting for the condition Mar 20 08:50:22.610882 master-0 kubenswrapper[27820]: E0320 08:50:22.609719 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/08d9196b-b68f-421b-8754-bfbaa4020a97-ca-certs podName:08d9196b-b68f-421b-8754-bfbaa4020a97 nodeName:}" failed. No retries permitted until 2026-03-20 08:50:23.109671354 +0000 UTC m=+33.204880498 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "ca-certs" (UniqueName: "kubernetes.io/projected/08d9196b-b68f-421b-8754-bfbaa4020a97-ca-certs") pod "catalogd-controller-manager-6864dc98f7-tf2gj" (UID: "08d9196b-b68f-421b-8754-bfbaa4020a97") : failed to sync configmap cache: timed out waiting for the condition Mar 20 08:50:22.624718 master-0 kubenswrapper[27820]: I0320 08:50:22.624648 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt" Mar 20 08:50:22.645994 master-0 kubenswrapper[27820]: I0320 08:50:22.645938 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt" Mar 20 08:50:22.650999 master-0 kubenswrapper[27820]: E0320 08:50:22.650947 27820 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: failed to sync secret cache: timed out waiting for the condition Mar 20 08:50:22.651172 master-0 kubenswrapper[27820]: E0320 08:50:22.651048 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5b579a3-74b5-4cd2-ae99-5c1d1480c2f0-openshift-state-metrics-tls podName:d5b579a3-74b5-4cd2-ae99-5c1d1480c2f0 nodeName:}" failed. 
No retries permitted until 2026-03-20 08:50:23.151026544 +0000 UTC m=+33.246235708 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/d5b579a3-74b5-4cd2-ae99-5c1d1480c2f0-openshift-state-metrics-tls") pod "openshift-state-metrics-5dc6c74576-qclrg" (UID: "d5b579a3-74b5-4cd2-ae99-5c1d1480c2f0") : failed to sync secret cache: timed out waiting for the condition Mar 20 08:50:22.652292 master-0 kubenswrapper[27820]: E0320 08:50:22.652153 27820 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-client-ca: failed to sync configmap cache: timed out waiting for the condition Mar 20 08:50:22.652292 master-0 kubenswrapper[27820]: E0320 08:50:22.652162 27820 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: failed to sync configmap cache: timed out waiting for the condition Mar 20 08:50:22.652292 master-0 kubenswrapper[27820]: E0320 08:50:22.652208 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d5b579a3-74b5-4cd2-ae99-5c1d1480c2f0-metrics-client-ca podName:d5b579a3-74b5-4cd2-ae99-5c1d1480c2f0 nodeName:}" failed. No retries permitted until 2026-03-20 08:50:23.152195537 +0000 UTC m=+33.247404691 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-client-ca" (UniqueName: "kubernetes.io/configmap/d5b579a3-74b5-4cd2-ae99-5c1d1480c2f0-metrics-client-ca") pod "openshift-state-metrics-5dc6c74576-qclrg" (UID: "d5b579a3-74b5-4cd2-ae99-5c1d1480c2f0") : failed to sync configmap cache: timed out waiting for the condition Mar 20 08:50:22.652292 master-0 kubenswrapper[27820]: E0320 08:50:22.652257 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41ac891d-b41d-43c4-be46-35f39671477a-proxy-ca-bundles podName:41ac891d-b41d-43c4-be46-35f39671477a nodeName:}" failed. 
No retries permitted until 2026-03-20 08:50:23.152239459 +0000 UTC m=+33.247448603 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/41ac891d-b41d-43c4-be46-35f39671477a-proxy-ca-bundles") pod "controller-manager-bc85986b9-8p79x" (UID: "41ac891d-b41d-43c4-be46-35f39671477a") : failed to sync configmap cache: timed out waiting for the condition Mar 20 08:50:22.652818 master-0 kubenswrapper[27820]: E0320 08:50:22.652332 27820 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition Mar 20 08:50:22.652818 master-0 kubenswrapper[27820]: E0320 08:50:22.652466 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/240ba61a-e439-4f94-b9b3-7903b9b1bc05-client-ca podName:240ba61a-e439-4f94-b9b3-7903b9b1bc05 nodeName:}" failed. No retries permitted until 2026-03-20 08:50:23.152424234 +0000 UTC m=+33.247633378 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/240ba61a-e439-4f94-b9b3-7903b9b1bc05-client-ca") pod "route-controller-manager-7d86cb9b59-smbxv" (UID: "240ba61a-e439-4f94-b9b3-7903b9b1bc05") : failed to sync configmap cache: timed out waiting for the condition Mar 20 08:50:22.652818 master-0 kubenswrapper[27820]: E0320 08:50:22.652497 27820 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: failed to sync secret cache: timed out waiting for the condition Mar 20 08:50:22.652818 master-0 kubenswrapper[27820]: E0320 08:50:22.652549 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a88b1c81-02b5-4c85-9660-5f84c900a946-webhook-certs podName:a88b1c81-02b5-4c85-9660-5f84c900a946 nodeName:}" failed. No retries permitted until 2026-03-20 08:50:23.152542107 +0000 UTC m=+33.247751241 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a88b1c81-02b5-4c85-9660-5f84c900a946-webhook-certs") pod "multus-admission-controller-58c9f8fc64-kr9hd" (UID: "a88b1c81-02b5-4c85-9660-5f84c900a946") : failed to sync secret cache: timed out waiting for the condition
Mar 20 08:50:22.652818 master-0 kubenswrapper[27820]: E0320 08:50:22.652565 27820 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-kube-rbac-proxy-config: failed to sync secret cache: timed out waiting for the condition
Mar 20 08:50:22.652818 master-0 kubenswrapper[27820]: E0320 08:50:22.652591 27820 configmap.go:193] Couldn't get configMap openshift-monitoring/metrics-server-audit-profiles: failed to sync configmap cache: timed out waiting for the condition
Mar 20 08:50:22.652818 master-0 kubenswrapper[27820]: E0320 08:50:22.652586 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d5b579a3-74b5-4cd2-ae99-5c1d1480c2f0-openshift-state-metrics-kube-rbac-proxy-config podName:d5b579a3-74b5-4cd2-ae99-5c1d1480c2f0 nodeName:}" failed. No retries permitted until 2026-03-20 08:50:23.152580758 +0000 UTC m=+33.247789902 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "openshift-state-metrics-kube-rbac-proxy-config" (UniqueName: "kubernetes.io/secret/d5b579a3-74b5-4cd2-ae99-5c1d1480c2f0-openshift-state-metrics-kube-rbac-proxy-config") pod "openshift-state-metrics-5dc6c74576-qclrg" (UID: "d5b579a3-74b5-4cd2-ae99-5c1d1480c2f0") : failed to sync secret cache: timed out waiting for the condition
Mar 20 08:50:22.652818 master-0 kubenswrapper[27820]: E0320 08:50:22.652649 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/04466971-127b-403e-af45-dad97b6e0c87-metrics-server-audit-profiles podName:04466971-127b-403e-af45-dad97b6e0c87 nodeName:}" failed. No retries permitted until 2026-03-20 08:50:23.15263548 +0000 UTC m=+33.247844634 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-server-audit-profiles" (UniqueName: "kubernetes.io/configmap/04466971-127b-403e-af45-dad97b6e0c87-metrics-server-audit-profiles") pod "metrics-server-55d84d7794-56n4c" (UID: "04466971-127b-403e-af45-dad97b6e0c87") : failed to sync configmap cache: timed out waiting for the condition
Mar 20 08:50:22.652818 master-0 kubenswrapper[27820]: E0320 08:50:22.652678 27820 secret.go:189] Couldn't get secret openshift-monitoring/metrics-client-certs: failed to sync secret cache: timed out waiting for the condition
Mar 20 08:50:22.652818 master-0 kubenswrapper[27820]: E0320 08:50:22.652716 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04466971-127b-403e-af45-dad97b6e0c87-secret-metrics-client-certs podName:04466971-127b-403e-af45-dad97b6e0c87 nodeName:}" failed. No retries permitted until 2026-03-20 08:50:23.152701891 +0000 UTC m=+33.247911055 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-metrics-client-certs" (UniqueName: "kubernetes.io/secret/04466971-127b-403e-af45-dad97b6e0c87-secret-metrics-client-certs") pod "metrics-server-55d84d7794-56n4c" (UID: "04466971-127b-403e-af45-dad97b6e0c87") : failed to sync secret cache: timed out waiting for the condition
Mar 20 08:50:22.653901 master-0 kubenswrapper[27820]: E0320 08:50:22.653657 27820 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-tls: failed to sync secret cache: timed out waiting for the condition
Mar 20 08:50:22.653901 master-0 kubenswrapper[27820]: E0320 08:50:22.653687 27820 secret.go:189] Couldn't get secret openshift-monitoring/metrics-server-7i2lh8fo12r60: failed to sync secret cache: timed out waiting for the condition
Mar 20 08:50:22.653901 master-0 kubenswrapper[27820]: E0320 08:50:22.653731 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04466971-127b-403e-af45-dad97b6e0c87-secret-metrics-server-tls podName:04466971-127b-403e-af45-dad97b6e0c87 nodeName:}" failed. No retries permitted until 2026-03-20 08:50:23.153716809 +0000 UTC m=+33.248926173 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-metrics-server-tls" (UniqueName: "kubernetes.io/secret/04466971-127b-403e-af45-dad97b6e0c87-secret-metrics-server-tls") pod "metrics-server-55d84d7794-56n4c" (UID: "04466971-127b-403e-af45-dad97b6e0c87") : failed to sync secret cache: timed out waiting for the condition
Mar 20 08:50:22.653901 master-0 kubenswrapper[27820]: E0320 08:50:22.653836 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/04466971-127b-403e-af45-dad97b6e0c87-client-ca-bundle podName:04466971-127b-403e-af45-dad97b6e0c87 nodeName:}" failed. No retries permitted until 2026-03-20 08:50:23.15374716 +0000 UTC m=+33.248956294 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca-bundle" (UniqueName: "kubernetes.io/secret/04466971-127b-403e-af45-dad97b6e0c87-client-ca-bundle") pod "metrics-server-55d84d7794-56n4c" (UID: "04466971-127b-403e-af45-dad97b6e0c87") : failed to sync secret cache: timed out waiting for the condition
Mar 20 08:50:22.654844 master-0 kubenswrapper[27820]: E0320 08:50:22.654801 27820 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition
Mar 20 08:50:22.654960 master-0 kubenswrapper[27820]: E0320 08:50:22.654863 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/240ba61a-e439-4f94-b9b3-7903b9b1bc05-config podName:240ba61a-e439-4f94-b9b3-7903b9b1bc05 nodeName:}" failed. No retries permitted until 2026-03-20 08:50:23.154847541 +0000 UTC m=+33.250056685 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/240ba61a-e439-4f94-b9b3-7903b9b1bc05-config") pod "route-controller-manager-7d86cb9b59-smbxv" (UID: "240ba61a-e439-4f94-b9b3-7903b9b1bc05") : failed to sync configmap cache: timed out waiting for the condition
Mar 20 08:50:22.655972 master-0 kubenswrapper[27820]: E0320 08:50:22.655841 27820 configmap.go:193] Couldn't get configMap openshift-monitoring/kubelet-serving-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Mar 20 08:50:22.655972 master-0 kubenswrapper[27820]: E0320 08:50:22.655890 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/04466971-127b-403e-af45-dad97b6e0c87-configmap-kubelet-serving-ca-bundle podName:04466971-127b-403e-af45-dad97b6e0c87 nodeName:}" failed. No retries permitted until 2026-03-20 08:50:23.15587778 +0000 UTC m=+33.251087114 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "configmap-kubelet-serving-ca-bundle" (UniqueName: "kubernetes.io/configmap/04466971-127b-403e-af45-dad97b6e0c87-configmap-kubelet-serving-ca-bundle") pod "metrics-server-55d84d7794-56n4c" (UID: "04466971-127b-403e-af45-dad97b6e0c87") : failed to sync configmap cache: timed out waiting for the condition
Mar 20 08:50:22.655972 master-0 kubenswrapper[27820]: E0320 08:50:22.655902 27820 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition
Mar 20 08:50:22.655972 master-0 kubenswrapper[27820]: E0320 08:50:22.655903 27820 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition
Mar 20 08:50:22.655972 master-0 kubenswrapper[27820]: E0320 08:50:22.655921 27820 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 20 08:50:22.655972 master-0 kubenswrapper[27820]: E0320 08:50:22.655925 27820 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition
Mar 20 08:50:22.656299 master-0 kubenswrapper[27820]: E0320 08:50:22.655945 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41ac891d-b41d-43c4-be46-35f39671477a-serving-cert podName:41ac891d-b41d-43c4-be46-35f39671477a nodeName:}" failed. No retries permitted until 2026-03-20 08:50:23.155933311 +0000 UTC m=+33.251142455 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/41ac891d-b41d-43c4-be46-35f39671477a-serving-cert") pod "controller-manager-bc85986b9-8p79x" (UID: "41ac891d-b41d-43c4-be46-35f39671477a") : failed to sync secret cache: timed out waiting for the condition
Mar 20 08:50:22.656299 master-0 kubenswrapper[27820]: E0320 08:50:22.656016 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/240ba61a-e439-4f94-b9b3-7903b9b1bc05-serving-cert podName:240ba61a-e439-4f94-b9b3-7903b9b1bc05 nodeName:}" failed. No retries permitted until 2026-03-20 08:50:23.156005973 +0000 UTC m=+33.251215117 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/240ba61a-e439-4f94-b9b3-7903b9b1bc05-serving-cert") pod "route-controller-manager-7d86cb9b59-smbxv" (UID: "240ba61a-e439-4f94-b9b3-7903b9b1bc05") : failed to sync secret cache: timed out waiting for the condition
Mar 20 08:50:22.656299 master-0 kubenswrapper[27820]: E0320 08:50:22.656056 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41ac891d-b41d-43c4-be46-35f39671477a-client-ca podName:41ac891d-b41d-43c4-be46-35f39671477a nodeName:}" failed. No retries permitted until 2026-03-20 08:50:23.156048834 +0000 UTC m=+33.251257978 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/41ac891d-b41d-43c4-be46-35f39671477a-client-ca") pod "controller-manager-bc85986b9-8p79x" (UID: "41ac891d-b41d-43c4-be46-35f39671477a") : failed to sync configmap cache: timed out waiting for the condition
Mar 20 08:50:22.656299 master-0 kubenswrapper[27820]: E0320 08:50:22.656067 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41ac891d-b41d-43c4-be46-35f39671477a-config podName:41ac891d-b41d-43c4-be46-35f39671477a nodeName:}" failed. No retries permitted until 2026-03-20 08:50:23.156061785 +0000 UTC m=+33.251270929 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/41ac891d-b41d-43c4-be46-35f39671477a-config") pod "controller-manager-bc85986b9-8p79x" (UID: "41ac891d-b41d-43c4-be46-35f39671477a") : failed to sync configmap cache: timed out waiting for the condition
Mar 20 08:50:22.666738 master-0 kubenswrapper[27820]: I0320 08:50:22.666095 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-zs2v5"
Mar 20 08:50:22.695880 master-0 kubenswrapper[27820]: I0320 08:50:22.695813 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle"
Mar 20 08:50:22.704427 master-0 kubenswrapper[27820]: I0320 08:50:22.704360 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle"
Mar 20 08:50:22.705116 master-0 kubenswrapper[27820]: I0320 08:50:22.705066 27820 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/74bebf0b-6727-4959-8239-a9389e630524-webhook-certs\") on node \"master-0\" DevicePath \"\""
Mar 20 08:50:22.733288 master-0 kubenswrapper[27820]: I0320 08:50:22.733218 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert"
Mar 20 08:50:22.746042 master-0 kubenswrapper[27820]: I0320 08:50:22.745966 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt"
Mar 20 08:50:22.766212 master-0 kubenswrapper[27820]: I0320 08:50:22.766144 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert"
Mar 20 08:50:22.785299 master-0 kubenswrapper[27820]: I0320 08:50:22.785242 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Mar 20 08:50:22.812478 master-0 kubenswrapper[27820]: I0320 08:50:22.812425 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-tp6tv"
Mar 20 08:50:22.834891 master-0 kubenswrapper[27820]: I0320 08:50:22.834829 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-t5n84"
Mar 20 08:50:22.846832 master-0 kubenswrapper[27820]: I0320 08:50:22.846771 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Mar 20 08:50:22.865656 master-0 kubenswrapper[27820]: I0320 08:50:22.865615 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Mar 20 08:50:22.886257 master-0 kubenswrapper[27820]: I0320 08:50:22.886212 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Mar 20 08:50:22.904812 master-0 kubenswrapper[27820]: I0320 08:50:22.904701 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-fxvgv"
Mar 20 08:50:22.925299 master-0 kubenswrapper[27820]: I0320 08:50:22.925206 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Mar 20 08:50:22.945700 master-0 kubenswrapper[27820]: I0320 08:50:22.945647 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Mar 20 08:50:22.965973 master-0 kubenswrapper[27820]: I0320 08:50:22.965930 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator"
Mar 20 08:50:22.985498 master-0 kubenswrapper[27820]: I0320 08:50:22.985454 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-2zrgl"
Mar 20 08:50:23.005016 master-0 kubenswrapper[27820]: I0320 08:50:23.004854 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert"
Mar 20 08:50:23.009472 master-0 kubenswrapper[27820]: I0320 08:50:23.009376 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/44bc88d8-9e01-4521-a704-85d9ca095baa-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7bbc969446-28l2x\" (UID: \"44bc88d8-9e01-4521-a704-85d9ca095baa\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-28l2x"
Mar 20 08:50:23.009472 master-0 kubenswrapper[27820]: I0320 08:50:23.009415 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/0ad95adc-2e0f-4e95-94e7-66e6d240a930-prometheus-operator-tls\") pod \"prometheus-operator-6c8df6d4b-2lwqr\" (UID: \"0ad95adc-2e0f-4e95-94e7-66e6d240a930\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-2lwqr"
Mar 20 08:50:23.009919 master-0 kubenswrapper[27820]: I0320 08:50:23.009582 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/a79bf8fb-19fb-4881-b9e3-b5b21fde0e1d-node-bootstrap-token\") pod \"machine-config-server-6bd59\" (UID: \"a79bf8fb-19fb-4881-b9e3-b5b21fde0e1d\") " pod="openshift-machine-config-operator/machine-config-server-6bd59"
Mar 20 08:50:23.009919 master-0 kubenswrapper[27820]: I0320 08:50:23.009696 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2d125bc5-08ce-434a-bde7-0ba8fc0169ea-cert\") pod \"cluster-autoscaler-operator-866dc4744-626qm\" (UID: \"2d125bc5-08ce-434a-bde7-0ba8fc0169ea\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-626qm"
Mar 20 08:50:23.009919 master-0 kubenswrapper[27820]: I0320 08:50:23.009746 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14fe2256-ce3d-4c70-9e6b-0e3c7d4d96dc-config\") pod \"machine-approver-5c6485487f-897zl\" (UID: \"14fe2256-ce3d-4c70-9e6b-0e3c7d4d96dc\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-897zl"
Mar 20 08:50:23.009919 master-0 kubenswrapper[27820]: I0320 08:50:23.009794 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/44bc88d8-9e01-4521-a704-85d9ca095baa-metrics-client-ca\") pod \"kube-state-metrics-7bbc969446-28l2x\" (UID: \"44bc88d8-9e01-4521-a704-85d9ca095baa\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-28l2x"
Mar 20 08:50:23.009919 master-0 kubenswrapper[27820]: I0320 08:50:23.009847 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2d125bc5-08ce-434a-bde7-0ba8fc0169ea-auth-proxy-config\") pod \"cluster-autoscaler-operator-866dc4744-626qm\" (UID: \"2d125bc5-08ce-434a-bde7-0ba8fc0169ea\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-626qm"
Mar 20 08:50:23.010130 master-0 kubenswrapper[27820]: I0320 08:50:23.009919 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2d125bc5-08ce-434a-bde7-0ba8fc0169ea-cert\") pod \"cluster-autoscaler-operator-866dc4744-626qm\" (UID: \"2d125bc5-08ce-434a-bde7-0ba8fc0169ea\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-626qm"
Mar 20 08:50:23.010130 master-0 kubenswrapper[27820]: I0320 08:50:23.010004 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0f725c4a-234c-44e9-95f2-73f31d2b0fd3-proxy-tls\") pod \"machine-config-daemon-lxv4d\" (UID: \"0f725c4a-234c-44e9-95f2-73f31d2b0fd3\") " pod="openshift-machine-config-operator/machine-config-daemon-lxv4d"
Mar 20 08:50:23.010318 master-0 kubenswrapper[27820]: I0320 08:50:23.010201 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2d125bc5-08ce-434a-bde7-0ba8fc0169ea-auth-proxy-config\") pod \"cluster-autoscaler-operator-866dc4744-626qm\" (UID: \"2d125bc5-08ce-434a-bde7-0ba8fc0169ea\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-626qm"
Mar 20 08:50:23.010911 master-0 kubenswrapper[27820]: I0320 08:50:23.010873 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/56970553-2ac8-4cb5-a12a-b7c1e777c587-samples-operator-tls\") pod \"cluster-samples-operator-85f7577d78-26fw9\" (UID: \"56970553-2ac8-4cb5-a12a-b7c1e777c587\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-26fw9"
Mar 20 08:50:23.010911 master-0 kubenswrapper[27820]: I0320 08:50:23.010906 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/123f1ecb-cc03-462b-b76f-7251bf69d3d6-metrics-client-ca\") pod \"node-exporter-rzg98\" (UID: \"123f1ecb-cc03-462b-b76f-7251bf69d3d6\") " pod="openshift-monitoring/node-exporter-rzg98"
Mar 20 08:50:23.011006 master-0 kubenswrapper[27820]: I0320 08:50:23.010953 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6163bd4b-dc83-4e83-8590-5ac4753bda1c-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7dff898856-vk98n\" (UID: \"6163bd4b-dc83-4e83-8590-5ac4753bda1c\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vk98n"
Mar 20 08:50:23.011130 master-0 kubenswrapper[27820]: I0320 08:50:23.011106 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/56970553-2ac8-4cb5-a12a-b7c1e777c587-samples-operator-tls\") pod \"cluster-samples-operator-85f7577d78-26fw9\" (UID: \"56970553-2ac8-4cb5-a12a-b7c1e777c587\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-26fw9"
Mar 20 08:50:23.011174 master-0 kubenswrapper[27820]: I0320 08:50:23.011125 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/44bc88d8-9e01-4521-a704-85d9ca095baa-kube-state-metrics-tls\") pod \"kube-state-metrics-7bbc969446-28l2x\" (UID: \"44bc88d8-9e01-4521-a704-85d9ca095baa\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-28l2x"
Mar 20 08:50:23.024984 master-0 kubenswrapper[27820]: I0320 08:50:23.024941 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Mar 20 08:50:23.044853 master-0 kubenswrapper[27820]: I0320 08:50:23.044807 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-w58m2"
Mar 20 08:50:23.064909 master-0 kubenswrapper[27820]: I0320 08:50:23.064871 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Mar 20 08:50:23.084773 master-0 kubenswrapper[27820]: I0320 08:50:23.084735 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-mqln7"
Mar 20 08:50:23.105214 master-0 kubenswrapper[27820]: I0320 08:50:23.105164 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Mar 20 08:50:23.112322 master-0 kubenswrapper[27820]: I0320 08:50:23.112275 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/581a8be2-d16c-4fd8-b051-214bd60a2a91-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-744f9dbf77-6mrwl\" (UID: \"581a8be2-d16c-4fd8-b051-214bd60a2a91\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-6mrwl"
Mar 20 08:50:23.112430 master-0 kubenswrapper[27820]: I0320 08:50:23.112330 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/e9c0293a-5340-4ebe-bc8f-43e78ba9f280-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-7d87854d6-848gc\" (UID: \"e9c0293a-5340-4ebe-bc8f-43e78ba9f280\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-848gc"
Mar 20 08:50:23.112430 master-0 kubenswrapper[27820]: I0320 08:50:23.112360 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6163bd4b-dc83-4e83-8590-5ac4753bda1c-images\") pod \"cluster-cloud-controller-manager-operator-7dff898856-vk98n\" (UID: \"6163bd4b-dc83-4e83-8590-5ac4753bda1c\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vk98n"
Mar 20 08:50:23.112430 master-0 kubenswrapper[27820]: I0320 08:50:23.112383 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f6a6e991-c861-48f5-bfde-78762a037343-proxy-tls\") pod \"machine-config-controller-b4f87c5b9-9fl6v\" (UID: \"f6a6e991-c861-48f5-bfde-78762a037343\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-9fl6v"
Mar 20 08:50:23.112430 master-0 kubenswrapper[27820]: I0320 08:50:23.112404 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b6610936-e14a-4532-955c-ea1ee4222259-auth-proxy-config\") pod \"machine-config-operator-84d549f6d5-pkjcg\" (UID: \"b6610936-e14a-4532-955c-ea1ee4222259\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-pkjcg"
Mar 20 08:50:23.112736 master-0 kubenswrapper[27820]: I0320 08:50:23.112569 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/44bc88d8-9e01-4521-a704-85d9ca095baa-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7bbc969446-28l2x\" (UID: \"44bc88d8-9e01-4521-a704-85d9ca095baa\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-28l2x"
Mar 20 08:50:23.112736 master-0 kubenswrapper[27820]: I0320 08:50:23.112613 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/e9c0293a-5340-4ebe-bc8f-43e78ba9f280-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-7d87854d6-848gc\" (UID: \"e9c0293a-5340-4ebe-bc8f-43e78ba9f280\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-848gc"
Mar 20 08:50:23.112736 master-0 kubenswrapper[27820]: I0320 08:50:23.112726 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0f725c4a-234c-44e9-95f2-73f31d2b0fd3-mcd-auth-proxy-config\") pod \"machine-config-daemon-lxv4d\" (UID: \"0f725c4a-234c-44e9-95f2-73f31d2b0fd3\") " pod="openshift-machine-config-operator/machine-config-daemon-lxv4d"
Mar 20 08:50:23.113348 master-0 kubenswrapper[27820]: I0320 08:50:23.112753 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b6610936-e14a-4532-955c-ea1ee4222259-images\") pod \"machine-config-operator-84d549f6d5-pkjcg\" (UID: \"b6610936-e14a-4532-955c-ea1ee4222259\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-pkjcg"
Mar 20 08:50:23.113348 master-0 kubenswrapper[27820]: I0320 08:50:23.112826 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/123f1ecb-cc03-462b-b76f-7251bf69d3d6-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-rzg98\" (UID: \"123f1ecb-cc03-462b-b76f-7251bf69d3d6\") " pod="openshift-monitoring/node-exporter-rzg98"
Mar 20 08:50:23.113348 master-0 kubenswrapper[27820]: I0320 08:50:23.112846 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d62448d-55f1-4bdc-85aa-09e7bdf766cc-trusted-ca-bundle\") pod \"insights-operator-68bf6ff9d6-c7zf4\" (UID: \"6d62448d-55f1-4bdc-85aa-09e7bdf766cc\") " pod="openshift-insights/insights-operator-68bf6ff9d6-c7zf4"
Mar 20 08:50:23.113348 master-0 kubenswrapper[27820]: I0320 08:50:23.113005 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/0ad95adc-2e0f-4e95-94e7-66e6d240a930-metrics-client-ca\") pod \"prometheus-operator-6c8df6d4b-2lwqr\" (UID: \"0ad95adc-2e0f-4e95-94e7-66e6d240a930\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-2lwqr"
Mar 20 08:50:23.113348 master-0 kubenswrapper[27820]: I0320 08:50:23.113053 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d62448d-55f1-4bdc-85aa-09e7bdf766cc-serving-cert\") pod \"insights-operator-68bf6ff9d6-c7zf4\" (UID: \"6d62448d-55f1-4bdc-85aa-09e7bdf766cc\") " pod="openshift-insights/insights-operator-68bf6ff9d6-c7zf4"
Mar 20 08:50:23.113348 master-0 kubenswrapper[27820]: I0320 08:50:23.113074 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b6610936-e14a-4532-955c-ea1ee4222259-images\") pod \"machine-config-operator-84d549f6d5-pkjcg\" (UID: \"b6610936-e14a-4532-955c-ea1ee4222259\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-pkjcg"
Mar 20 08:50:23.113348 master-0 kubenswrapper[27820]: I0320 08:50:23.113124 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80ddf0a4-e853-4de0-b540-81144dfdd31d-config\") pod \"machine-api-operator-6fbb6cf6f9-lr7tb\" (UID: \"80ddf0a4-e853-4de0-b540-81144dfdd31d\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-lr7tb"
Mar 20 08:50:23.113348 master-0 kubenswrapper[27820]: I0320 08:50:23.113145 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d62448d-55f1-4bdc-85aa-09e7bdf766cc-trusted-ca-bundle\") pod \"insights-operator-68bf6ff9d6-c7zf4\" (UID: \"6d62448d-55f1-4bdc-85aa-09e7bdf766cc\") " pod="openshift-insights/insights-operator-68bf6ff9d6-c7zf4"
Mar 20 08:50:23.113348 master-0 kubenswrapper[27820]: I0320 08:50:23.113164 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/6163bd4b-dc83-4e83-8590-5ac4753bda1c-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7dff898856-vk98n\" (UID: \"6163bd4b-dc83-4e83-8590-5ac4753bda1c\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vk98n"
Mar 20 08:50:23.113348 master-0 kubenswrapper[27820]: I0320 08:50:23.113305 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6d62448d-55f1-4bdc-85aa-09e7bdf766cc-serving-cert\") pod \"insights-operator-68bf6ff9d6-c7zf4\" (UID: \"6d62448d-55f1-4bdc-85aa-09e7bdf766cc\") " pod="openshift-insights/insights-operator-68bf6ff9d6-c7zf4"
Mar 20 08:50:23.113348 master-0 kubenswrapper[27820]: I0320 08:50:23.113336 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80ddf0a4-e853-4de0-b540-81144dfdd31d-config\") pod \"machine-api-operator-6fbb6cf6f9-lr7tb\" (UID: \"80ddf0a4-e853-4de0-b540-81144dfdd31d\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-lr7tb"
Mar 20 08:50:23.114029 master-0 kubenswrapper[27820]: I0320 08:50:23.113587 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a86af6a2-55a9-4c4e-8caf-1f51fedb23f5-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6f97756bc8-tkwh6\" (UID: \"a86af6a2-55a9-4c4e-8caf-1f51fedb23f5\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-tkwh6"
Mar 20 08:50:23.114029 master-0 kubenswrapper[27820]: I0320 08:50:23.113633 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/08d9196b-b68f-421b-8754-bfbaa4020a97-ca-certs\") pod \"catalogd-controller-manager-6864dc98f7-tf2gj\" (UID: \"08d9196b-b68f-421b-8754-bfbaa4020a97\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-tf2gj"
Mar 20 08:50:23.114029 master-0 kubenswrapper[27820]: I0320 08:50:23.113678 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/123f1ecb-cc03-462b-b76f-7251bf69d3d6-node-exporter-tls\") pod \"node-exporter-rzg98\" (UID: \"123f1ecb-cc03-462b-b76f-7251bf69d3d6\") " pod="openshift-monitoring/node-exporter-rzg98"
Mar 20 08:50:23.114029 master-0 kubenswrapper[27820]: I0320 08:50:23.113728 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4c9b8a0c-1ead-49b5-8b05-68e6f25ddefc-cert\") pod \"ingress-canary-vzrlt\" (UID: \"4c9b8a0c-1ead-49b5-8b05-68e6f25ddefc\") " pod="openshift-ingress-canary/ingress-canary-vzrlt"
Mar 20 08:50:23.114029 master-0 kubenswrapper[27820]: I0320 08:50:23.113757 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b6610936-e14a-4532-955c-ea1ee4222259-proxy-tls\") pod \"machine-config-operator-84d549f6d5-pkjcg\" (UID: \"b6610936-e14a-4532-955c-ea1ee4222259\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-pkjcg"
Mar 20 08:50:23.114029 master-0 kubenswrapper[27820]: I0320 08:50:23.113796 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/14fe2256-ce3d-4c70-9e6b-0e3c7d4d96dc-auth-proxy-config\") pod \"machine-approver-5c6485487f-897zl\" (UID: \"14fe2256-ce3d-4c70-9e6b-0e3c7d4d96dc\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-897zl"
Mar 20 08:50:23.114029 master-0 kubenswrapper[27820]: I0320 08:50:23.113834 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/581a8be2-d16c-4fd8-b051-214bd60a2a91-cco-trusted-ca\") pod \"cloud-credential-operator-744f9dbf77-6mrwl\" (UID: \"581a8be2-d16c-4fd8-b051-214bd60a2a91\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-6mrwl"
Mar 20 08:50:23.114029 master-0 kubenswrapper[27820]: I0320 08:50:23.113869 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/0ad95adc-2e0f-4e95-94e7-66e6d240a930-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-6c8df6d4b-2lwqr\" (UID: \"0ad95adc-2e0f-4e95-94e7-66e6d240a930\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-2lwqr"
Mar 20 08:50:23.114029 master-0 kubenswrapper[27820]: I0320 08:50:23.113905 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/80ddf0a4-e853-4de0-b540-81144dfdd31d-machine-api-operator-tls\") pod \"machine-api-operator-6fbb6cf6f9-lr7tb\" (UID: \"80ddf0a4-e853-4de0-b540-81144dfdd31d\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-lr7tb"
Mar 20 08:50:23.114029 master-0 kubenswrapper[27820]: I0320 08:50:23.113928 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/08d9196b-b68f-421b-8754-bfbaa4020a97-ca-certs\") pod \"catalogd-controller-manager-6864dc98f7-tf2gj\" (UID: \"08d9196b-b68f-421b-8754-bfbaa4020a97\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-tf2gj"
Mar 20 08:50:23.114029 master-0 kubenswrapper[27820]: I0320 08:50:23.114008 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b6610936-e14a-4532-955c-ea1ee4222259-proxy-tls\") pod \"machine-config-operator-84d549f6d5-pkjcg\" (UID: \"b6610936-e14a-4532-955c-ea1ee4222259\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-pkjcg"
Mar 20 08:50:23.114029 master-0 kubenswrapper[27820]: I0320 08:50:23.114030 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/08d9196b-b68f-421b-8754-bfbaa4020a97-catalogserver-certs\") pod \"catalogd-controller-manager-6864dc98f7-tf2gj\" (UID: \"08d9196b-b68f-421b-8754-bfbaa4020a97\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-tf2gj"
Mar 20 08:50:23.114425 master-0 kubenswrapper[27820]: I0320 08:50:23.114087 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d62448d-55f1-4bdc-85aa-09e7bdf766cc-service-ca-bundle\") pod \"insights-operator-68bf6ff9d6-c7zf4\" (UID: \"6d62448d-55f1-4bdc-85aa-09e7bdf766cc\") " pod="openshift-insights/insights-operator-68bf6ff9d6-c7zf4"
Mar 20 08:50:23.114425 master-0 kubenswrapper[27820]: I0320 08:50:23.114139 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/80ddf0a4-e853-4de0-b540-81144dfdd31d-machine-api-operator-tls\") pod \"machine-api-operator-6fbb6cf6f9-lr7tb\" (UID: \"80ddf0a4-e853-4de0-b540-81144dfdd31d\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-lr7tb"
Mar 20 08:50:23.114425 master-0 kubenswrapper[27820]: I0320 08:50:23.114220 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/08d9196b-b68f-421b-8754-bfbaa4020a97-catalogserver-certs\") pod \"catalogd-controller-manager-6864dc98f7-tf2gj\" (UID: \"08d9196b-b68f-421b-8754-bfbaa4020a97\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-tf2gj"
Mar 20 08:50:23.114425 master-0 kubenswrapper[27820]: I0320 08:50:23.114317 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d62448d-55f1-4bdc-85aa-09e7bdf766cc-service-ca-bundle\") pod \"insights-operator-68bf6ff9d6-c7zf4\" (UID: \"6d62448d-55f1-4bdc-85aa-09e7bdf766cc\") " pod="openshift-insights/insights-operator-68bf6ff9d6-c7zf4"
Mar 20 08:50:23.114425 master-0 kubenswrapper[27820]: I0320 08:50:23.114369 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f6a6e991-c861-48f5-bfde-78762a037343-mcc-auth-proxy-config\") pod \"machine-config-controller-b4f87c5b9-9fl6v\" (UID: \"f6a6e991-c861-48f5-bfde-78762a037343\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-9fl6v"
Mar 20 08:50:23.114561 master-0 kubenswrapper[27820]: I0320 08:50:23.114494 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/14fe2256-ce3d-4c70-9e6b-0e3c7d4d96dc-machine-approver-tls\") pod \"machine-approver-5c6485487f-897zl\" (UID: \"14fe2256-ce3d-4c70-9e6b-0e3c7d4d96dc\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-897zl"
Mar 20 08:50:23.114591 master-0 kubenswrapper[27820]: I0320 08:50:23.114536 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/a79bf8fb-19fb-4881-b9e3-b5b21fde0e1d-certs\") pod \"machine-config-server-6bd59\" (UID: \"a79bf8fb-19fb-4881-b9e3-b5b21fde0e1d\") " pod="openshift-machine-config-operator/machine-config-server-6bd59"
Mar 20 08:50:23.114620 master-0 kubenswrapper[27820]: I0320 08:50:23.114612 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/80ddf0a4-e853-4de0-b540-81144dfdd31d-images\") pod \"machine-api-operator-6fbb6cf6f9-lr7tb\" (UID: \"80ddf0a4-e853-4de0-b540-81144dfdd31d\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-lr7tb"
Mar 20 08:50:23.114833 master-0 kubenswrapper[27820]: I0320 08:50:23.114804 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/80ddf0a4-e853-4de0-b540-81144dfdd31d-images\") pod \"machine-api-operator-6fbb6cf6f9-lr7tb\" (UID: \"80ddf0a4-e853-4de0-b540-81144dfdd31d\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-lr7tb"
Mar 20 08:50:23.125071 master-0 kubenswrapper[27820]: I0320 08:50:23.125031 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Mar 20 08:50:23.133700 master-0 kubenswrapper[27820]: I0320 08:50:23.133670 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b6610936-e14a-4532-955c-ea1ee4222259-auth-proxy-config\") pod \"machine-config-operator-84d549f6d5-pkjcg\" (UID: \"b6610936-e14a-4532-955c-ea1ee4222259\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-pkjcg" Mar 20 08:50:23.133759 master-0 kubenswrapper[27820]: I0320 08:50:23.133670 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0f725c4a-234c-44e9-95f2-73f31d2b0fd3-mcd-auth-proxy-config\") pod \"machine-config-daemon-lxv4d\" (UID: \"0f725c4a-234c-44e9-95f2-73f31d2b0fd3\") " pod="openshift-machine-config-operator/machine-config-daemon-lxv4d" Mar 20 08:50:23.134820 master-0 kubenswrapper[27820]: I0320 08:50:23.134786 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f6a6e991-c861-48f5-bfde-78762a037343-mcc-auth-proxy-config\") pod \"machine-config-controller-b4f87c5b9-9fl6v\" (UID: \"f6a6e991-c861-48f5-bfde-78762a037343\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-9fl6v" Mar 20 08:50:23.145659 master-0 kubenswrapper[27820]: I0320 08:50:23.145620 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Mar 20 08:50:23.165003 master-0 kubenswrapper[27820]: I0320 08:50:23.164900 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Mar 20 08:50:23.174945 master-0 
kubenswrapper[27820]: I0320 08:50:23.174873 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/a86af6a2-55a9-4c4e-8caf-1f51fedb23f5-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6f97756bc8-tkwh6\" (UID: \"a86af6a2-55a9-4c4e-8caf-1f51fedb23f5\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-tkwh6" Mar 20 08:50:23.184876 master-0 kubenswrapper[27820]: I0320 08:50:23.184825 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 20 08:50:23.205643 master-0 kubenswrapper[27820]: I0320 08:50:23.205598 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-rqpg6" Mar 20 08:50:23.215863 master-0 kubenswrapper[27820]: I0320 08:50:23.215805 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/d5b579a3-74b5-4cd2-ae99-5c1d1480c2f0-openshift-state-metrics-tls\") pod \"openshift-state-metrics-5dc6c74576-qclrg\" (UID: \"d5b579a3-74b5-4cd2-ae99-5c1d1480c2f0\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-qclrg" Mar 20 08:50:23.217492 master-0 kubenswrapper[27820]: I0320 08:50:23.217465 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d5b579a3-74b5-4cd2-ae99-5c1d1480c2f0-metrics-client-ca\") pod \"openshift-state-metrics-5dc6c74576-qclrg\" (UID: \"d5b579a3-74b5-4cd2-ae99-5c1d1480c2f0\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-qclrg" Mar 20 08:50:23.218093 master-0 kubenswrapper[27820]: I0320 08:50:23.217543 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/240ba61a-e439-4f94-b9b3-7903b9b1bc05-client-ca\") pod \"route-controller-manager-7d86cb9b59-smbxv\" (UID: \"240ba61a-e439-4f94-b9b3-7903b9b1bc05\") " pod="openshift-route-controller-manager/route-controller-manager-7d86cb9b59-smbxv" Mar 20 08:50:23.218093 master-0 kubenswrapper[27820]: I0320 08:50:23.217704 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/41ac891d-b41d-43c4-be46-35f39671477a-proxy-ca-bundles\") pod \"controller-manager-bc85986b9-8p79x\" (UID: \"41ac891d-b41d-43c4-be46-35f39671477a\") " pod="openshift-controller-manager/controller-manager-bc85986b9-8p79x" Mar 20 08:50:23.218093 master-0 kubenswrapper[27820]: I0320 08:50:23.217814 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/04466971-127b-403e-af45-dad97b6e0c87-secret-metrics-client-certs\") pod \"metrics-server-55d84d7794-56n4c\" (UID: \"04466971-127b-403e-af45-dad97b6e0c87\") " pod="openshift-monitoring/metrics-server-55d84d7794-56n4c" Mar 20 08:50:23.218093 master-0 kubenswrapper[27820]: I0320 08:50:23.217837 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/04466971-127b-403e-af45-dad97b6e0c87-metrics-server-audit-profiles\") pod \"metrics-server-55d84d7794-56n4c\" (UID: \"04466971-127b-403e-af45-dad97b6e0c87\") " pod="openshift-monitoring/metrics-server-55d84d7794-56n4c" Mar 20 08:50:23.218093 master-0 kubenswrapper[27820]: I0320 08:50:23.217856 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a88b1c81-02b5-4c85-9660-5f84c900a946-webhook-certs\") pod \"multus-admission-controller-58c9f8fc64-kr9hd\" (UID: \"a88b1c81-02b5-4c85-9660-5f84c900a946\") " 
pod="openshift-multus/multus-admission-controller-58c9f8fc64-kr9hd" Mar 20 08:50:23.218093 master-0 kubenswrapper[27820]: I0320 08:50:23.217987 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/d5b579a3-74b5-4cd2-ae99-5c1d1480c2f0-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-5dc6c74576-qclrg\" (UID: \"d5b579a3-74b5-4cd2-ae99-5c1d1480c2f0\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-qclrg" Mar 20 08:50:23.218093 master-0 kubenswrapper[27820]: I0320 08:50:23.218037 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04466971-127b-403e-af45-dad97b6e0c87-client-ca-bundle\") pod \"metrics-server-55d84d7794-56n4c\" (UID: \"04466971-127b-403e-af45-dad97b6e0c87\") " pod="openshift-monitoring/metrics-server-55d84d7794-56n4c" Mar 20 08:50:23.218463 master-0 kubenswrapper[27820]: I0320 08:50:23.218130 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/04466971-127b-403e-af45-dad97b6e0c87-secret-metrics-server-tls\") pod \"metrics-server-55d84d7794-56n4c\" (UID: \"04466971-127b-403e-af45-dad97b6e0c87\") " pod="openshift-monitoring/metrics-server-55d84d7794-56n4c" Mar 20 08:50:23.218463 master-0 kubenswrapper[27820]: I0320 08:50:23.218237 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/240ba61a-e439-4f94-b9b3-7903b9b1bc05-config\") pod \"route-controller-manager-7d86cb9b59-smbxv\" (UID: \"240ba61a-e439-4f94-b9b3-7903b9b1bc05\") " pod="openshift-route-controller-manager/route-controller-manager-7d86cb9b59-smbxv" Mar 20 08:50:23.218463 master-0 kubenswrapper[27820]: I0320 08:50:23.218369 27820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/240ba61a-e439-4f94-b9b3-7903b9b1bc05-serving-cert\") pod \"route-controller-manager-7d86cb9b59-smbxv\" (UID: \"240ba61a-e439-4f94-b9b3-7903b9b1bc05\") " pod="openshift-route-controller-manager/route-controller-manager-7d86cb9b59-smbxv" Mar 20 08:50:23.218463 master-0 kubenswrapper[27820]: I0320 08:50:23.218395 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/41ac891d-b41d-43c4-be46-35f39671477a-client-ca\") pod \"controller-manager-bc85986b9-8p79x\" (UID: \"41ac891d-b41d-43c4-be46-35f39671477a\") " pod="openshift-controller-manager/controller-manager-bc85986b9-8p79x" Mar 20 08:50:23.218463 master-0 kubenswrapper[27820]: I0320 08:50:23.218445 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41ac891d-b41d-43c4-be46-35f39671477a-serving-cert\") pod \"controller-manager-bc85986b9-8p79x\" (UID: \"41ac891d-b41d-43c4-be46-35f39671477a\") " pod="openshift-controller-manager/controller-manager-bc85986b9-8p79x" Mar 20 08:50:23.218604 master-0 kubenswrapper[27820]: I0320 08:50:23.218478 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41ac891d-b41d-43c4-be46-35f39671477a-config\") pod \"controller-manager-bc85986b9-8p79x\" (UID: \"41ac891d-b41d-43c4-be46-35f39671477a\") " pod="openshift-controller-manager/controller-manager-bc85986b9-8p79x" Mar 20 08:50:23.218604 master-0 kubenswrapper[27820]: I0320 08:50:23.218526 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04466971-127b-403e-af45-dad97b6e0c87-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-55d84d7794-56n4c\" (UID: 
\"04466971-127b-403e-af45-dad97b6e0c87\") " pod="openshift-monitoring/metrics-server-55d84d7794-56n4c" Mar 20 08:50:23.226342 master-0 kubenswrapper[27820]: I0320 08:50:23.225452 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 20 08:50:23.235640 master-0 kubenswrapper[27820]: I0320 08:50:23.235583 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/14fe2256-ce3d-4c70-9e6b-0e3c7d4d96dc-machine-approver-tls\") pod \"machine-approver-5c6485487f-897zl\" (UID: \"14fe2256-ce3d-4c70-9e6b-0e3c7d4d96dc\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-897zl" Mar 20 08:50:23.244957 master-0 kubenswrapper[27820]: I0320 08:50:23.244908 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 20 08:50:23.255049 master-0 kubenswrapper[27820]: I0320 08:50:23.254990 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/14fe2256-ce3d-4c70-9e6b-0e3c7d4d96dc-auth-proxy-config\") pod \"machine-approver-5c6485487f-897zl\" (UID: \"14fe2256-ce3d-4c70-9e6b-0e3c7d4d96dc\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-897zl" Mar 20 08:50:23.265182 master-0 kubenswrapper[27820]: I0320 08:50:23.265124 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 20 08:50:23.270812 master-0 kubenswrapper[27820]: I0320 08:50:23.270765 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/14fe2256-ce3d-4c70-9e6b-0e3c7d4d96dc-config\") pod \"machine-approver-5c6485487f-897zl\" (UID: \"14fe2256-ce3d-4c70-9e6b-0e3c7d4d96dc\") " 
pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-897zl" Mar 20 08:50:23.285646 master-0 kubenswrapper[27820]: I0320 08:50:23.285615 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 20 08:50:23.312805 master-0 kubenswrapper[27820]: I0320 08:50:23.312748 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca" Mar 20 08:50:23.315589 master-0 kubenswrapper[27820]: I0320 08:50:23.315548 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/581a8be2-d16c-4fd8-b051-214bd60a2a91-cco-trusted-ca\") pod \"cloud-credential-operator-744f9dbf77-6mrwl\" (UID: \"581a8be2-d16c-4fd8-b051-214bd60a2a91\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-6mrwl" Mar 20 08:50:23.325790 master-0 kubenswrapper[27820]: I0320 08:50:23.325738 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 20 08:50:23.331415 master-0 kubenswrapper[27820]: I0320 08:50:23.331372 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0f725c4a-234c-44e9-95f2-73f31d2b0fd3-proxy-tls\") pod \"machine-config-daemon-lxv4d\" (UID: \"0f725c4a-234c-44e9-95f2-73f31d2b0fd3\") " pod="openshift-machine-config-operator/machine-config-daemon-lxv4d" Mar 20 08:50:23.345100 master-0 kubenswrapper[27820]: I0320 08:50:23.345036 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-8kh8p" Mar 20 08:50:23.365228 master-0 kubenswrapper[27820]: I0320 08:50:23.365158 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Mar 
20 08:50:23.373618 master-0 kubenswrapper[27820]: I0320 08:50:23.373567 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/6163bd4b-dc83-4e83-8590-5ac4753bda1c-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7dff898856-vk98n\" (UID: \"6163bd4b-dc83-4e83-8590-5ac4753bda1c\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vk98n" Mar 20 08:50:23.386788 master-0 kubenswrapper[27820]: I0320 08:50:23.386732 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Mar 20 08:50:23.404112 master-0 kubenswrapper[27820]: I0320 08:50:23.404037 27820 request.go:700] Waited for 1.968167113s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dkube-rbac-proxy&limit=500&resourceVersion=0 Mar 20 08:50:23.405609 master-0 kubenswrapper[27820]: I0320 08:50:23.405552 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Mar 20 08:50:23.412424 master-0 kubenswrapper[27820]: I0320 08:50:23.412372 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6163bd4b-dc83-4e83-8590-5ac4753bda1c-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7dff898856-vk98n\" (UID: \"6163bd4b-dc83-4e83-8590-5ac4753bda1c\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vk98n" Mar 20 08:50:23.425076 master-0 kubenswrapper[27820]: I0320 08:50:23.424957 27820 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Mar 20 08:50:23.445277 master-0 kubenswrapper[27820]: I0320 08:50:23.445197 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Mar 20 08:50:23.453398 master-0 kubenswrapper[27820]: I0320 08:50:23.453355 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6163bd4b-dc83-4e83-8590-5ac4753bda1c-images\") pod \"cluster-cloud-controller-manager-operator-7dff898856-vk98n\" (UID: \"6163bd4b-dc83-4e83-8590-5ac4753bda1c\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vk98n" Mar 20 08:50:23.465170 master-0 kubenswrapper[27820]: I0320 08:50:23.465104 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-dl9qh" Mar 20 08:50:23.485562 master-0 kubenswrapper[27820]: I0320 08:50:23.485511 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-9mkkw" Mar 20 08:50:23.505451 master-0 kubenswrapper[27820]: I0320 08:50:23.505396 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Mar 20 08:50:23.515985 master-0 kubenswrapper[27820]: I0320 08:50:23.515944 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/a79bf8fb-19fb-4881-b9e3-b5b21fde0e1d-certs\") pod \"machine-config-server-6bd59\" (UID: \"a79bf8fb-19fb-4881-b9e3-b5b21fde0e1d\") " pod="openshift-machine-config-operator/machine-config-server-6bd59" Mar 20 08:50:23.525308 master-0 kubenswrapper[27820]: I0320 08:50:23.525235 27820 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-bcc6b" Mar 20 08:50:23.546335 master-0 kubenswrapper[27820]: I0320 08:50:23.546256 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 20 08:50:23.553306 master-0 kubenswrapper[27820]: I0320 08:50:23.553254 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f6a6e991-c861-48f5-bfde-78762a037343-proxy-tls\") pod \"machine-config-controller-b4f87c5b9-9fl6v\" (UID: \"f6a6e991-c861-48f5-bfde-78762a037343\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-9fl6v" Mar 20 08:50:23.565133 master-0 kubenswrapper[27820]: I0320 08:50:23.565071 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" Mar 20 08:50:23.573122 master-0 kubenswrapper[27820]: I0320 08:50:23.573048 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/581a8be2-d16c-4fd8-b051-214bd60a2a91-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-744f9dbf77-6mrwl\" (UID: \"581a8be2-d16c-4fd8-b051-214bd60a2a91\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-6mrwl" Mar 20 08:50:23.586286 master-0 kubenswrapper[27820]: I0320 08:50:23.586199 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Mar 20 08:50:23.590578 master-0 kubenswrapper[27820]: I0320 08:50:23.590538 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/a79bf8fb-19fb-4881-b9e3-b5b21fde0e1d-node-bootstrap-token\") pod \"machine-config-server-6bd59\" (UID: \"a79bf8fb-19fb-4881-b9e3-b5b21fde0e1d\") " 
pod="openshift-machine-config-operator/machine-config-server-6bd59" Mar 20 08:50:23.605576 master-0 kubenswrapper[27820]: I0320 08:50:23.605515 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-hcrbd" Mar 20 08:50:23.625293 master-0 kubenswrapper[27820]: I0320 08:50:23.625229 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Mar 20 08:50:23.646017 master-0 kubenswrapper[27820]: I0320 08:50:23.645959 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Mar 20 08:50:23.650712 master-0 kubenswrapper[27820]: I0320 08:50:23.650666 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/44bc88d8-9e01-4521-a704-85d9ca095baa-metrics-client-ca\") pod \"kube-state-metrics-7bbc969446-28l2x\" (UID: \"44bc88d8-9e01-4521-a704-85d9ca095baa\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-28l2x" Mar 20 08:50:23.650712 master-0 kubenswrapper[27820]: I0320 08:50:23.650691 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d5b579a3-74b5-4cd2-ae99-5c1d1480c2f0-metrics-client-ca\") pod \"openshift-state-metrics-5dc6c74576-qclrg\" (UID: \"d5b579a3-74b5-4cd2-ae99-5c1d1480c2f0\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-qclrg" Mar 20 08:50:23.651733 master-0 kubenswrapper[27820]: I0320 08:50:23.651707 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/123f1ecb-cc03-462b-b76f-7251bf69d3d6-metrics-client-ca\") pod \"node-exporter-rzg98\" (UID: \"123f1ecb-cc03-462b-b76f-7251bf69d3d6\") " pod="openshift-monitoring/node-exporter-rzg98" Mar 20 08:50:23.654134 master-0 kubenswrapper[27820]: I0320 
08:50:23.654098 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/0ad95adc-2e0f-4e95-94e7-66e6d240a930-metrics-client-ca\") pod \"prometheus-operator-6c8df6d4b-2lwqr\" (UID: \"0ad95adc-2e0f-4e95-94e7-66e6d240a930\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-2lwqr" Mar 20 08:50:23.665646 master-0 kubenswrapper[27820]: I0320 08:50:23.665604 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt" Mar 20 08:50:23.685062 master-0 kubenswrapper[27820]: I0320 08:50:23.684939 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-54fh7" Mar 20 08:50:23.706096 master-0 kubenswrapper[27820]: I0320 08:50:23.706043 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Mar 20 08:50:23.726535 master-0 kubenswrapper[27820]: I0320 08:50:23.726462 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-dbtrl" Mar 20 08:50:23.746281 master-0 kubenswrapper[27820]: I0320 08:50:23.746174 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt" Mar 20 08:50:23.766254 master-0 kubenswrapper[27820]: I0320 08:50:23.766205 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Mar 20 08:50:23.774155 master-0 kubenswrapper[27820]: I0320 08:50:23.774109 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4c9b8a0c-1ead-49b5-8b05-68e6f25ddefc-cert\") pod \"ingress-canary-vzrlt\" (UID: \"4c9b8a0c-1ead-49b5-8b05-68e6f25ddefc\") " pod="openshift-ingress-canary/ingress-canary-vzrlt" Mar 20 08:50:23.785501 
master-0 kubenswrapper[27820]: I0320 08:50:23.785447 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Mar 20 08:50:23.795072 master-0 kubenswrapper[27820]: I0320 08:50:23.795001 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/123f1ecb-cc03-462b-b76f-7251bf69d3d6-node-exporter-tls\") pod \"node-exporter-rzg98\" (UID: \"123f1ecb-cc03-462b-b76f-7251bf69d3d6\") " pod="openshift-monitoring/node-exporter-rzg98" Mar 20 08:50:23.805718 master-0 kubenswrapper[27820]: I0320 08:50:23.805650 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Mar 20 08:50:23.813363 master-0 kubenswrapper[27820]: I0320 08:50:23.813308 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/123f1ecb-cc03-462b-b76f-7251bf69d3d6-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-rzg98\" (UID: \"123f1ecb-cc03-462b-b76f-7251bf69d3d6\") " pod="openshift-monitoring/node-exporter-rzg98" Mar 20 08:50:23.825839 master-0 kubenswrapper[27820]: I0320 08:50:23.825774 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-vkksc" Mar 20 08:50:23.845351 master-0 kubenswrapper[27820]: I0320 08:50:23.845290 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Mar 20 08:50:23.850199 master-0 kubenswrapper[27820]: I0320 08:50:23.850132 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/0ad95adc-2e0f-4e95-94e7-66e6d240a930-prometheus-operator-tls\") pod \"prometheus-operator-6c8df6d4b-2lwqr\" (UID: \"0ad95adc-2e0f-4e95-94e7-66e6d240a930\") " 
pod="openshift-monitoring/prometheus-operator-6c8df6d4b-2lwqr"
Mar 20 08:50:23.865161 master-0 kubenswrapper[27820]: I0320 08:50:23.865096 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-vlmsv"
Mar 20 08:50:23.886104 master-0 kubenswrapper[27820]: I0320 08:50:23.886034 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config"
Mar 20 08:50:23.898296 master-0 kubenswrapper[27820]: I0320 08:50:23.895034 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/0ad95adc-2e0f-4e95-94e7-66e6d240a930-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-6c8df6d4b-2lwqr\" (UID: \"0ad95adc-2e0f-4e95-94e7-66e6d240a930\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-2lwqr"
Mar 20 08:50:23.906064 master-0 kubenswrapper[27820]: I0320 08:50:23.906004 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config"
Mar 20 08:50:23.910142 master-0 kubenswrapper[27820]: I0320 08:50:23.910085 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/44bc88d8-9e01-4521-a704-85d9ca095baa-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-7bbc969446-28l2x\" (UID: \"44bc88d8-9e01-4521-a704-85d9ca095baa\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-28l2x"
Mar 20 08:50:23.926456 master-0 kubenswrapper[27820]: I0320 08:50:23.926401 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls"
Mar 20 08:50:23.932293 master-0 kubenswrapper[27820]: I0320 08:50:23.932211 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/44bc88d8-9e01-4521-a704-85d9ca095baa-kube-state-metrics-tls\") pod \"kube-state-metrics-7bbc969446-28l2x\" (UID: \"44bc88d8-9e01-4521-a704-85d9ca095baa\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-28l2x"
Mar 20 08:50:23.945993 master-0 kubenswrapper[27820]: I0320 08:50:23.945861 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap"
Mar 20 08:50:23.953435 master-0 kubenswrapper[27820]: I0320 08:50:23.953383 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/44bc88d8-9e01-4521-a704-85d9ca095baa-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-7bbc969446-28l2x\" (UID: \"44bc88d8-9e01-4521-a704-85d9ca095baa\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-28l2x"
Mar 20 08:50:23.985343 master-0 kubenswrapper[27820]: I0320 08:50:23.985292 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs"
Mar 20 08:50:23.989043 master-0 kubenswrapper[27820]: I0320 08:50:23.988996 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/04466971-127b-403e-af45-dad97b6e0c87-secret-metrics-client-certs\") pod \"metrics-server-55d84d7794-56n4c\" (UID: \"04466971-127b-403e-af45-dad97b6e0c87\") " pod="openshift-monitoring/metrics-server-55d84d7794-56n4c"
Mar 20 08:50:24.006761 master-0 kubenswrapper[27820]: I0320 08:50:24.006705 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-6825q"
Mar 20 08:50:24.025545 master-0 kubenswrapper[27820]: I0320 08:50:24.025486 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls"
Mar 20 08:50:24.028247 master-0 kubenswrapper[27820]: I0320 08:50:24.028206 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/d5b579a3-74b5-4cd2-ae99-5c1d1480c2f0-openshift-state-metrics-tls\") pod \"openshift-state-metrics-5dc6c74576-qclrg\" (UID: \"d5b579a3-74b5-4cd2-ae99-5c1d1480c2f0\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-qclrg"
Mar 20 08:50:24.045095 master-0 kubenswrapper[27820]: I0320 08:50:24.045056 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config"
Mar 20 08:50:24.048484 master-0 kubenswrapper[27820]: I0320 08:50:24.048445 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/d5b579a3-74b5-4cd2-ae99-5c1d1480c2f0-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-5dc6c74576-qclrg\" (UID: \"d5b579a3-74b5-4cd2-ae99-5c1d1480c2f0\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-qclrg"
Mar 20 08:50:24.065558 master-0 kubenswrapper[27820]: I0320 08:50:24.065508 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-kwmwv"
Mar 20 08:50:24.085368 master-0 kubenswrapper[27820]: I0320 08:50:24.085307 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-7i2lh8fo12r60"
Mar 20 08:50:24.088961 master-0 kubenswrapper[27820]: I0320 08:50:24.088912 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04466971-127b-403e-af45-dad97b6e0c87-client-ca-bundle\") pod \"metrics-server-55d84d7794-56n4c\" (UID: \"04466971-127b-403e-af45-dad97b6e0c87\") " pod="openshift-monitoring/metrics-server-55d84d7794-56n4c"
Mar 20 08:50:24.105648 master-0 kubenswrapper[27820]: I0320 08:50:24.105583 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls"
Mar 20 08:50:24.109443 master-0 kubenswrapper[27820]: I0320 08:50:24.109401 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/04466971-127b-403e-af45-dad97b6e0c87-secret-metrics-server-tls\") pod \"metrics-server-55d84d7794-56n4c\" (UID: \"04466971-127b-403e-af45-dad97b6e0c87\") " pod="openshift-monitoring/metrics-server-55d84d7794-56n4c"
Mar 20 08:50:24.125504 master-0 kubenswrapper[27820]: I0320 08:50:24.125436 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle"
Mar 20 08:50:24.128945 master-0 kubenswrapper[27820]: I0320 08:50:24.128903 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04466971-127b-403e-af45-dad97b6e0c87-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-55d84d7794-56n4c\" (UID: \"04466971-127b-403e-af45-dad97b6e0c87\") " pod="openshift-monitoring/metrics-server-55d84d7794-56n4c"
Mar 20 08:50:24.144926 master-0 kubenswrapper[27820]: I0320 08:50:24.144859 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles"
Mar 20 08:50:24.148674 master-0 kubenswrapper[27820]: I0320 08:50:24.148626 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/04466971-127b-403e-af45-dad97b6e0c87-metrics-server-audit-profiles\") pod \"metrics-server-55d84d7794-56n4c\" (UID: \"04466971-127b-403e-af45-dad97b6e0c87\") " pod="openshift-monitoring/metrics-server-55d84d7794-56n4c"
Mar 20 08:50:24.165200 master-0 kubenswrapper[27820]: I0320 08:50:24.165148 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Mar 20 08:50:24.185722 master-0 kubenswrapper[27820]: I0320 08:50:24.185657 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-c9tw2"
Mar 20 08:50:24.204848 master-0 kubenswrapper[27820]: I0320 08:50:24.204694 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Mar 20 08:50:24.209327 master-0 kubenswrapper[27820]: I0320 08:50:24.209252 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41ac891d-b41d-43c4-be46-35f39671477a-serving-cert\") pod \"controller-manager-bc85986b9-8p79x\" (UID: \"41ac891d-b41d-43c4-be46-35f39671477a\") " pod="openshift-controller-manager/controller-manager-bc85986b9-8p79x"
Mar 20 08:50:24.218335 master-0 kubenswrapper[27820]: E0320 08:50:24.218255 27820 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 20 08:50:24.218335 master-0 kubenswrapper[27820]: E0320 08:50:24.218315 27820 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: failed to sync secret cache: timed out waiting for the condition
Mar 20 08:50:24.218639 master-0 kubenswrapper[27820]: E0320 08:50:24.218378 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/240ba61a-e439-4f94-b9b3-7903b9b1bc05-client-ca podName:240ba61a-e439-4f94-b9b3-7903b9b1bc05 nodeName:}" failed. No retries permitted until 2026-03-20 08:50:25.218357475 +0000 UTC m=+35.313566619 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/240ba61a-e439-4f94-b9b3-7903b9b1bc05-client-ca") pod "route-controller-manager-7d86cb9b59-smbxv" (UID: "240ba61a-e439-4f94-b9b3-7903b9b1bc05") : failed to sync configmap cache: timed out waiting for the condition
Mar 20 08:50:24.218639 master-0 kubenswrapper[27820]: E0320 08:50:24.218410 27820 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition
Mar 20 08:50:24.218639 master-0 kubenswrapper[27820]: E0320 08:50:24.218434 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a88b1c81-02b5-4c85-9660-5f84c900a946-webhook-certs podName:a88b1c81-02b5-4c85-9660-5f84c900a946 nodeName:}" failed. No retries permitted until 2026-03-20 08:50:25.218406957 +0000 UTC m=+35.313616111 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/a88b1c81-02b5-4c85-9660-5f84c900a946-webhook-certs") pod "multus-admission-controller-58c9f8fc64-kr9hd" (UID: "a88b1c81-02b5-4c85-9660-5f84c900a946") : failed to sync secret cache: timed out waiting for the condition
Mar 20 08:50:24.218639 master-0 kubenswrapper[27820]: E0320 08:50:24.218506 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/240ba61a-e439-4f94-b9b3-7903b9b1bc05-config podName:240ba61a-e439-4f94-b9b3-7903b9b1bc05 nodeName:}" failed. No retries permitted until 2026-03-20 08:50:25.218484179 +0000 UTC m=+35.313693323 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/240ba61a-e439-4f94-b9b3-7903b9b1bc05-config") pod "route-controller-manager-7d86cb9b59-smbxv" (UID: "240ba61a-e439-4f94-b9b3-7903b9b1bc05") : failed to sync configmap cache: timed out waiting for the condition
Mar 20 08:50:24.219068 master-0 kubenswrapper[27820]: E0320 08:50:24.219022 27820 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 20 08:50:24.219068 master-0 kubenswrapper[27820]: E0320 08:50:24.219054 27820 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition
Mar 20 08:50:24.219188 master-0 kubenswrapper[27820]: E0320 08:50:24.219058 27820 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 20 08:50:24.219188 master-0 kubenswrapper[27820]: E0320 08:50:24.219033 27820 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition
Mar 20 08:50:24.219188 master-0 kubenswrapper[27820]: E0320 08:50:24.219111 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41ac891d-b41d-43c4-be46-35f39671477a-client-ca podName:41ac891d-b41d-43c4-be46-35f39671477a nodeName:}" failed. No retries permitted until 2026-03-20 08:50:25.219090636 +0000 UTC m=+35.314300000 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/41ac891d-b41d-43c4-be46-35f39671477a-client-ca") pod "controller-manager-bc85986b9-8p79x" (UID: "41ac891d-b41d-43c4-be46-35f39671477a") : failed to sync configmap cache: timed out waiting for the condition
Mar 20 08:50:24.219188 master-0 kubenswrapper[27820]: E0320 08:50:24.219139 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41ac891d-b41d-43c4-be46-35f39671477a-config podName:41ac891d-b41d-43c4-be46-35f39671477a nodeName:}" failed. No retries permitted until 2026-03-20 08:50:25.219124267 +0000 UTC m=+35.314333421 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/41ac891d-b41d-43c4-be46-35f39671477a-config") pod "controller-manager-bc85986b9-8p79x" (UID: "41ac891d-b41d-43c4-be46-35f39671477a") : failed to sync configmap cache: timed out waiting for the condition
Mar 20 08:50:24.219188 master-0 kubenswrapper[27820]: E0320 08:50:24.219167 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/240ba61a-e439-4f94-b9b3-7903b9b1bc05-serving-cert podName:240ba61a-e439-4f94-b9b3-7903b9b1bc05 nodeName:}" failed. No retries permitted until 2026-03-20 08:50:25.219160438 +0000 UTC m=+35.314369592 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/240ba61a-e439-4f94-b9b3-7903b9b1bc05-serving-cert") pod "route-controller-manager-7d86cb9b59-smbxv" (UID: "240ba61a-e439-4f94-b9b3-7903b9b1bc05") : failed to sync secret cache: timed out waiting for the condition
Mar 20 08:50:24.219188 master-0 kubenswrapper[27820]: E0320 08:50:24.219183 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/41ac891d-b41d-43c4-be46-35f39671477a-proxy-ca-bundles podName:41ac891d-b41d-43c4-be46-35f39671477a nodeName:}" failed. No retries permitted until 2026-03-20 08:50:25.219175548 +0000 UTC m=+35.314384702 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/41ac891d-b41d-43c4-be46-35f39671477a-proxy-ca-bundles") pod "controller-manager-bc85986b9-8p79x" (UID: "41ac891d-b41d-43c4-be46-35f39671477a") : failed to sync configmap cache: timed out waiting for the condition
Mar 20 08:50:24.224833 master-0 kubenswrapper[27820]: I0320 08:50:24.224785 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Mar 20 08:50:24.245375 master-0 kubenswrapper[27820]: I0320 08:50:24.245307 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Mar 20 08:50:24.269996 master-0 kubenswrapper[27820]: I0320 08:50:24.269938 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Mar 20 08:50:24.285979 master-0 kubenswrapper[27820]: I0320 08:50:24.285930 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Mar 20 08:50:24.305102 master-0 kubenswrapper[27820]: I0320 08:50:24.305060 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-6tblf"
Mar 20 08:50:24.325524 master-0 kubenswrapper[27820]: I0320 08:50:24.325473 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Mar 20 08:50:24.345709 master-0 kubenswrapper[27820]: I0320 08:50:24.345655 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Mar 20 08:50:24.371032 master-0 kubenswrapper[27820]: I0320 08:50:24.370980 27820 reflector.go:368] Caches populated for *v1.Secret from
object-"openshift-route-controller-manager"/"serving-cert"
Mar 20 08:50:24.386003 master-0 kubenswrapper[27820]: I0320 08:50:24.385956 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Mar 20 08:50:24.405049 master-0 kubenswrapper[27820]: I0320 08:50:24.404983 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Mar 20 08:50:24.424502 master-0 kubenswrapper[27820]: I0320 08:50:24.424445 27820 request.go:700] Waited for 2.967827939s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-multus/secrets?fieldSelector=metadata.name%3Dmultus-admission-controller-secret&limit=500&resourceVersion=0
Mar 20 08:50:24.426518 master-0 kubenswrapper[27820]: I0320 08:50:24.426465 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Mar 20 08:50:24.448474 master-0 kubenswrapper[27820]: I0320 08:50:24.448435 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-p2lrx"
Mar 20 08:50:24.472563 master-0 kubenswrapper[27820]: E0320 08:50:24.472451 27820 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-master-0\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 20 08:50:24.472947 master-0 kubenswrapper[27820]: I0320 08:50:24.472918 27820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 20 08:50:24.491592 master-0 kubenswrapper[27820]: W0320 08:50:24.491526 27820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36f4a012744c6465102d09cc67ac63e6.slice/crio-7d004c7866a0d2d626abc09d219b312fec3c6430f2d64295191492675914aa50 WatchSource:0}: Error finding container 7d004c7866a0d2d626abc09d219b312fec3c6430f2d64295191492675914aa50: Status 404 returned error can't find the container with id 7d004c7866a0d2d626abc09d219b312fec3c6430f2d64295191492675914aa50
Mar 20 08:50:24.498233 master-0 kubenswrapper[27820]: I0320 08:50:24.498169 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rf9kc\" (UniqueName: \"kubernetes.io/projected/f6a6e991-c861-48f5-bfde-78762a037343-kube-api-access-rf9kc\") pod \"machine-config-controller-b4f87c5b9-9fl6v\" (UID: \"f6a6e991-c861-48f5-bfde-78762a037343\") " pod="openshift-machine-config-operator/machine-config-controller-b4f87c5b9-9fl6v"
Mar 20 08:50:24.517762 master-0 kubenswrapper[27820]: I0320 08:50:24.517718 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8plf\" (UniqueName: \"kubernetes.io/projected/b6610936-e14a-4532-955c-ea1ee4222259-kube-api-access-v8plf\") pod \"machine-config-operator-84d549f6d5-pkjcg\" (UID: \"b6610936-e14a-4532-955c-ea1ee4222259\") " pod="openshift-machine-config-operator/machine-config-operator-84d549f6d5-pkjcg"
Mar 20 08:50:24.536683 master-0 kubenswrapper[27820]: I0320 08:50:24.536631 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4w7k\" (UniqueName: \"kubernetes.io/projected/bb7b640f-22be-41a9-8ab2-e7ae817e2eb0-kube-api-access-l4w7k\") pod \"operator-controller-controller-manager-57777556ff-rnnfz\" (UID: \"bb7b640f-22be-41a9-8ab2-e7ae817e2eb0\") " pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-rnnfz"
Mar 20 08:50:24.558487 master-0 kubenswrapper[27820]: I0320 08:50:24.558408 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4sfm\" (UniqueName: \"kubernetes.io/projected/210dd7f0-d1c0-407a-b89b-f11ef605e5df-kube-api-access-w4sfm\") pod \"ovnkube-control-plane-57f769d897-crrdk\" (UID: \"210dd7f0-d1c0-407a-b89b-f11ef605e5df\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57f769d897-crrdk"
Mar 20 08:50:24.576664 master-0 kubenswrapper[27820]: I0320 08:50:24.576613 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/71ca96e8-5108-455c-bb3c-17977d38e912-kube-api-access\") pod \"kube-controller-manager-operator-ff989d6cc-wbfrm\" (UID: \"71ca96e8-5108-455c-bb3c-17977d38e912\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-ff989d6cc-wbfrm"
Mar 20 08:50:24.602257 master-0 kubenswrapper[27820]: I0320 08:50:24.599697 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v86j8\" (UniqueName: \"kubernetes.io/projected/6d26f719-43b9-4c1c-9a54-ff800177db68-kube-api-access-v86j8\") pod \"cluster-node-tuning-operator-598fbc5f8f-zxgdk\" (UID: \"6d26f719-43b9-4c1c-9a54-ff800177db68\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-598fbc5f8f-zxgdk"
Mar 20 08:50:24.621377 master-0 kubenswrapper[27820]: I0320 08:50:24.621328 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btwhr\" (UniqueName: \"kubernetes.io/projected/a79bf8fb-19fb-4881-b9e3-b5b21fde0e1d-kube-api-access-btwhr\") pod \"machine-config-server-6bd59\" (UID: \"a79bf8fb-19fb-4881-b9e3-b5b21fde0e1d\") " pod="openshift-machine-config-operator/machine-config-server-6bd59"
Mar 20 08:50:24.637748 master-0 kubenswrapper[27820]: I0320 08:50:24.637696 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7w8xs\" (UniqueName: \"kubernetes.io/projected/64d09f81-5fb6-462a-a736-5649779a6b1a-kube-api-access-7w8xs\") pod \"redhat-marketplace-hj5tl\" (UID: \"64d09f81-5fb6-462a-a736-5649779a6b1a\") " pod="openshift-marketplace/redhat-marketplace-hj5tl"
Mar 20 08:50:24.657386 master-0 kubenswrapper[27820]: I0320 08:50:24.657339 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bxn6\" (UniqueName: \"kubernetes.io/projected/890a6c24-1dbb-4331-952b-5712ac00788e-kube-api-access-7bxn6\") pod \"migrator-8487694857-ltk2p\" (UID: \"890a6c24-1dbb-4331-952b-5712ac00788e\") " pod="openshift-kube-storage-version-migrator/migrator-8487694857-ltk2p"
Mar 20 08:50:24.675961 master-0 kubenswrapper[27820]: I0320 08:50:24.675910 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvqv5\" (UniqueName: \"kubernetes.io/projected/08d9196b-b68f-421b-8754-bfbaa4020a97-kube-api-access-tvqv5\") pod \"catalogd-controller-manager-6864dc98f7-tf2gj\" (UID: \"08d9196b-b68f-421b-8754-bfbaa4020a97\") " pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-tf2gj"
Mar 20 08:50:24.705327 master-0 kubenswrapper[27820]: I0320 08:50:24.705057 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5r8zt\" (UniqueName: \"kubernetes.io/projected/57189f7c-5987-457d-a299-0a6b9bcb3e24-kube-api-access-5r8zt\") pod \"cluster-image-registry-operator-5549dc66cb-cg8qr\" (UID: \"57189f7c-5987-457d-a299-0a6b9bcb3e24\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-cg8qr"
Mar 20 08:50:24.722831 master-0 kubenswrapper[27820]: I0320 08:50:24.722665 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgl8m\" (UniqueName: \"kubernetes.io/projected/2ef691ec-d1f0-4c97-97e4-4aa7a6c0a86c-kube-api-access-rgl8m\") pod \"openshift-apiserver-operator-d65958b8-ntdqc\" (UID: \"2ef691ec-d1f0-4c97-97e4-4aa7a6c0a86c\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-d65958b8-ntdqc"
Mar 20 08:50:24.738542 master-0 kubenswrapper[27820]: I0320 08:50:24.738449 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9mbs\" (UniqueName: \"kubernetes.io/projected/6d62448d-55f1-4bdc-85aa-09e7bdf766cc-kube-api-access-n9mbs\") pod \"insights-operator-68bf6ff9d6-c7zf4\" (UID: \"6d62448d-55f1-4bdc-85aa-09e7bdf766cc\") " pod="openshift-insights/insights-operator-68bf6ff9d6-c7zf4"
Mar 20 08:50:24.759930 master-0 kubenswrapper[27820]: I0320 08:50:24.759899 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-br4bc\" (UniqueName: \"kubernetes.io/projected/6a6a187d-5b25-4d63-939e-c04e07369371-kube-api-access-br4bc\") pod \"apiserver-5595498c49-hrfrr\" (UID: \"6a6a187d-5b25-4d63-939e-c04e07369371\") " pod="openshift-oauth-apiserver/apiserver-5595498c49-hrfrr"
Mar 20 08:50:24.777442 master-0 kubenswrapper[27820]: I0320 08:50:24.777396 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56bt6\" (UniqueName: \"kubernetes.io/projected/f202273a-b111-46ce-b404-7e481d2c7ff9-kube-api-access-56bt6\") pod \"cluster-baremetal-operator-6f69995874-b25f2\" (UID: \"f202273a-b111-46ce-b404-7e481d2c7ff9\") " pod="openshift-machine-api/cluster-baremetal-operator-6f69995874-b25f2"
Mar 20 08:50:24.797936 master-0 kubenswrapper[27820]: I0320 08:50:24.797883 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dkgv\" (UniqueName: \"kubernetes.io/projected/5707066a-bd66-41bc-8cea-cff1630ab5ee-kube-api-access-2dkgv\") pod \"cluster-monitoring-operator-58845fbb57-6vgt6\" (UID: \"5707066a-bd66-41bc-8cea-cff1630ab5ee\") " pod="openshift-monitoring/cluster-monitoring-operator-58845fbb57-6vgt6"
Mar 20 08:50:24.825631 master-0 kubenswrapper[27820]: I0320
08:50:24.825569 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qqcw\" (UniqueName: \"kubernetes.io/projected/22f85e98-eb36-46b2-ab5d-7c21e060cba5-kube-api-access-8qqcw\") pod \"ingress-operator-66b84d69b-dknxr\" (UID: \"22f85e98-eb36-46b2-ab5d-7c21e060cba5\") " pod="openshift-ingress-operator/ingress-operator-66b84d69b-dknxr"
Mar 20 08:50:24.848314 master-0 kubenswrapper[27820]: I0320 08:50:24.845468 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcgqr\" (UniqueName: \"kubernetes.io/projected/acbaba45-12d9-40b9-818c-4b091d7929b1-kube-api-access-kcgqr\") pod \"csi-snapshot-controller-operator-5f5d689c6b-b5lg6\" (UID: \"acbaba45-12d9-40b9-818c-4b091d7929b1\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5f5d689c6b-b5lg6"
Mar 20 08:50:24.857910 master-0 kubenswrapper[27820]: I0320 08:50:24.857856 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swxwt\" (UniqueName: \"kubernetes.io/projected/9817d1ec-3d7c-49fb-8e41-26f5727ef9e8-kube-api-access-swxwt\") pod \"network-operator-7bd846bfc4-x4w25\" (UID: \"9817d1ec-3d7c-49fb-8e41-26f5727ef9e8\") " pod="openshift-network-operator/network-operator-7bd846bfc4-x4w25"
Mar 20 08:50:24.880581 master-0 kubenswrapper[27820]: I0320 08:50:24.880526 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvxjl\" (UniqueName: \"kubernetes.io/projected/1ae4d0d7-67e6-4e0c-9265-8e48ac2d4cbf-kube-api-access-fvxjl\") pod \"node-resolver-j7ngf\" (UID: \"1ae4d0d7-67e6-4e0c-9265-8e48ac2d4cbf\") " pod="openshift-dns/node-resolver-j7ngf"
Mar 20 08:50:24.902064 master-0 kubenswrapper[27820]: I0320 08:50:24.902019 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdqzn\" (UniqueName: \"kubernetes.io/projected/44bc88d8-9e01-4521-a704-85d9ca095baa-kube-api-access-hdqzn\") pod \"kube-state-metrics-7bbc969446-28l2x\" (UID: \"44bc88d8-9e01-4521-a704-85d9ca095baa\") " pod="openshift-monitoring/kube-state-metrics-7bbc969446-28l2x"
Mar 20 08:50:24.921591 master-0 kubenswrapper[27820]: I0320 08:50:24.921384 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9j527\" (UniqueName: \"kubernetes.io/projected/9ce482dc-d0ac-40bc-9058-a1cfdc81575e-kube-api-access-9j527\") pod \"catalog-operator-68f85b4d6c-hdw98\" (UID: \"9ce482dc-d0ac-40bc-9058-a1cfdc81575e\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hdw98"
Mar 20 08:50:24.935856 master-0 kubenswrapper[27820]: I0320 08:50:24.935796 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmssd\" (UniqueName: \"kubernetes.io/projected/9635cdae-0983-4c97-b3ed-dc7a785b1bb6-kube-api-access-zmssd\") pod \"redhat-operators-bt7wn\" (UID: \"9635cdae-0983-4c97-b3ed-dc7a785b1bb6\") " pod="openshift-marketplace/redhat-operators-bt7wn"
Mar 20 08:50:24.962792 master-0 kubenswrapper[27820]: I0320 08:50:24.962734 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x82xz\" (UniqueName: \"kubernetes.io/projected/a86af6a2-55a9-4c4e-8caf-1f51fedb23f5-kube-api-access-x82xz\") pod \"control-plane-machine-set-operator-6f97756bc8-tkwh6\" (UID: \"a86af6a2-55a9-4c4e-8caf-1f51fedb23f5\") " pod="openshift-machine-api/control-plane-machine-set-operator-6f97756bc8-tkwh6"
Mar 20 08:50:24.976783 master-0 kubenswrapper[27820]: I0320 08:50:24.976739 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrbnx\" (UniqueName: \"kubernetes.io/projected/56970553-2ac8-4cb5-a12a-b7c1e777c587-kube-api-access-zrbnx\") pod \"cluster-samples-operator-85f7577d78-26fw9\" (UID: \"56970553-2ac8-4cb5-a12a-b7c1e777c587\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-85f7577d78-26fw9"
Mar 20 08:50:25.010298 master-0 kubenswrapper[27820]: I0320 08:50:25.010223 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlgd7\" (UniqueName: \"kubernetes.io/projected/14fe2256-ce3d-4c70-9e6b-0e3c7d4d96dc-kube-api-access-hlgd7\") pod \"machine-approver-5c6485487f-897zl\" (UID: \"14fe2256-ce3d-4c70-9e6b-0e3c7d4d96dc\") " pod="openshift-cluster-machine-approver/machine-approver-5c6485487f-897zl"
Mar 20 08:50:25.042310 master-0 kubenswrapper[27820]: I0320 08:50:25.037798 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lnpz\" (UniqueName: \"kubernetes.io/projected/0ad95adc-2e0f-4e95-94e7-66e6d240a930-kube-api-access-5lnpz\") pod \"prometheus-operator-6c8df6d4b-2lwqr\" (UID: \"0ad95adc-2e0f-4e95-94e7-66e6d240a930\") " pod="openshift-monitoring/prometheus-operator-6c8df6d4b-2lwqr"
Mar 20 08:50:25.044451 master-0 kubenswrapper[27820]: I0320 08:50:25.044417 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmb9v\" (UniqueName: \"kubernetes.io/projected/2d125bc5-08ce-434a-bde7-0ba8fc0169ea-kube-api-access-hmb9v\") pod \"cluster-autoscaler-operator-866dc4744-626qm\" (UID: \"2d125bc5-08ce-434a-bde7-0ba8fc0169ea\") " pod="openshift-machine-api/cluster-autoscaler-operator-866dc4744-626qm"
Mar 20 08:50:25.058960 master-0 kubenswrapper[27820]: I0320 08:50:25.058326 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnk9k\" (UniqueName: \"kubernetes.io/projected/8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072-kube-api-access-hnk9k\") pod \"authentication-operator-5885bfd7f4-tdpfq\" (UID: \"8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072\") " pod="openshift-authentication-operator/authentication-operator-5885bfd7f4-tdpfq"
Mar 20 08:50:25.078918 master-0 kubenswrapper[27820]: I0320 08:50:25.078865 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nf5kc\" (UniqueName: \"kubernetes.io/projected/ca6e644f-c53b-41dd-a16f-9fb9997533dd-kube-api-access-nf5kc\") pod \"network-check-target-j9jjm\" (UID: \"ca6e644f-c53b-41dd-a16f-9fb9997533dd\") " pod="openshift-network-diagnostics/network-check-target-j9jjm"
Mar 20 08:50:25.096815 master-0 kubenswrapper[27820]: I0320 08:50:25.096783 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xv94\" (UniqueName: \"kubernetes.io/projected/09a5682c-4f13-4b8c-8179-3e6dfa8f98db-kube-api-access-8xv94\") pod \"service-ca-operator-b865698dc-zxwrd\" (UID: \"09a5682c-4f13-4b8c-8179-3e6dfa8f98db\") " pod="openshift-service-ca-operator/service-ca-operator-b865698dc-zxwrd"
Mar 20 08:50:25.127398 master-0 kubenswrapper[27820]: I0320 08:50:25.127352 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5hsj\" (UniqueName: \"kubernetes.io/projected/7949621e-4da6-4e43-a1f3-2ef303bf6aa6-kube-api-access-j5hsj\") pod \"multus-pxqwj\" (UID: \"7949621e-4da6-4e43-a1f3-2ef303bf6aa6\") " pod="openshift-multus/multus-pxqwj"
Mar 20 08:50:25.140132 master-0 kubenswrapper[27820]: I0320 08:50:25.140079 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dkqm\" (UniqueName: \"kubernetes.io/projected/1746482a-d1a3-4eac-8bc9-643b6af75163-kube-api-access-2dkqm\") pod \"service-ca-79bc6b8d76-trbxh\" (UID: \"1746482a-d1a3-4eac-8bc9-643b6af75163\") " pod="openshift-service-ca/service-ca-79bc6b8d76-trbxh"
Mar 20 08:50:25.159439 master-0 kubenswrapper[27820]: I0320 08:50:25.159401 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5v7l\" (UniqueName: \"kubernetes.io/projected/20ff930f-ec0d-40ed-a879-1546691f685d-kube-api-access-d5v7l\") pod \"kube-storage-version-migrator-operator-6bb5bfb6fd-kxxrs\" (UID: \"20ff930f-ec0d-40ed-a879-1546691f685d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-6bb5bfb6fd-kxxrs"
Mar 20 08:50:25.176586 master-0 kubenswrapper[27820]: I0320 08:50:25.176532 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsj2w\" (UniqueName: \"kubernetes.io/projected/ff2dfe9d-2834-43cb-b093-0831b2b87131-kube-api-access-zsj2w\") pod \"dns-operator-9c5679d8f-xfns6\" (UID: \"ff2dfe9d-2834-43cb-b093-0831b2b87131\") " pod="openshift-dns-operator/dns-operator-9c5679d8f-xfns6"
Mar 20 08:50:25.205152 master-0 kubenswrapper[27820]: I0320 08:50:25.205118 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5kbh\" (UniqueName: \"kubernetes.io/projected/e9425526-9f51-4302-a19d-a8107f56c582-kube-api-access-z5kbh\") pod \"cluster-olm-operator-67dcd4998-gwtrl\" (UID: \"e9425526-9f51-4302-a19d-a8107f56c582\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-67dcd4998-gwtrl"
Mar 20 08:50:25.219087 master-0 kubenswrapper[27820]: I0320 08:50:25.219040 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmk45\" (UniqueName: \"kubernetes.io/projected/4c9b8a0c-1ead-49b5-8b05-68e6f25ddefc-kube-api-access-mmk45\") pod \"ingress-canary-vzrlt\" (UID: \"4c9b8a0c-1ead-49b5-8b05-68e6f25ddefc\") " pod="openshift-ingress-canary/ingress-canary-vzrlt"
Mar 20 08:50:25.236500 master-0 kubenswrapper[27820]: I0320 08:50:25.236391 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqgkl\" (UniqueName: \"kubernetes.io/projected/a2a3df6e-e327-4e97-b8f0-f2d6cdd1e5f9-kube-api-access-rqgkl\") pod \"csi-snapshot-controller-64854d9cff-gng67\" (UID: \"a2a3df6e-e327-4e97-b8f0-f2d6cdd1e5f9\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-64854d9cff-gng67"
Mar 20 08:50:25.256090 master-0 kubenswrapper[27820]: I0320 08:50:25.256057 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/240ba61a-e439-4f94-b9b3-7903b9b1bc05-serving-cert\") pod \"route-controller-manager-7d86cb9b59-smbxv\" (UID: \"240ba61a-e439-4f94-b9b3-7903b9b1bc05\") " pod="openshift-route-controller-manager/route-controller-manager-7d86cb9b59-smbxv"
Mar 20 08:50:25.256195 master-0 kubenswrapper[27820]: I0320 08:50:25.256099 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/41ac891d-b41d-43c4-be46-35f39671477a-client-ca\") pod \"controller-manager-bc85986b9-8p79x\" (UID: \"41ac891d-b41d-43c4-be46-35f39671477a\") " pod="openshift-controller-manager/controller-manager-bc85986b9-8p79x"
Mar 20 08:50:25.256195 master-0 kubenswrapper[27820]: I0320 08:50:25.256138 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41ac891d-b41d-43c4-be46-35f39671477a-config\") pod \"controller-manager-bc85986b9-8p79x\" (UID: \"41ac891d-b41d-43c4-be46-35f39671477a\") " pod="openshift-controller-manager/controller-manager-bc85986b9-8p79x"
Mar 20 08:50:25.257331 master-0 kubenswrapper[27820]: I0320 08:50:25.256480 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/240ba61a-e439-4f94-b9b3-7903b9b1bc05-client-ca\") pod \"route-controller-manager-7d86cb9b59-smbxv\" (UID: \"240ba61a-e439-4f94-b9b3-7903b9b1bc05\") " pod="openshift-route-controller-manager/route-controller-manager-7d86cb9b59-smbxv"
Mar 20 08:50:25.257331 master-0 kubenswrapper[27820]: I0320 08:50:25.256555 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/240ba61a-e439-4f94-b9b3-7903b9b1bc05-serving-cert\") pod \"route-controller-manager-7d86cb9b59-smbxv\" (UID: \"240ba61a-e439-4f94-b9b3-7903b9b1bc05\") " pod="openshift-route-controller-manager/route-controller-manager-7d86cb9b59-smbxv"
Mar 20 08:50:25.257331 master-0 kubenswrapper[27820]: I0320 08:50:25.256731 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/41ac891d-b41d-43c4-be46-35f39671477a-client-ca\") pod \"controller-manager-bc85986b9-8p79x\" (UID: \"41ac891d-b41d-43c4-be46-35f39671477a\") " pod="openshift-controller-manager/controller-manager-bc85986b9-8p79x"
Mar 20 08:50:25.257331 master-0 kubenswrapper[27820]: I0320 08:50:25.256745 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/240ba61a-e439-4f94-b9b3-7903b9b1bc05-client-ca\") pod \"route-controller-manager-7d86cb9b59-smbxv\" (UID: \"240ba61a-e439-4f94-b9b3-7903b9b1bc05\") " pod="openshift-route-controller-manager/route-controller-manager-7d86cb9b59-smbxv"
Mar 20 08:50:25.257331 master-0 kubenswrapper[27820]: I0320 08:50:25.256788 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41ac891d-b41d-43c4-be46-35f39671477a-config\") pod \"controller-manager-bc85986b9-8p79x\" (UID: \"41ac891d-b41d-43c4-be46-35f39671477a\") " pod="openshift-controller-manager/controller-manager-bc85986b9-8p79x"
Mar 20 08:50:25.257331 master-0 kubenswrapper[27820]: I0320 08:50:25.256813 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/41ac891d-b41d-43c4-be46-35f39671477a-proxy-ca-bundles\") pod \"controller-manager-bc85986b9-8p79x\" (UID: \"41ac891d-b41d-43c4-be46-35f39671477a\") " pod="openshift-controller-manager/controller-manager-bc85986b9-8p79x"
Mar 20 08:50:25.257331 master-0 kubenswrapper[27820]: I0320 08:50:25.256900 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a88b1c81-02b5-4c85-9660-5f84c900a946-webhook-certs\") pod \"multus-admission-controller-58c9f8fc64-kr9hd\" (UID: \"a88b1c81-02b5-4c85-9660-5f84c900a946\") " 
pod="openshift-multus/multus-admission-controller-58c9f8fc64-kr9hd" Mar 20 08:50:25.257331 master-0 kubenswrapper[27820]: I0320 08:50:25.257008 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/240ba61a-e439-4f94-b9b3-7903b9b1bc05-config\") pod \"route-controller-manager-7d86cb9b59-smbxv\" (UID: \"240ba61a-e439-4f94-b9b3-7903b9b1bc05\") " pod="openshift-route-controller-manager/route-controller-manager-7d86cb9b59-smbxv" Mar 20 08:50:25.257331 master-0 kubenswrapper[27820]: I0320 08:50:25.257094 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/41ac891d-b41d-43c4-be46-35f39671477a-proxy-ca-bundles\") pod \"controller-manager-bc85986b9-8p79x\" (UID: \"41ac891d-b41d-43c4-be46-35f39671477a\") " pod="openshift-controller-manager/controller-manager-bc85986b9-8p79x" Mar 20 08:50:25.257331 master-0 kubenswrapper[27820]: I0320 08:50:25.257250 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/240ba61a-e439-4f94-b9b3-7903b9b1bc05-config\") pod \"route-controller-manager-7d86cb9b59-smbxv\" (UID: \"240ba61a-e439-4f94-b9b3-7903b9b1bc05\") " pod="openshift-route-controller-manager/route-controller-manager-7d86cb9b59-smbxv" Mar 20 08:50:25.257331 master-0 kubenswrapper[27820]: I0320 08:50:25.257303 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a88b1c81-02b5-4c85-9660-5f84c900a946-webhook-certs\") pod \"multus-admission-controller-58c9f8fc64-kr9hd\" (UID: \"a88b1c81-02b5-4c85-9660-5f84c900a946\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-kr9hd" Mar 20 08:50:25.259118 master-0 kubenswrapper[27820]: I0320 08:50:25.259102 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzprw\" (UniqueName: 
\"kubernetes.io/projected/61ab4d32-c732-4be5-aa85-a2e1dd21cb60-kube-api-access-lzprw\") pod \"openshift-controller-manager-operator-8c94f4649-w8c24\" (UID: \"61ab4d32-c732-4be5-aa85-a2e1dd21cb60\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8c94f4649-w8c24" Mar 20 08:50:25.288883 master-0 kubenswrapper[27820]: I0320 08:50:25.288360 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sglvd\" (UniqueName: \"kubernetes.io/projected/22ff82cf-0d7d-4955-9b7c-97757acbc021-kube-api-access-sglvd\") pod \"multus-additional-cni-plugins-x7vrg\" (UID: \"22ff82cf-0d7d-4955-9b7c-97757acbc021\") " pod="openshift-multus/multus-additional-cni-plugins-x7vrg" Mar 20 08:50:25.296728 master-0 kubenswrapper[27820]: I0320 08:50:25.296695 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ns97v\" (UniqueName: \"kubernetes.io/projected/e9c0293a-5340-4ebe-bc8f-43e78ba9f280-kube-api-access-ns97v\") pod \"cluster-storage-operator-7d87854d6-848gc\" (UID: \"e9c0293a-5340-4ebe-bc8f-43e78ba9f280\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-7d87854d6-848gc" Mar 20 08:50:25.331567 master-0 kubenswrapper[27820]: I0320 08:50:25.330734 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2j6m\" (UniqueName: \"kubernetes.io/projected/d26a4fce-8eed-44d0-96a3-40ffd0b336a6-kube-api-access-s2j6m\") pod \"ovnkube-node-bvndl\" (UID: \"d26a4fce-8eed-44d0-96a3-40ffd0b336a6\") " pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:50:25.352191 master-0 kubenswrapper[27820]: I0320 08:50:25.352139 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/22f85e98-eb36-46b2-ab5d-7c21e060cba5-bound-sa-token\") pod \"ingress-operator-66b84d69b-dknxr\" (UID: \"22f85e98-eb36-46b2-ab5d-7c21e060cba5\") " 
pod="openshift-ingress-operator/ingress-operator-66b84d69b-dknxr" Mar 20 08:50:25.362608 master-0 kubenswrapper[27820]: I0320 08:50:25.361797 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r22fm\" (UniqueName: \"kubernetes.io/projected/0f725c4a-234c-44e9-95f2-73f31d2b0fd3-kube-api-access-r22fm\") pod \"machine-config-daemon-lxv4d\" (UID: \"0f725c4a-234c-44e9-95f2-73f31d2b0fd3\") " pod="openshift-machine-config-operator/machine-config-daemon-lxv4d" Mar 20 08:50:25.379570 master-0 kubenswrapper[27820]: I0320 08:50:25.379524 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncztx\" (UniqueName: \"kubernetes.io/projected/06bf3fa7-4a9c-4e7f-aa6b-4d4f614ea047-kube-api-access-ncztx\") pod \"network-check-source-b4bf74f6-nnjv9\" (UID: \"06bf3fa7-4a9c-4e7f-aa6b-4d4f614ea047\") " pod="openshift-network-diagnostics/network-check-source-b4bf74f6-nnjv9" Mar 20 08:50:25.397556 master-0 kubenswrapper[27820]: I0320 08:50:25.397515 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f92mb\" (UniqueName: \"kubernetes.io/projected/74bebf0b-6727-4959-8239-a9389e630524-kube-api-access-f92mb\") pod \"multus-admission-controller-5dbbb8b86f-vhrdf\" (UID: \"74bebf0b-6727-4959-8239-a9389e630524\") " pod="openshift-multus/multus-admission-controller-5dbbb8b86f-vhrdf" Mar 20 08:50:25.418610 master-0 kubenswrapper[27820]: I0320 08:50:25.418514 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssmph\" (UniqueName: \"kubernetes.io/projected/581a8be2-d16c-4fd8-b051-214bd60a2a91-kube-api-access-ssmph\") pod \"cloud-credential-operator-744f9dbf77-6mrwl\" (UID: \"581a8be2-d16c-4fd8-b051-214bd60a2a91\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-744f9dbf77-6mrwl" Mar 20 08:50:25.443605 master-0 kubenswrapper[27820]: I0320 08:50:25.443553 27820 request.go:700] Waited for 3.920772254s due to client-side 
throttling, not priority and fairness, request: POST:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-version/serviceaccounts/default/token Mar 20 08:50:25.450367 master-0 kubenswrapper[27820]: I0320 08:50:25.449694 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdsv9\" (UniqueName: \"kubernetes.io/projected/0e79950f-50a5-46ec-b836-7a35dcce2851-kube-api-access-rdsv9\") pod \"package-server-manager-7b95f86987-cgc9q\" (UID: \"0e79950f-50a5-46ec-b836-7a35dcce2851\") " pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-cgc9q" Mar 20 08:50:25.455780 master-0 kubenswrapper[27820]: I0320 08:50:25.455743 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bca4cc7c-839d-4877-b0aa-c07607fea404-kube-api-access\") pod \"cluster-version-operator-7d58488df-bzstx\" (UID: \"bca4cc7c-839d-4877-b0aa-c07607fea404\") " pod="openshift-cluster-version/cluster-version-operator-7d58488df-bzstx" Mar 20 08:50:25.460283 master-0 kubenswrapper[27820]: I0320 08:50:25.460236 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f92mb\" (UniqueName: \"kubernetes.io/projected/74bebf0b-6727-4959-8239-a9389e630524-kube-api-access-f92mb\") pod \"74bebf0b-6727-4959-8239-a9389e630524\" (UID: \"74bebf0b-6727-4959-8239-a9389e630524\") " Mar 20 08:50:25.463137 master-0 kubenswrapper[27820]: I0320 08:50:25.463096 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74bebf0b-6727-4959-8239-a9389e630524-kube-api-access-f92mb" (OuterVolumeSpecName: "kube-api-access-f92mb") pod "74bebf0b-6727-4959-8239-a9389e630524" (UID: "74bebf0b-6727-4959-8239-a9389e630524"). InnerVolumeSpecName "kube-api-access-f92mb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:50:25.476652 master-0 kubenswrapper[27820]: I0320 08:50:25.476603 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpr8b\" (UniqueName: \"kubernetes.io/projected/3065e4b4-4493-41ce-b9d2-89315475f74f-kube-api-access-wpr8b\") pod \"openshift-config-operator-95bf4f4d-25jrp\" (UID: \"3065e4b4-4493-41ce-b9d2-89315475f74f\") " pod="openshift-config-operator/openshift-config-operator-95bf4f4d-25jrp" Mar 20 08:50:25.499949 master-0 kubenswrapper[27820]: I0320 08:50:25.499896 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5wnd\" (UniqueName: \"kubernetes.io/projected/97ad1db7-0bf9-4faf-9fa5-0f3df7dab777-kube-api-access-w5wnd\") pod \"tuned-zgm52\" (UID: \"97ad1db7-0bf9-4faf-9fa5-0f3df7dab777\") " pod="openshift-cluster-node-tuning-operator/tuned-zgm52" Mar 20 08:50:25.519116 master-0 kubenswrapper[27820]: I0320 08:50:25.519076 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dx99f\" (UniqueName: \"kubernetes.io/projected/b097596e-79e1-44d1-be8a-96340042a041-kube-api-access-dx99f\") pod \"iptables-alerter-9xlf2\" (UID: \"b097596e-79e1-44d1-be8a-96340042a041\") " pod="openshift-network-operator/iptables-alerter-9xlf2" Mar 20 08:50:25.539648 master-0 kubenswrapper[27820]: I0320 08:50:25.539599 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/65157a9b-3df7-4cc1-a85a-a5dfa59921ad-kube-api-access\") pod \"openshift-kube-scheduler-operator-dddff6458-vmwqt\" (UID: \"65157a9b-3df7-4cc1-a85a-a5dfa59921ad\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-dddff6458-vmwqt" Mar 20 08:50:25.560566 master-0 kubenswrapper[27820]: I0320 08:50:25.560500 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jmlf\" (UniqueName: 
\"kubernetes.io/projected/9d653bfa-7168-49fa-a838-aedb33c7e60f-kube-api-access-8jmlf\") pod \"network-node-identity-dq29v\" (UID: \"9d653bfa-7168-49fa-a838-aedb33c7e60f\") " pod="openshift-network-node-identity/network-node-identity-dq29v" Mar 20 08:50:25.562423 master-0 kubenswrapper[27820]: I0320 08:50:25.562375 27820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f92mb\" (UniqueName: \"kubernetes.io/projected/74bebf0b-6727-4959-8239-a9389e630524-kube-api-access-f92mb\") on node \"master-0\" DevicePath \"\"" Mar 20 08:50:25.576368 master-0 kubenswrapper[27820]: I0320 08:50:25.576302 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mm9l9\" (UniqueName: \"kubernetes.io/projected/4f6c819a-5074-4d29-84c8-e187528ad757-kube-api-access-mm9l9\") pod \"certified-operators-clrp2\" (UID: \"4f6c819a-5074-4d29-84c8-e187528ad757\") " pod="openshift-marketplace/certified-operators-clrp2" Mar 20 08:50:25.584633 master-0 kubenswrapper[27820]: I0320 08:50:25.584587 27820 scope.go:117] "RemoveContainer" containerID="3aae9bbf36ed2a17618b0c45a78846786d8690aa98bce9f3e1b7b0243ecc47f2" Mar 20 08:50:25.603569 master-0 kubenswrapper[27820]: I0320 08:50:25.603526 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqmzh\" (UniqueName: \"kubernetes.io/projected/fec3170d-3f3e-42f5-b20a-da53721c0dac-kube-api-access-tqmzh\") pod \"etcd-operator-8544cbcf9c-7x9vq\" (UID: \"fec3170d-3f3e-42f5-b20a-da53721c0dac\") " pod="openshift-etcd-operator/etcd-operator-8544cbcf9c-7x9vq" Mar 20 08:50:25.620997 master-0 kubenswrapper[27820]: I0320 08:50:25.620950 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vm9c\" (UniqueName: \"kubernetes.io/projected/b43744de-5bc3-4f1d-91a4-c54e2a3a7ffc-kube-api-access-4vm9c\") pod \"community-operators-chfj7\" (UID: \"b43744de-5bc3-4f1d-91a4-c54e2a3a7ffc\") " pod="openshift-marketplace/community-operators-chfj7" Mar 20 
08:50:25.633831 master-0 kubenswrapper[27820]: I0320 08:50:25.633702 27820 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="70dc762c-e7fd-4b43-a1f6-c8cb3ff16144" Mar 20 08:50:25.633831 master-0 kubenswrapper[27820]: I0320 08:50:25.633825 27820 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="70dc762c-e7fd-4b43-a1f6-c8cb3ff16144" Mar 20 08:50:25.680588 master-0 kubenswrapper[27820]: I0320 08:50:25.680554 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtt44\" (UniqueName: \"kubernetes.io/projected/123f1ecb-cc03-462b-b76f-7251bf69d3d6-kube-api-access-dtt44\") pod \"node-exporter-rzg98\" (UID: \"123f1ecb-cc03-462b-b76f-7251bf69d3d6\") " pod="openshift-monitoring/node-exporter-rzg98" Mar 20 08:50:25.680778 master-0 kubenswrapper[27820]: I0320 08:50:25.680732 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgffp\" (UniqueName: \"kubernetes.io/projected/80ddf0a4-e853-4de0-b540-81144dfdd31d-kube-api-access-pgffp\") pod \"machine-api-operator-6fbb6cf6f9-lr7tb\" (UID: \"80ddf0a4-e853-4de0-b540-81144dfdd31d\") " pod="openshift-machine-api/machine-api-operator-6fbb6cf6f9-lr7tb" Mar 20 08:50:25.680848 master-0 kubenswrapper[27820]: I0320 08:50:25.680771 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7gdm\" (UniqueName: \"kubernetes.io/projected/23003a2f-2053-47cc-8133-23eb886d4da0-kube-api-access-q7gdm\") pod \"marketplace-operator-89ccd998f-j84r8\" (UID: \"23003a2f-2053-47cc-8133-23eb886d4da0\") " pod="openshift-marketplace/marketplace-operator-89ccd998f-j84r8" Mar 20 08:50:25.699892 master-0 kubenswrapper[27820]: I0320 08:50:25.699793 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/57189f7c-5987-457d-a299-0a6b9bcb3e24-bound-sa-token\") pod \"cluster-image-registry-operator-5549dc66cb-cg8qr\" (UID: \"57189f7c-5987-457d-a299-0a6b9bcb3e24\") " pod="openshift-image-registry/cluster-image-registry-operator-5549dc66cb-cg8qr" Mar 20 08:50:25.715630 master-0 kubenswrapper[27820]: I0320 08:50:25.715557 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b67hn\" (UniqueName: \"kubernetes.io/projected/00350ac7-b40a-4459-b94c-a37d7b613645-kube-api-access-b67hn\") pod \"network-metrics-daemon-nfrth\" (UID: \"00350ac7-b40a-4459-b94c-a37d7b613645\") " pod="openshift-multus/network-metrics-daemon-nfrth" Mar 20 08:50:25.747809 master-0 kubenswrapper[27820]: I0320 08:50:25.747751 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbtnq\" (UniqueName: \"kubernetes.io/projected/41253bde-5d09-4ff0-8e7c-4a21fe2b7106-kube-api-access-dbtnq\") pod \"dns-default-gskz6\" (UID: \"41253bde-5d09-4ff0-8e7c-4a21fe2b7106\") " pod="openshift-dns/dns-default-gskz6" Mar 20 08:50:25.761037 master-0 kubenswrapper[27820]: I0320 08:50:25.760931 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v29ws\" (UniqueName: \"kubernetes.io/projected/0cb6d987-4b59-4fd9-889a-3250c12a726c-kube-api-access-v29ws\") pod \"packageserver-6f5545c99f-6sl9d\" (UID: \"0cb6d987-4b59-4fd9-889a-3250c12a726c\") " pod="openshift-operator-lifecycle-manager/packageserver-6f5545c99f-6sl9d" Mar 20 08:50:25.781548 master-0 kubenswrapper[27820]: I0320 08:50:25.781500 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55l9j\" (UniqueName: \"kubernetes.io/projected/7ab32efc-7cc5-4e36-9c1c-05efb19914e2-kube-api-access-55l9j\") pod \"olm-operator-5c9796789-t926t\" (UID: \"7ab32efc-7cc5-4e36-9c1c-05efb19914e2\") " pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-t926t" Mar 20 08:50:25.795940 master-0 
kubenswrapper[27820]: I0320 08:50:25.795899 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbzl9\" (UniqueName: \"kubernetes.io/projected/6163bd4b-dc83-4e83-8590-5ac4753bda1c-kube-api-access-zbzl9\") pod \"cluster-cloud-controller-manager-operator-7dff898856-vk98n\" (UID: \"6163bd4b-dc83-4e83-8590-5ac4753bda1c\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7dff898856-vk98n" Mar 20 08:50:25.816347 master-0 kubenswrapper[27820]: I0320 08:50:25.816291 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxqp4\" (UniqueName: \"kubernetes.io/projected/ca56e37d-80ea-432b-a6d9-f4e904a40e10-kube-api-access-jxqp4\") pod \"apiserver-64b65cddf5-gx7h7\" (UID: \"ca56e37d-80ea-432b-a6d9-f4e904a40e10\") " pod="openshift-apiserver/apiserver-64b65cddf5-gx7h7" Mar 20 08:50:25.836227 master-0 kubenswrapper[27820]: I0320 08:50:25.836176 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2faf85a2-29bb-4275-a12b-0ef1663a4f0d-kube-api-access\") pod \"kube-apiserver-operator-8b68b9d9b-xwkzx\" (UID: \"2faf85a2-29bb-4275-a12b-0ef1663a4f0d\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-8b68b9d9b-xwkzx" Mar 20 08:50:25.856737 master-0 kubenswrapper[27820]: I0320 08:50:25.856678 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pw6sv\" (UniqueName: \"kubernetes.io/projected/e89571b2-098c-495b-9b53-c4ebd95296ab-kube-api-access-pw6sv\") pod \"router-default-7dcf5569b5-kvmtp\" (UID: \"e89571b2-098c-495b-9b53-c4ebd95296ab\") " pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" Mar 20 08:50:25.878282 master-0 kubenswrapper[27820]: I0320 08:50:25.878220 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zf6h\" (UniqueName: 
\"kubernetes.io/projected/a88b1c81-02b5-4c85-9660-5f84c900a946-kube-api-access-5zf6h\") pod \"multus-admission-controller-58c9f8fc64-kr9hd\" (UID: \"a88b1c81-02b5-4c85-9660-5f84c900a946\") " pod="openshift-multus/multus-admission-controller-58c9f8fc64-kr9hd" Mar 20 08:50:25.898050 master-0 kubenswrapper[27820]: I0320 08:50:25.897972 27820 scope.go:117] "RemoveContainer" containerID="d70605680e08d7f319125bde3eeb41c693b146e24b422d7776788ac3b348829c" Mar 20 08:50:25.898939 master-0 kubenswrapper[27820]: I0320 08:50:25.898767 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkh2f\" (UniqueName: \"kubernetes.io/projected/04466971-127b-403e-af45-dad97b6e0c87-kube-api-access-wkh2f\") pod \"metrics-server-55d84d7794-56n4c\" (UID: \"04466971-127b-403e-af45-dad97b6e0c87\") " pod="openshift-monitoring/metrics-server-55d84d7794-56n4c" Mar 20 08:50:25.919702 master-0 kubenswrapper[27820]: I0320 08:50:25.919595 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92pwh\" (UniqueName: \"kubernetes.io/projected/240ba61a-e439-4f94-b9b3-7903b9b1bc05-kube-api-access-92pwh\") pod \"route-controller-manager-7d86cb9b59-smbxv\" (UID: \"240ba61a-e439-4f94-b9b3-7903b9b1bc05\") " pod="openshift-route-controller-manager/route-controller-manager-7d86cb9b59-smbxv" Mar 20 08:50:25.950026 master-0 kubenswrapper[27820]: I0320 08:50:25.948921 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpksq\" (UniqueName: \"kubernetes.io/projected/41ac891d-b41d-43c4-be46-35f39671477a-kube-api-access-zpksq\") pod \"controller-manager-bc85986b9-8p79x\" (UID: \"41ac891d-b41d-43c4-be46-35f39671477a\") " pod="openshift-controller-manager/controller-manager-bc85986b9-8p79x" Mar 20 08:50:25.975573 master-0 kubenswrapper[27820]: I0320 08:50:25.975525 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfgfz\" (UniqueName: 
\"kubernetes.io/projected/d5b579a3-74b5-4cd2-ae99-5c1d1480c2f0-kube-api-access-tfgfz\") pod \"openshift-state-metrics-5dc6c74576-qclrg\" (UID: \"d5b579a3-74b5-4cd2-ae99-5c1d1480c2f0\") " pod="openshift-monitoring/openshift-state-metrics-5dc6c74576-qclrg" Mar 20 08:50:25.977137 master-0 kubenswrapper[27820]: E0320 08:50:25.977031 27820 projected.go:288] Couldn't get configMap openshift-kube-controller-manager/kube-root-ca.crt: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Mar 20 08:50:25.977137 master-0 kubenswrapper[27820]: E0320 08:50:25.977055 27820 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager/installer-3-retry-1-master-0: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Mar 20 08:50:25.977137 master-0 kubenswrapper[27820]: E0320 08:50:25.977119 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/75cef5aa-93e6-4b8b-9ab1-06809e85883a-kube-api-access podName:75cef5aa-93e6-4b8b-9ab1-06809e85883a nodeName:}" failed. No retries permitted until 2026-03-20 08:50:26.477104622 +0000 UTC m=+36.572313766 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/75cef5aa-93e6-4b8b-9ab1-06809e85883a-kube-api-access") pod "installer-3-retry-1-master-0" (UID: "75cef5aa-93e6-4b8b-9ab1-06809e85883a") : object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Mar 20 08:50:25.998096 master-0 kubenswrapper[27820]: E0320 08:50:25.998042 27820 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 20 08:50:25.998096 master-0 kubenswrapper[27820]: E0320 08:50:25.998082 27820 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 20 08:50:25.998301 master-0 kubenswrapper[27820]: E0320 08:50:25.998137 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9775cc27-53b9-4d21-a98b-84b39ada32ee-kube-api-access podName:9775cc27-53b9-4d21-a98b-84b39ada32ee nodeName:}" failed. No retries permitted until 2026-03-20 08:50:26.498115547 +0000 UTC m=+36.593324691 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/9775cc27-53b9-4d21-a98b-84b39ada32ee-kube-api-access") pod "installer-3-master-0" (UID: "9775cc27-53b9-4d21-a98b-84b39ada32ee") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 20 08:50:26.028556 master-0 kubenswrapper[27820]: E0320 08:50:26.027504 27820 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.953s" Mar 20 08:50:26.028556 master-0 kubenswrapper[27820]: I0320 08:50:26.027560 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 20 08:50:26.028556 master-0 kubenswrapper[27820]: I0320 08:50:26.027579 27820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hj5tl" Mar 20 08:50:26.041715 master-0 kubenswrapper[27820]: I0320 08:50:26.041666 27820 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="" Mar 20 08:50:26.050356 master-0 kubenswrapper[27820]: I0320 08:50:26.049645 27820 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-etcd/etcd-master-0" Mar 20 08:50:26.080469 master-0 kubenswrapper[27820]: I0320 08:50:26.080418 27820 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 20 08:50:26.086753 master-0 kubenswrapper[27820]: I0320 08:50:26.086705 27820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-5595498c49-hrfrr" Mar 20 08:50:26.086753 master-0 kubenswrapper[27820]: I0320 08:50:26.086750 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" 
event={"ID":"b45ea2ef1cf2bc9d1d994d6538ae0a64","Type":"ContainerStarted","Data":"8aa7f076c78c158fb353870ba03c7a822914a599c1508dedb9462be2e93e60ca"} Mar 20 08:50:26.086990 master-0 kubenswrapper[27820]: I0320 08:50:26.086771 27820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bt7wn" Mar 20 08:50:26.086990 master-0 kubenswrapper[27820]: I0320 08:50:26.086787 27820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-clrp2" Mar 20 08:50:26.086990 master-0 kubenswrapper[27820]: I0320 08:50:26.086797 27820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"] Mar 20 08:50:26.086990 master-0 kubenswrapper[27820]: I0320 08:50:26.086808 27820 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" mirrorPodUID="74e882e7-7513-46fa-a2e4-567779c5e860" Mar 20 08:50:26.094880 master-0 kubenswrapper[27820]: I0320 08:50:26.094736 27820 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="" Mar 20 08:50:26.174680 master-0 kubenswrapper[27820]: I0320 08:50:26.174634 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-kh8bg" Mar 20 08:50:26.174899 master-0 kubenswrapper[27820]: I0320 08:50:26.174699 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-5dbbb8b86f-vhrdf" event={"ID":"74bebf0b-6727-4959-8239-a9389e630524","Type":"ContainerDied","Data":"b1a6bfe0069db4370471806f444b8cbb38ac33f0aab60a3239aafba8901aaf7e"} Mar 20 08:50:26.174899 master-0 kubenswrapper[27820]: I0320 08:50:26.174740 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-monitoring/prometheus-operator-admission-webhook-69c6b55594-kh8bg" Mar 20 08:50:26.174899 master-0 kubenswrapper[27820]: I0320 08:50:26.174755 27820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"] Mar 20 08:50:26.174899 master-0 kubenswrapper[27820]: I0320 08:50:26.174767 27820 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" mirrorPodUID="74e882e7-7513-46fa-a2e4-567779c5e860" Mar 20 08:50:26.174899 master-0 kubenswrapper[27820]: I0320 08:50:26.174843 27820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-64b65cddf5-gx7h7" Mar 20 08:50:26.174899 master-0 kubenswrapper[27820]: I0320 08:50:26.174859 27820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/etcd-master-0"] Mar 20 08:50:26.174899 master-0 kubenswrapper[27820]: I0320 08:50:26.174872 27820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/etcd-master-0"] Mar 20 08:50:26.175136 master-0 kubenswrapper[27820]: I0320 08:50:26.174937 27820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 20 08:50:26.175136 master-0 kubenswrapper[27820]: I0320 08:50:26.174947 27820 scope.go:117] "RemoveContainer" containerID="c75547816c7beb0588174159cdcc45e5aaa905924c1e2a6b0d4ab73f71bb71c9" Mar 20 08:50:26.175136 master-0 kubenswrapper[27820]: I0320 08:50:26.175112 27820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 20 08:50:26.175136 master-0 kubenswrapper[27820]: I0320 08:50:26.175131 27820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" Mar 20 08:50:26.175325 master-0 kubenswrapper[27820]: I0320 08:50:26.175144 27820 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"36f4a012744c6465102d09cc67ac63e6","Type":"ContainerStarted","Data":"7d004c7866a0d2d626abc09d219b312fec3c6430f2d64295191492675914aa50"} Mar 20 08:50:26.175325 master-0 kubenswrapper[27820]: I0320 08:50:26.175160 27820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0"] Mar 20 08:50:26.175325 master-0 kubenswrapper[27820]: I0320 08:50:26.175201 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-rnnfz" Mar 20 08:50:26.175325 master-0 kubenswrapper[27820]: I0320 08:50:26.175215 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"36f4a012744c6465102d09cc67ac63e6","Type":"ContainerStarted","Data":"84a924d537370693d461e98a1a2e4f9eb016680a1dfc186ea4daea13912e27ad"} Mar 20 08:50:26.175325 master-0 kubenswrapper[27820]: I0320 08:50:26.175295 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-tf2gj" Mar 20 08:50:26.175325 master-0 kubenswrapper[27820]: I0320 08:50:26.175311 27820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 20 08:50:26.175325 master-0 kubenswrapper[27820]: I0320 08:50:26.175329 27820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" Mar 20 08:50:26.175510 master-0 kubenswrapper[27820]: I0320 08:50:26.175347 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hj5tl" Mar 20 08:50:26.175510 master-0 kubenswrapper[27820]: I0320 08:50:26.175358 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"36f4a012744c6465102d09cc67ac63e6","Type":"ContainerStarted","Data":"14296de365c2f4bb39d5e8cbebd68adb12075d499115c76e96d178fea32c5923"} Mar 20 08:50:26.177068 master-0 kubenswrapper[27820]: I0320 08:50:26.177030 27820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-chfj7" Mar 20 08:50:26.177144 master-0 kubenswrapper[27820]: I0320 08:50:26.177132 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-6864dc98f7-tf2gj" Mar 20 08:50:26.177177 master-0 kubenswrapper[27820]: I0320 08:50:26.177149 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"36f4a012744c6465102d09cc67ac63e6","Type":"ContainerStarted","Data":"f32e2c8224cd03fad7bde129086e67d6d1c4925ed398cbb871ceb3d8e790c7dc"} Mar 20 08:50:26.177210 master-0 kubenswrapper[27820]: I0320 08:50:26.177187 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-57777556ff-rnnfz" Mar 20 08:50:26.177255 master-0 kubenswrapper[27820]: I0320 08:50:26.177215 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hj5tl" Mar 20 08:50:26.177255 master-0 kubenswrapper[27820]: I0320 08:50:26.177225 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"36f4a012744c6465102d09cc67ac63e6","Type":"ContainerStarted","Data":"6290d119083ea809be21ab579813fa286464b690250eaa07fa0794bcdde38d59"} Mar 20 08:50:26.177340 master-0 kubenswrapper[27820]: I0320 08:50:26.177288 27820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-monitoring/metrics-server-55d84d7794-56n4c" Mar 20 08:50:26.177340 master-0 kubenswrapper[27820]: I0320 08:50:26.177322 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hdw98" Mar 20 08:50:26.177399 master-0 kubenswrapper[27820]: I0320 08:50:26.177342 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68f85b4d6c-hdw98" Mar 20 08:50:26.177399 master-0 kubenswrapper[27820]: I0320 08:50:26.177352 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-5595498c49-hrfrr" Mar 20 08:50:26.177399 master-0 kubenswrapper[27820]: I0320 08:50:26.177364 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bt7wn" Mar 20 08:50:26.177399 master-0 kubenswrapper[27820]: I0320 08:50:26.177384 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-5595498c49-hrfrr" Mar 20 08:50:26.177516 master-0 kubenswrapper[27820]: I0320 08:50:26.177407 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bt7wn" Mar 20 08:50:26.177516 master-0 kubenswrapper[27820]: I0320 08:50:26.177426 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-j9jjm" Mar 20 08:50:26.177516 master-0 kubenswrapper[27820]: I0320 08:50:26.177442 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-j9jjm" Mar 20 08:50:26.177516 master-0 kubenswrapper[27820]: I0320 08:50:26.177461 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-cgc9q" Mar 20 08:50:26.177516 
master-0 kubenswrapper[27820]: I0320 08:50:26.177483 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-25jrp" Mar 20 08:50:26.177516 master-0 kubenswrapper[27820]: I0320 08:50:26.177501 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-7b95f86987-cgc9q" Mar 20 08:50:26.177516 master-0 kubenswrapper[27820]: I0320 08:50:26.177512 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:50:26.177699 master-0 kubenswrapper[27820]: I0320 08:50:26.177522 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:50:26.177699 master-0 kubenswrapper[27820]: I0320 08:50:26.177530 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:50:26.177699 master-0 kubenswrapper[27820]: I0320 08:50:26.177546 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-95bf4f4d-25jrp" Mar 20 08:50:26.177699 master-0 kubenswrapper[27820]: I0320 08:50:26.177557 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-clrp2" Mar 20 08:50:26.177699 master-0 kubenswrapper[27820]: I0320 08:50:26.177575 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-89ccd998f-j84r8" Mar 20 08:50:26.177699 master-0 kubenswrapper[27820]: I0320 08:50:26.177584 27820 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" Mar 20 08:50:26.177699 master-0 kubenswrapper[27820]: I0320 08:50:26.177604 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-t926t" Mar 20 08:50:26.177699 master-0 kubenswrapper[27820]: I0320 08:50:26.177670 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-64b65cddf5-gx7h7" Mar 20 08:50:26.177699 master-0 kubenswrapper[27820]: I0320 08:50:26.177694 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-gskz6" Mar 20 08:50:26.177938 master-0 kubenswrapper[27820]: I0320 08:50:26.177704 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" Mar 20 08:50:26.177938 master-0 kubenswrapper[27820]: I0320 08:50:26.177724 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-6f5545c99f-6sl9d" Mar 20 08:50:26.177938 master-0 kubenswrapper[27820]: I0320 08:50:26.177746 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-chfj7" Mar 20 08:50:26.177938 master-0 kubenswrapper[27820]: I0320 08:50:26.177762 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-55d84d7794-56n4c" Mar 20 08:50:26.177938 master-0 kubenswrapper[27820]: I0320 08:50:26.177781 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-bc85986b9-8p79x" Mar 20 08:50:26.177938 master-0 kubenswrapper[27820]: I0320 08:50:26.177799 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7d86cb9b59-smbxv" Mar 20 08:50:26.177938 master-0 kubenswrapper[27820]: I0320 08:50:26.177836 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:50:26.181913 master-0 kubenswrapper[27820]: I0320 
08:50:26.181065 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-gskz6" Mar 20 08:50:26.182194 master-0 kubenswrapper[27820]: I0320 08:50:26.182162 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7d86cb9b59-smbxv" Mar 20 08:50:26.184768 master-0 kubenswrapper[27820]: I0320 08:50:26.184710 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5c9796789-t926t" Mar 20 08:50:26.185954 master-0 kubenswrapper[27820]: I0320 08:50:26.185919 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-bc85986b9-8p79x" Mar 20 08:50:26.186584 master-0 kubenswrapper[27820]: I0320 08:50:26.186565 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-55d84d7794-56n4c" Mar 20 08:50:26.186831 master-0 kubenswrapper[27820]: I0320 08:50:26.186799 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-64b65cddf5-gx7h7" Mar 20 08:50:26.205605 master-0 kubenswrapper[27820]: I0320 08:50:26.198908 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-6f5545c99f-6sl9d" Mar 20 08:50:26.205605 master-0 kubenswrapper[27820]: I0320 08:50:26.201076 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-89ccd998f-j84r8" Mar 20 08:50:26.205605 master-0 kubenswrapper[27820]: I0320 08:50:26.203806 27820 scope.go:117] "RemoveContainer" containerID="46a769eaa885d6f2aee7986a052f5cb914f5503a0051214e8b4e113fe0f1651a" Mar 20 08:50:26.219769 master-0 kubenswrapper[27820]: I0320 08:50:26.219719 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:50:26.219947 master-0 kubenswrapper[27820]: I0320 08:50:26.219830 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-bvndl" Mar 20 08:50:26.234990 master-0 kubenswrapper[27820]: I0320 08:50:26.233627 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-chfj7" Mar 20 08:50:26.258303 master-0 kubenswrapper[27820]: I0320 08:50:26.254674 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-clrp2" Mar 20 08:50:26.481348 master-0 kubenswrapper[27820]: I0320 08:50:26.481283 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/75cef5aa-93e6-4b8b-9ab1-06809e85883a-kube-api-access\") pod \"installer-3-retry-1-master-0\" (UID: \"75cef5aa-93e6-4b8b-9ab1-06809e85883a\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Mar 20 08:50:26.481857 master-0 kubenswrapper[27820]: E0320 08:50:26.481507 27820 projected.go:288] Couldn't get configMap openshift-kube-controller-manager/kube-root-ca.crt: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Mar 20 08:50:26.481857 master-0 kubenswrapper[27820]: E0320 08:50:26.481556 27820 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager/installer-3-retry-1-master-0: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Mar 20 08:50:26.481857 master-0 kubenswrapper[27820]: E0320 08:50:26.481621 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/75cef5aa-93e6-4b8b-9ab1-06809e85883a-kube-api-access podName:75cef5aa-93e6-4b8b-9ab1-06809e85883a nodeName:}" failed. 
No retries permitted until 2026-03-20 08:50:27.48160208 +0000 UTC m=+37.576811224 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/75cef5aa-93e6-4b8b-9ab1-06809e85883a-kube-api-access") pod "installer-3-retry-1-master-0" (UID: "75cef5aa-93e6-4b8b-9ab1-06809e85883a") : object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Mar 20 08:50:26.582254 master-0 kubenswrapper[27820]: I0320 08:50:26.582174 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9775cc27-53b9-4d21-a98b-84b39ada32ee-kube-api-access\") pod \"installer-3-master-0\" (UID: \"9775cc27-53b9-4d21-a98b-84b39ada32ee\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 20 08:50:26.582560 master-0 kubenswrapper[27820]: E0320 08:50:26.582379 27820 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 20 08:50:26.582560 master-0 kubenswrapper[27820]: E0320 08:50:26.582400 27820 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 20 08:50:26.582560 master-0 kubenswrapper[27820]: E0320 08:50:26.582444 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9775cc27-53b9-4d21-a98b-84b39ada32ee-kube-api-access podName:9775cc27-53b9-4d21-a98b-84b39ada32ee nodeName:}" failed. No retries permitted until 2026-03-20 08:50:27.582430625 +0000 UTC m=+37.677639769 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/9775cc27-53b9-4d21-a98b-84b39ada32ee-kube-api-access") pod "installer-3-master-0" (UID: "9775cc27-53b9-4d21-a98b-84b39ada32ee") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 20 08:50:26.628979 master-0 kubenswrapper[27820]: I0320 08:50:26.628932 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-dknxr_22f85e98-eb36-46b2-ab5d-7c21e060cba5/ingress-operator/4.log" Mar 20 08:50:26.629527 master-0 kubenswrapper[27820]: I0320 08:50:26.629459 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-66b84d69b-dknxr" event={"ID":"22f85e98-eb36-46b2-ab5d-7c21e060cba5","Type":"ContainerStarted","Data":"cdefc8a172432d8fa0f3c7d167392f08439203ced12700667eb77840f7e3ad8f"} Mar 20 08:50:26.633290 master-0 kubenswrapper[27820]: I0320 08:50:26.633096 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" event={"ID":"e89571b2-098c-495b-9b53-c4ebd95296ab","Type":"ContainerStarted","Data":"a72cd3e37d68c7cb5b0bfb33d021cc1cbaecda7c5dc814e76bc18a8f265e93c6"} Mar 20 08:50:26.637994 master-0 kubenswrapper[27820]: I0320 08:50:26.637943 27820 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="c17bc281-675f-4a69-ba8a-a2476e95c8c8" Mar 20 08:50:26.637994 master-0 kubenswrapper[27820]: I0320 08:50:26.637975 27820 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="c17bc281-675f-4a69-ba8a-a2476e95c8c8" Mar 20 08:50:26.755771 master-0 kubenswrapper[27820]: I0320 08:50:26.755652 27820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 20 08:50:26.898525 master-0 kubenswrapper[27820]: I0320 08:50:26.898467 27820 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" Mar 20 08:50:26.903089 master-0 kubenswrapper[27820]: I0320 08:50:26.902966 27820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" Mar 20 08:50:27.276235 master-0 kubenswrapper[27820]: I0320 08:50:27.276139 27820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podStartSLOduration=6.276110076 podStartE2EDuration="6.276110076s" podCreationTimestamp="2026-03-20 08:50:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:50:27.274485262 +0000 UTC m=+37.369694436" watchObservedRunningTime="2026-03-20 08:50:27.276110076 +0000 UTC m=+37.371319260" Mar 20 08:50:27.501825 master-0 kubenswrapper[27820]: I0320 08:50:27.501708 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/75cef5aa-93e6-4b8b-9ab1-06809e85883a-kube-api-access\") pod \"installer-3-retry-1-master-0\" (UID: \"75cef5aa-93e6-4b8b-9ab1-06809e85883a\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Mar 20 08:50:27.502717 master-0 kubenswrapper[27820]: E0320 08:50:27.501952 27820 projected.go:288] Couldn't get configMap openshift-kube-controller-manager/kube-root-ca.crt: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Mar 20 08:50:27.502717 master-0 kubenswrapper[27820]: E0320 08:50:27.502008 27820 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager/installer-3-retry-1-master-0: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Mar 20 08:50:27.502717 master-0 kubenswrapper[27820]: E0320 08:50:27.502100 27820 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/75cef5aa-93e6-4b8b-9ab1-06809e85883a-kube-api-access podName:75cef5aa-93e6-4b8b-9ab1-06809e85883a nodeName:}" failed. No retries permitted until 2026-03-20 08:50:29.502072584 +0000 UTC m=+39.597281768 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/75cef5aa-93e6-4b8b-9ab1-06809e85883a-kube-api-access") pod "installer-3-retry-1-master-0" (UID: "75cef5aa-93e6-4b8b-9ab1-06809e85883a") : object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Mar 20 08:50:27.602854 master-0 kubenswrapper[27820]: I0320 08:50:27.602806 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9775cc27-53b9-4d21-a98b-84b39ada32ee-kube-api-access\") pod \"installer-3-master-0\" (UID: \"9775cc27-53b9-4d21-a98b-84b39ada32ee\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 20 08:50:27.603251 master-0 kubenswrapper[27820]: E0320 08:50:27.603220 27820 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 20 08:50:27.603368 master-0 kubenswrapper[27820]: E0320 08:50:27.603354 27820 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 20 08:50:27.603487 master-0 kubenswrapper[27820]: E0320 08:50:27.603475 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9775cc27-53b9-4d21-a98b-84b39ada32ee-kube-api-access podName:9775cc27-53b9-4d21-a98b-84b39ada32ee nodeName:}" failed. No retries permitted until 2026-03-20 08:50:29.603457735 +0000 UTC m=+39.698666879 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/9775cc27-53b9-4d21-a98b-84b39ada32ee-kube-api-access") pod "installer-3-master-0" (UID: "9775cc27-53b9-4d21-a98b-84b39ada32ee") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 20 08:50:27.644075 master-0 kubenswrapper[27820]: I0320 08:50:27.644003 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" Mar 20 08:50:27.644284 master-0 kubenswrapper[27820]: I0320 08:50:27.644249 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-7dcf5569b5-kvmtp" Mar 20 08:50:27.956219 master-0 kubenswrapper[27820]: I0320 08:50:27.956089 27820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-xj8x6"] Mar 20 08:50:27.965868 master-0 kubenswrapper[27820]: I0320 08:50:27.965810 27820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-xj8x6"] Mar 20 08:50:28.092792 master-0 kubenswrapper[27820]: I0320 08:50:28.092452 27820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45b3c788-eb83-448a-bc60-90b8ace28382" path="/var/lib/kubelet/pods/45b3c788-eb83-448a-bc60-90b8ace28382/volumes" Mar 20 08:50:28.221575 master-0 kubenswrapper[27820]: I0320 08:50:28.220879 27820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0" podStartSLOduration=2.220842004 podStartE2EDuration="2.220842004s" podCreationTimestamp="2026-03-20 08:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:50:28.216163903 +0000 UTC m=+38.311373087" watchObservedRunningTime="2026-03-20 08:50:28.220842004 +0000 UTC m=+38.316051168" Mar 20 08:50:28.420758 master-0 kubenswrapper[27820]: I0320 08:50:28.420658 27820 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-0" podStartSLOduration=7.420638373 podStartE2EDuration="7.420638373s" podCreationTimestamp="2026-03-20 08:50:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:50:28.419684107 +0000 UTC m=+38.514893261" watchObservedRunningTime="2026-03-20 08:50:28.420638373 +0000 UTC m=+38.515847517" Mar 20 08:50:29.259234 master-0 kubenswrapper[27820]: I0320 08:50:29.259152 27820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=3.2591369439999998 podStartE2EDuration="3.259136944s" podCreationTimestamp="2026-03-20 08:50:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:50:29.25717561 +0000 UTC m=+39.352384784" watchObservedRunningTime="2026-03-20 08:50:29.259136944 +0000 UTC m=+39.354346088" Mar 20 08:50:29.452499 master-0 kubenswrapper[27820]: I0320 08:50:29.452436 27820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-5dbbb8b86f-vhrdf"] Mar 20 08:50:29.456889 master-0 kubenswrapper[27820]: I0320 08:50:29.456836 27820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/multus-admission-controller-5dbbb8b86f-vhrdf"] Mar 20 08:50:29.534590 master-0 kubenswrapper[27820]: I0320 08:50:29.534413 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/75cef5aa-93e6-4b8b-9ab1-06809e85883a-kube-api-access\") pod \"installer-3-retry-1-master-0\" (UID: \"75cef5aa-93e6-4b8b-9ab1-06809e85883a\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Mar 20 08:50:29.534799 master-0 
kubenswrapper[27820]: E0320 08:50:29.534702 27820 projected.go:288] Couldn't get configMap openshift-kube-controller-manager/kube-root-ca.crt: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Mar 20 08:50:29.534799 master-0 kubenswrapper[27820]: E0320 08:50:29.534750 27820 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager/installer-3-retry-1-master-0: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Mar 20 08:50:29.534965 master-0 kubenswrapper[27820]: E0320 08:50:29.534836 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/75cef5aa-93e6-4b8b-9ab1-06809e85883a-kube-api-access podName:75cef5aa-93e6-4b8b-9ab1-06809e85883a nodeName:}" failed. No retries permitted until 2026-03-20 08:50:33.534808715 +0000 UTC m=+43.630017889 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/75cef5aa-93e6-4b8b-9ab1-06809e85883a-kube-api-access") pod "installer-3-retry-1-master-0" (UID: "75cef5aa-93e6-4b8b-9ab1-06809e85883a") : object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Mar 20 08:50:29.636204 master-0 kubenswrapper[27820]: I0320 08:50:29.636118 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9775cc27-53b9-4d21-a98b-84b39ada32ee-kube-api-access\") pod \"installer-3-master-0\" (UID: \"9775cc27-53b9-4d21-a98b-84b39ada32ee\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 20 08:50:29.636460 master-0 kubenswrapper[27820]: E0320 08:50:29.636414 27820 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 20 08:50:29.636460 master-0 kubenswrapper[27820]: E0320 08:50:29.636455 27820 projected.go:194] Error preparing data for 
projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 20 08:50:29.636541 master-0 kubenswrapper[27820]: E0320 08:50:29.636523 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9775cc27-53b9-4d21-a98b-84b39ada32ee-kube-api-access podName:9775cc27-53b9-4d21-a98b-84b39ada32ee nodeName:}" failed. No retries permitted until 2026-03-20 08:50:33.636503544 +0000 UTC m=+43.731712698 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/9775cc27-53b9-4d21-a98b-84b39ada32ee-kube-api-access") pod "installer-3-master-0" (UID: "9775cc27-53b9-4d21-a98b-84b39ada32ee") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 20 08:50:30.007556 master-0 kubenswrapper[27820]: I0320 08:50:30.007470 27820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-5595498c49-hrfrr" Mar 20 08:50:30.091835 master-0 kubenswrapper[27820]: I0320 08:50:30.091720 27820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74bebf0b-6727-4959-8239-a9389e630524" path="/var/lib/kubelet/pods/74bebf0b-6727-4959-8239-a9389e630524/volumes" Mar 20 08:50:30.905919 master-0 kubenswrapper[27820]: I0320 08:50:30.905836 27820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-64b65cddf5-gx7h7" Mar 20 08:50:31.756188 master-0 kubenswrapper[27820]: I0320 08:50:31.756135 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 20 08:50:31.832620 master-0 kubenswrapper[27820]: I0320 08:50:31.832571 27820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0" Mar 20 08:50:33.593800 master-0 kubenswrapper[27820]: I0320 
08:50:33.593745 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/75cef5aa-93e6-4b8b-9ab1-06809e85883a-kube-api-access\") pod \"installer-3-retry-1-master-0\" (UID: \"75cef5aa-93e6-4b8b-9ab1-06809e85883a\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Mar 20 08:50:33.594495 master-0 kubenswrapper[27820]: E0320 08:50:33.593932 27820 projected.go:288] Couldn't get configMap openshift-kube-controller-manager/kube-root-ca.crt: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Mar 20 08:50:33.594574 master-0 kubenswrapper[27820]: E0320 08:50:33.594501 27820 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager/installer-3-retry-1-master-0: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Mar 20 08:50:33.594622 master-0 kubenswrapper[27820]: E0320 08:50:33.594572 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/75cef5aa-93e6-4b8b-9ab1-06809e85883a-kube-api-access podName:75cef5aa-93e6-4b8b-9ab1-06809e85883a nodeName:}" failed. No retries permitted until 2026-03-20 08:50:41.594550168 +0000 UTC m=+51.689759312 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/75cef5aa-93e6-4b8b-9ab1-06809e85883a-kube-api-access") pod "installer-3-retry-1-master-0" (UID: "75cef5aa-93e6-4b8b-9ab1-06809e85883a") : object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered
Mar 20 08:50:33.696031 master-0 kubenswrapper[27820]: I0320 08:50:33.695940 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9775cc27-53b9-4d21-a98b-84b39ada32ee-kube-api-access\") pod \"installer-3-master-0\" (UID: \"9775cc27-53b9-4d21-a98b-84b39ada32ee\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 20 08:50:33.696323 master-0 kubenswrapper[27820]: E0320 08:50:33.696289 27820 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 20 08:50:33.696396 master-0 kubenswrapper[27820]: E0320 08:50:33.696327 27820 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 20 08:50:33.696455 master-0 kubenswrapper[27820]: E0320 08:50:33.696410 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9775cc27-53b9-4d21-a98b-84b39ada32ee-kube-api-access podName:9775cc27-53b9-4d21-a98b-84b39ada32ee nodeName:}" failed. No retries permitted until 2026-03-20 08:50:41.696385772 +0000 UTC m=+51.791594956 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/9775cc27-53b9-4d21-a98b-84b39ada32ee-kube-api-access") pod "installer-3-master-0" (UID: "9775cc27-53b9-4d21-a98b-84b39ada32ee") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 20 08:50:34.473857 master-0 kubenswrapper[27820]: I0320 08:50:34.473776 27820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 20 08:50:34.473857 master-0 kubenswrapper[27820]: I0320 08:50:34.473867 27820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 20 08:50:34.474219 master-0 kubenswrapper[27820]: I0320 08:50:34.473888 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 20 08:50:34.474219 master-0 kubenswrapper[27820]: I0320 08:50:34.473909 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 20 08:50:34.479316 master-0 kubenswrapper[27820]: I0320 08:50:34.478974 27820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 20 08:50:34.481429 master-0 kubenswrapper[27820]: I0320 08:50:34.481374 27820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 20 08:50:34.691217 master-0 kubenswrapper[27820]: I0320 08:50:34.691166 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 20 08:50:34.692895 master-0 kubenswrapper[27820]: I0320 08:50:34.692865 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 20 08:50:34.750755 master-0 kubenswrapper[27820]: I0320 08:50:34.750647 27820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hj5tl"
Mar 20 08:50:35.052984 master-0 kubenswrapper[27820]: I0320 08:50:35.052892 27820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bt7wn"
Mar 20 08:50:35.663485 master-0 kubenswrapper[27820]: I0320 08:50:35.663411 27820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-clrp2"
Mar 20 08:50:35.957294 master-0 kubenswrapper[27820]: I0320 08:50:35.957139 27820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-chfj7"
Mar 20 08:50:41.042025 master-0 kubenswrapper[27820]: I0320 08:50:41.037118 27820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-77688d7687-k82zc"]
Mar 20 08:50:41.042025 master-0 kubenswrapper[27820]: E0320 08:50:41.037408 27820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9775cc27-53b9-4d21-a98b-84b39ada32ee" containerName="installer"
Mar 20 08:50:41.042025 master-0 kubenswrapper[27820]: I0320 08:50:41.037420 27820 state_mem.go:107] "Deleted CPUSet assignment" podUID="9775cc27-53b9-4d21-a98b-84b39ada32ee" containerName="installer"
Mar 20 08:50:41.042025 master-0 kubenswrapper[27820]: E0320 08:50:41.037431 27820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5cdd5ac8-4c2e-4680-b697-0e5d94136fe4" containerName="installer"
Mar 20 08:50:41.042025 master-0 kubenswrapper[27820]: I0320 08:50:41.037437 27820 state_mem.go:107] "Deleted CPUSet assignment" podUID="5cdd5ac8-4c2e-4680-b697-0e5d94136fe4" containerName="installer"
Mar 20 08:50:41.042025 master-0 kubenswrapper[27820]: E0320 08:50:41.037451 27820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45b3c788-eb83-448a-bc60-90b8ace28382" containerName="kube-multus-additional-cni-plugins"
Mar 20 08:50:41.042025 master-0 kubenswrapper[27820]: I0320 08:50:41.037457 27820 state_mem.go:107] "Deleted CPUSet assignment" podUID="45b3c788-eb83-448a-bc60-90b8ace28382" containerName="kube-multus-additional-cni-plugins"
Mar 20 08:50:41.042025 master-0 kubenswrapper[27820]: E0320 08:50:41.037465 27820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84b1b51a-cbfa-42de-9fb8-315e9cb76b58" containerName="installer"
Mar 20 08:50:41.042025 master-0 kubenswrapper[27820]: I0320 08:50:41.037471 27820 state_mem.go:107] "Deleted CPUSet assignment" podUID="84b1b51a-cbfa-42de-9fb8-315e9cb76b58" containerName="installer"
Mar 20 08:50:41.042025 master-0 kubenswrapper[27820]: E0320 08:50:41.037484 27820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ea52b89-46f9-4685-aecd-162ba92baaf5" containerName="installer"
Mar 20 08:50:41.042025 master-0 kubenswrapper[27820]: I0320 08:50:41.037490 27820 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ea52b89-46f9-4685-aecd-162ba92baaf5" containerName="installer"
Mar 20 08:50:41.042025 master-0 kubenswrapper[27820]: E0320 08:50:41.037498 27820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75cef5aa-93e6-4b8b-9ab1-06809e85883a" containerName="installer"
Mar 20 08:50:41.042025 master-0 kubenswrapper[27820]: I0320 08:50:41.037503 27820 state_mem.go:107] "Deleted CPUSet assignment" podUID="75cef5aa-93e6-4b8b-9ab1-06809e85883a" containerName="installer"
Mar 20 08:50:41.042025 master-0 kubenswrapper[27820]: E0320 08:50:41.037513 27820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="521086da-d513-4475-8db5-098ab9838df1" containerName="installer"
Mar 20 08:50:41.042025 master-0 kubenswrapper[27820]: I0320 08:50:41.037519 27820 state_mem.go:107] "Deleted CPUSet assignment" podUID="521086da-d513-4475-8db5-098ab9838df1" containerName="installer"
Mar 20 08:50:41.042025 master-0 kubenswrapper[27820]: E0320 08:50:41.037530 27820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="169353ee-c927-4483-8976-b9ca08b0a6d1" containerName="installer"
Mar 20 08:50:41.042025 master-0 kubenswrapper[27820]: I0320 08:50:41.037538 27820 state_mem.go:107] "Deleted CPUSet assignment" podUID="169353ee-c927-4483-8976-b9ca08b0a6d1" containerName="installer"
Mar 20 08:50:41.042025 master-0 kubenswrapper[27820]: E0320 08:50:41.037557 27820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c83737980b9ee109184b1d78e942cf36" containerName="kube-scheduler"
Mar 20 08:50:41.042025 master-0 kubenswrapper[27820]: I0320 08:50:41.037565 27820 state_mem.go:107] "Deleted CPUSet assignment" podUID="c83737980b9ee109184b1d78e942cf36" containerName="kube-scheduler"
Mar 20 08:50:41.042025 master-0 kubenswrapper[27820]: E0320 08:50:41.037574 27820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92600726-933f-41eb-a329-1fcc68dc95c1" containerName="installer"
Mar 20 08:50:41.042025 master-0 kubenswrapper[27820]: I0320 08:50:41.037581 27820 state_mem.go:107] "Deleted CPUSet assignment" podUID="92600726-933f-41eb-a329-1fcc68dc95c1" containerName="installer"
Mar 20 08:50:41.042025 master-0 kubenswrapper[27820]: E0320 08:50:41.037590 27820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74bebf0b-6727-4959-8239-a9389e630524" containerName="multus-admission-controller"
Mar 20 08:50:41.042025 master-0 kubenswrapper[27820]: I0320 08:50:41.037598 27820 state_mem.go:107] "Deleted CPUSet assignment" podUID="74bebf0b-6727-4959-8239-a9389e630524" containerName="multus-admission-controller"
Mar 20 08:50:41.042025 master-0 kubenswrapper[27820]: E0320 08:50:41.037613 27820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fae0c983-2cb4-4749-97ff-a718a9fb6563" containerName="installer"
Mar 20 08:50:41.042025 master-0 kubenswrapper[27820]: I0320 08:50:41.037620 27820 state_mem.go:107] "Deleted CPUSet assignment" podUID="fae0c983-2cb4-4749-97ff-a718a9fb6563" containerName="installer"
Mar 20 08:50:41.042025 master-0 kubenswrapper[27820]: E0320 08:50:41.037633 27820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74bebf0b-6727-4959-8239-a9389e630524" containerName="kube-rbac-proxy"
Mar 20 08:50:41.042025 master-0 kubenswrapper[27820]: I0320 08:50:41.037640 27820 state_mem.go:107] "Deleted CPUSet assignment" podUID="74bebf0b-6727-4959-8239-a9389e630524" containerName="kube-rbac-proxy"
Mar 20 08:50:41.042025 master-0 kubenswrapper[27820]: E0320 08:50:41.037650 27820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cce21ae1-63de-49be-a027-084a101e650b" containerName="installer"
Mar 20 08:50:41.042025 master-0 kubenswrapper[27820]: I0320 08:50:41.037657 27820 state_mem.go:107] "Deleted CPUSet assignment" podUID="cce21ae1-63de-49be-a027-084a101e650b" containerName="installer"
Mar 20 08:50:41.042025 master-0 kubenswrapper[27820]: E0320 08:50:41.037671 27820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26923e70-56a5-4020-8b55-510879ec6fd4" containerName="installer"
Mar 20 08:50:41.042025 master-0 kubenswrapper[27820]: I0320 08:50:41.037677 27820 state_mem.go:107] "Deleted CPUSet assignment" podUID="26923e70-56a5-4020-8b55-510879ec6fd4" containerName="installer"
Mar 20 08:50:41.042025 master-0 kubenswrapper[27820]: E0320 08:50:41.037687 27820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a25b643-c08d-462f-80f4-8a4feb1e26e8" containerName="assisted-installer-controller"
Mar 20 08:50:41.042025 master-0 kubenswrapper[27820]: I0320 08:50:41.037694 27820 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a25b643-c08d-462f-80f4-8a4feb1e26e8" containerName="assisted-installer-controller"
Mar 20 08:50:41.042025 master-0 kubenswrapper[27820]: I0320 08:50:41.037798 27820 memory_manager.go:354] "RemoveStaleState removing state" podUID="26923e70-56a5-4020-8b55-510879ec6fd4" containerName="installer"
Mar 20 08:50:41.042025 master-0 kubenswrapper[27820]: I0320 08:50:41.037811 27820 memory_manager.go:354] "RemoveStaleState removing state" podUID="cce21ae1-63de-49be-a027-084a101e650b" containerName="installer"
Mar 20 08:50:41.042025 master-0 kubenswrapper[27820]: I0320 08:50:41.037820 27820 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a25b643-c08d-462f-80f4-8a4feb1e26e8" containerName="assisted-installer-controller"
Mar 20 08:50:41.042025 master-0 kubenswrapper[27820]: I0320 08:50:41.037834 27820 memory_manager.go:354] "RemoveStaleState removing state" podUID="74bebf0b-6727-4959-8239-a9389e630524" containerName="multus-admission-controller"
Mar 20 08:50:41.042025 master-0 kubenswrapper[27820]: I0320 08:50:41.037866 27820 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ea52b89-46f9-4685-aecd-162ba92baaf5" containerName="installer"
Mar 20 08:50:41.042025 master-0 kubenswrapper[27820]: I0320 08:50:41.037876 27820 memory_manager.go:354] "RemoveStaleState removing state" podUID="521086da-d513-4475-8db5-098ab9838df1" containerName="installer"
Mar 20 08:50:41.042025 master-0 kubenswrapper[27820]: I0320 08:50:41.037883 27820 memory_manager.go:354] "RemoveStaleState removing state" podUID="5cdd5ac8-4c2e-4680-b697-0e5d94136fe4" containerName="installer"
Mar 20 08:50:41.042025 master-0 kubenswrapper[27820]: I0320 08:50:41.037895 27820 memory_manager.go:354] "RemoveStaleState removing state" podUID="169353ee-c927-4483-8976-b9ca08b0a6d1" containerName="installer"
Mar 20 08:50:41.042025 master-0 kubenswrapper[27820]: I0320 08:50:41.037911 27820 memory_manager.go:354] "RemoveStaleState removing state" podUID="84b1b51a-cbfa-42de-9fb8-315e9cb76b58" containerName="installer"
Mar 20 08:50:41.042025 master-0 kubenswrapper[27820]: I0320 08:50:41.037922 27820 memory_manager.go:354] "RemoveStaleState removing state" podUID="9775cc27-53b9-4d21-a98b-84b39ada32ee" containerName="installer"
Mar 20 08:50:41.042025 master-0 kubenswrapper[27820]: I0320 08:50:41.037934 27820 memory_manager.go:354] "RemoveStaleState removing state" podUID="fae0c983-2cb4-4749-97ff-a718a9fb6563" containerName="installer"
Mar 20 08:50:41.042025 master-0 kubenswrapper[27820]: I0320 08:50:41.037946 27820 memory_manager.go:354] "RemoveStaleState removing state" podUID="c83737980b9ee109184b1d78e942cf36" containerName="kube-scheduler"
Mar 20 08:50:41.042025 master-0 kubenswrapper[27820]: I0320 08:50:41.037956 27820 memory_manager.go:354] "RemoveStaleState removing state" podUID="92600726-933f-41eb-a329-1fcc68dc95c1" containerName="installer"
Mar 20 08:50:41.042025 master-0 kubenswrapper[27820]: I0320 08:50:41.037963 27820 memory_manager.go:354] "RemoveStaleState removing state" podUID="74bebf0b-6727-4959-8239-a9389e630524" containerName="kube-rbac-proxy"
Mar 20 08:50:41.042025 master-0 kubenswrapper[27820]: I0320 08:50:41.037970 27820 memory_manager.go:354] "RemoveStaleState removing state" podUID="75cef5aa-93e6-4b8b-9ab1-06809e85883a" containerName="installer"
Mar 20 08:50:41.042025 master-0 kubenswrapper[27820]: I0320 08:50:41.037982 27820 memory_manager.go:354] "RemoveStaleState removing state" podUID="45b3c788-eb83-448a-bc60-90b8ace28382" containerName="kube-multus-additional-cni-plugins"
Mar 20 08:50:41.042025 master-0 kubenswrapper[27820]: I0320 08:50:41.038416 27820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-77688d7687-k82zc"
Mar 20 08:50:41.042025 master-0 kubenswrapper[27820]: I0320 08:50:41.039846 27820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-xzw6l"]
Mar 20 08:50:41.042025 master-0 kubenswrapper[27820]: I0320 08:50:41.040611 27820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-xzw6l"
Mar 20 08:50:41.042025 master-0 kubenswrapper[27820]: I0320 08:50:41.041495 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert"
Mar 20 08:50:41.042025 master-0 kubenswrapper[27820]: I0320 08:50:41.041698 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Mar 20 08:50:41.044364 master-0 kubenswrapper[27820]: I0320 08:50:41.042328 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-cxlgs"
Mar 20 08:50:41.044364 master-0 kubenswrapper[27820]: I0320 08:50:41.042981 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-ghpsv"
Mar 20 08:50:41.060713 master-0 kubenswrapper[27820]: I0320 08:50:41.060665 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-77688d7687-k82zc"]
Mar 20 08:50:41.098929 master-0 kubenswrapper[27820]: I0320 08:50:41.098884 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c35544f3-7959-401e-81c1-05b4f29551d7-host\") pod \"node-ca-xzw6l\" (UID: \"c35544f3-7959-401e-81c1-05b4f29551d7\") " pod="openshift-image-registry/node-ca-xzw6l"
Mar 20 08:50:41.098929 master-0 kubenswrapper[27820]: I0320 08:50:41.098939 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/124cba9b-2b7c-4e54-9061-a6949d168655-monitoring-plugin-cert\") pod \"monitoring-plugin-77688d7687-k82zc\" (UID: \"124cba9b-2b7c-4e54-9061-a6949d168655\") " pod="openshift-monitoring/monitoring-plugin-77688d7687-k82zc"
Mar 20 08:50:41.098929 master-0 kubenswrapper[27820]: I0320 08:50:41.098956 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c35544f3-7959-401e-81c1-05b4f29551d7-serviceca\") pod \"node-ca-xzw6l\" (UID: \"c35544f3-7959-401e-81c1-05b4f29551d7\") " pod="openshift-image-registry/node-ca-xzw6l"
Mar 20 08:50:41.099343 master-0 kubenswrapper[27820]: I0320 08:50:41.098994 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ww4gh\" (UniqueName: \"kubernetes.io/projected/c35544f3-7959-401e-81c1-05b4f29551d7-kube-api-access-ww4gh\") pod \"node-ca-xzw6l\" (UID: \"c35544f3-7959-401e-81c1-05b4f29551d7\") " pod="openshift-image-registry/node-ca-xzw6l"
Mar 20 08:50:41.200364 master-0 kubenswrapper[27820]: I0320 08:50:41.200313 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c35544f3-7959-401e-81c1-05b4f29551d7-host\") pod \"node-ca-xzw6l\" (UID: \"c35544f3-7959-401e-81c1-05b4f29551d7\") " pod="openshift-image-registry/node-ca-xzw6l"
Mar 20 08:50:41.200603 master-0 kubenswrapper[27820]: I0320 08:50:41.200374 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/124cba9b-2b7c-4e54-9061-a6949d168655-monitoring-plugin-cert\") pod \"monitoring-plugin-77688d7687-k82zc\" (UID: \"124cba9b-2b7c-4e54-9061-a6949d168655\") " pod="openshift-monitoring/monitoring-plugin-77688d7687-k82zc"
Mar 20 08:50:41.200603 master-0 kubenswrapper[27820]: I0320 08:50:41.200424 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c35544f3-7959-401e-81c1-05b4f29551d7-host\") pod \"node-ca-xzw6l\" (UID: \"c35544f3-7959-401e-81c1-05b4f29551d7\") " pod="openshift-image-registry/node-ca-xzw6l"
Mar 20 08:50:41.200603 master-0 kubenswrapper[27820]: I0320 08:50:41.200468 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c35544f3-7959-401e-81c1-05b4f29551d7-serviceca\") pod \"node-ca-xzw6l\" (UID: \"c35544f3-7959-401e-81c1-05b4f29551d7\") " pod="openshift-image-registry/node-ca-xzw6l"
Mar 20 08:50:41.200603 master-0 kubenswrapper[27820]: I0320 08:50:41.200512 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ww4gh\" (UniqueName: \"kubernetes.io/projected/c35544f3-7959-401e-81c1-05b4f29551d7-kube-api-access-ww4gh\") pod \"node-ca-xzw6l\" (UID: \"c35544f3-7959-401e-81c1-05b4f29551d7\") " pod="openshift-image-registry/node-ca-xzw6l"
Mar 20 08:50:41.200874 master-0 kubenswrapper[27820]: I0320 08:50:41.200845 27820 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Mar 20 08:50:41.201296 master-0 kubenswrapper[27820]: I0320 08:50:41.201157 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/c35544f3-7959-401e-81c1-05b4f29551d7-serviceca\") pod \"node-ca-xzw6l\" (UID: \"c35544f3-7959-401e-81c1-05b4f29551d7\") " pod="openshift-image-registry/node-ca-xzw6l"
Mar 20 08:50:41.204995 master-0 kubenswrapper[27820]: I0320 08:50:41.204962 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/124cba9b-2b7c-4e54-9061-a6949d168655-monitoring-plugin-cert\") pod \"monitoring-plugin-77688d7687-k82zc\" (UID: \"124cba9b-2b7c-4e54-9061-a6949d168655\") " pod="openshift-monitoring/monitoring-plugin-77688d7687-k82zc"
Mar 20 08:50:41.218583 master-0 kubenswrapper[27820]: I0320 08:50:41.218544 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ww4gh\" (UniqueName: \"kubernetes.io/projected/c35544f3-7959-401e-81c1-05b4f29551d7-kube-api-access-ww4gh\") pod \"node-ca-xzw6l\" (UID: \"c35544f3-7959-401e-81c1-05b4f29551d7\") " pod="openshift-image-registry/node-ca-xzw6l"
Mar 20 08:50:41.363198 master-0 kubenswrapper[27820]: I0320 08:50:41.363132 27820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-77688d7687-k82zc"
Mar 20 08:50:41.387052 master-0 kubenswrapper[27820]: I0320 08:50:41.387011 27820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-xzw6l"
Mar 20 08:50:41.419959 master-0 kubenswrapper[27820]: W0320 08:50:41.419907 27820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc35544f3_7959_401e_81c1_05b4f29551d7.slice/crio-78c3fff177cf6d6d6a3e75ab3ffab1af8a8c5c415d0f4f889f0c8b28ff7bf219 WatchSource:0}: Error finding container 78c3fff177cf6d6d6a3e75ab3ffab1af8a8c5c415d0f4f889f0c8b28ff7bf219: Status 404 returned error can't find the container with id 78c3fff177cf6d6d6a3e75ab3ffab1af8a8c5c415d0f4f889f0c8b28ff7bf219
Mar 20 08:50:41.431415 master-0 kubenswrapper[27820]: I0320 08:50:41.431202 27820 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 20 08:50:41.608375 master-0 kubenswrapper[27820]: I0320 08:50:41.607742 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/75cef5aa-93e6-4b8b-9ab1-06809e85883a-kube-api-access\") pod \"installer-3-retry-1-master-0\" (UID: \"75cef5aa-93e6-4b8b-9ab1-06809e85883a\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0"
Mar 20 08:50:41.608375 master-0 kubenswrapper[27820]: E0320 08:50:41.607974 27820 projected.go:288] Couldn't get configMap openshift-kube-controller-manager/kube-root-ca.crt: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered
Mar 20 08:50:41.608375 master-0 kubenswrapper[27820]: E0320 08:50:41.608011 27820 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager/installer-3-retry-1-master-0: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered
Mar 20 08:50:41.608375 master-0 kubenswrapper[27820]: E0320 08:50:41.608068 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/75cef5aa-93e6-4b8b-9ab1-06809e85883a-kube-api-access podName:75cef5aa-93e6-4b8b-9ab1-06809e85883a nodeName:}" failed. No retries permitted until 2026-03-20 08:50:57.608050787 +0000 UTC m=+67.703259931 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/75cef5aa-93e6-4b8b-9ab1-06809e85883a-kube-api-access") pod "installer-3-retry-1-master-0" (UID: "75cef5aa-93e6-4b8b-9ab1-06809e85883a") : object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered
Mar 20 08:50:41.711297 master-0 kubenswrapper[27820]: I0320 08:50:41.711145 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9775cc27-53b9-4d21-a98b-84b39ada32ee-kube-api-access\") pod \"installer-3-master-0\" (UID: \"9775cc27-53b9-4d21-a98b-84b39ada32ee\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 20 08:50:41.711493 master-0 kubenswrapper[27820]: E0320 08:50:41.711445 27820 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 20 08:50:41.711493 master-0 kubenswrapper[27820]: E0320 08:50:41.711478 27820 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 20 08:50:41.711620 master-0 kubenswrapper[27820]: E0320 08:50:41.711551 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9775cc27-53b9-4d21-a98b-84b39ada32ee-kube-api-access podName:9775cc27-53b9-4d21-a98b-84b39ada32ee nodeName:}" failed. No retries permitted until 2026-03-20 08:50:57.711528236 +0000 UTC m=+67.806737410 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/9775cc27-53b9-4d21-a98b-84b39ada32ee-kube-api-access") pod "installer-3-master-0" (UID: "9775cc27-53b9-4d21-a98b-84b39ada32ee") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 20 08:50:41.733293 master-0 kubenswrapper[27820]: I0320 08:50:41.733194 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-xzw6l" event={"ID":"c35544f3-7959-401e-81c1-05b4f29551d7","Type":"ContainerStarted","Data":"78c3fff177cf6d6d6a3e75ab3ffab1af8a8c5c415d0f4f889f0c8b28ff7bf219"}
Mar 20 08:50:41.759755 master-0 kubenswrapper[27820]: I0320 08:50:41.759675 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 20 08:50:41.824436 master-0 kubenswrapper[27820]: I0320 08:50:41.824338 27820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-6c44cff755-w2zcd"]
Mar 20 08:50:41.826033 master-0 kubenswrapper[27820]: I0320 08:50:41.825811 27820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6c44cff755-w2zcd"
Mar 20 08:50:41.834817 master-0 kubenswrapper[27820]: I0320 08:50:41.834465 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Mar 20 08:50:41.835703 master-0 kubenswrapper[27820]: I0320 08:50:41.835647 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Mar 20 08:50:41.835852 master-0 kubenswrapper[27820]: I0320 08:50:41.835825 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Mar 20 08:50:41.835921 master-0 kubenswrapper[27820]: I0320 08:50:41.835856 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-6n6jl"
Mar 20 08:50:41.836513 master-0 kubenswrapper[27820]: I0320 08:50:41.836324 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Mar 20 08:50:41.843297 master-0 kubenswrapper[27820]: I0320 08:50:41.840392 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Mar 20 08:50:41.843297 master-0 kubenswrapper[27820]: I0320 08:50:41.840430 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Mar 20 08:50:41.843297 master-0 kubenswrapper[27820]: I0320 08:50:41.840482 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Mar 20 08:50:41.843297 master-0 kubenswrapper[27820]: I0320 08:50:41.840444 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Mar 20 08:50:41.843297 master-0 kubenswrapper[27820]: I0320 08:50:41.840649 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Mar 20 08:50:41.843297 master-0 kubenswrapper[27820]: I0320 08:50:41.840720 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Mar 20 08:50:41.843297 master-0 kubenswrapper[27820]: I0320 08:50:41.841934 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Mar 20 08:50:41.853914 master-0 kubenswrapper[27820]: I0320 08:50:41.853814 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Mar 20 08:50:41.865567 master-0 kubenswrapper[27820]: I0320 08:50:41.865465 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6c44cff755-w2zcd"]
Mar 20 08:50:41.877720 master-0 kubenswrapper[27820]: I0320 08:50:41.877685 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-77688d7687-k82zc"]
Mar 20 08:50:41.880636 master-0 kubenswrapper[27820]: I0320 08:50:41.880598 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Mar 20 08:50:41.887430 master-0 kubenswrapper[27820]: W0320 08:50:41.884005 27820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod124cba9b_2b7c_4e54_9061_a6949d168655.slice/crio-562e4b87ccd48877051b1a9337c03aee738f424df1060c5547a09290f7693c9f WatchSource:0}: Error finding container 562e4b87ccd48877051b1a9337c03aee738f424df1060c5547a09290f7693c9f: Status 404 returned error can't find the container with id 562e4b87ccd48877051b1a9337c03aee738f424df1060c5547a09290f7693c9f
Mar 20 08:50:41.915181 master-0 kubenswrapper[27820]: I0320 08:50:41.915133 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/983d4dc3-94ae-4eee-b315-e4c2bff97d27-v4-0-config-system-session\") pod \"oauth-openshift-6c44cff755-w2zcd\" (UID: \"983d4dc3-94ae-4eee-b315-e4c2bff97d27\") " pod="openshift-authentication/oauth-openshift-6c44cff755-w2zcd"
Mar 20 08:50:41.915299 master-0 kubenswrapper[27820]: I0320 08:50:41.915196 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/983d4dc3-94ae-4eee-b315-e4c2bff97d27-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6c44cff755-w2zcd\" (UID: \"983d4dc3-94ae-4eee-b315-e4c2bff97d27\") " pod="openshift-authentication/oauth-openshift-6c44cff755-w2zcd"
Mar 20 08:50:41.915299 master-0 kubenswrapper[27820]: I0320 08:50:41.915230 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/983d4dc3-94ae-4eee-b315-e4c2bff97d27-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6c44cff755-w2zcd\" (UID: \"983d4dc3-94ae-4eee-b315-e4c2bff97d27\") " pod="openshift-authentication/oauth-openshift-6c44cff755-w2zcd"
Mar 20 08:50:41.915299 master-0 kubenswrapper[27820]: I0320 08:50:41.915273 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/983d4dc3-94ae-4eee-b315-e4c2bff97d27-v4-0-config-system-service-ca\") pod \"oauth-openshift-6c44cff755-w2zcd\" (UID: \"983d4dc3-94ae-4eee-b315-e4c2bff97d27\") " pod="openshift-authentication/oauth-openshift-6c44cff755-w2zcd"
Mar 20 08:50:41.915393 master-0 kubenswrapper[27820]: I0320 08:50:41.915318 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/983d4dc3-94ae-4eee-b315-e4c2bff97d27-v4-0-config-system-router-certs\") pod \"oauth-openshift-6c44cff755-w2zcd\" (UID: \"983d4dc3-94ae-4eee-b315-e4c2bff97d27\") " pod="openshift-authentication/oauth-openshift-6c44cff755-w2zcd"
Mar 20 08:50:41.915393 master-0 kubenswrapper[27820]: I0320 08:50:41.915347 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/983d4dc3-94ae-4eee-b315-e4c2bff97d27-audit-policies\") pod \"oauth-openshift-6c44cff755-w2zcd\" (UID: \"983d4dc3-94ae-4eee-b315-e4c2bff97d27\") " pod="openshift-authentication/oauth-openshift-6c44cff755-w2zcd"
Mar 20 08:50:41.915393 master-0 kubenswrapper[27820]: I0320 08:50:41.915381 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/983d4dc3-94ae-4eee-b315-e4c2bff97d27-v4-0-config-user-template-login\") pod \"oauth-openshift-6c44cff755-w2zcd\" (UID: \"983d4dc3-94ae-4eee-b315-e4c2bff97d27\") " pod="openshift-authentication/oauth-openshift-6c44cff755-w2zcd"
Mar 20 08:50:41.915481 master-0 kubenswrapper[27820]: I0320 08:50:41.915409 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/983d4dc3-94ae-4eee-b315-e4c2bff97d27-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6c44cff755-w2zcd\" (UID: \"983d4dc3-94ae-4eee-b315-e4c2bff97d27\") " pod="openshift-authentication/oauth-openshift-6c44cff755-w2zcd"
Mar 20 08:50:41.915481 master-0 kubenswrapper[27820]: I0320 08:50:41.915454 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6mw4\" (UniqueName: \"kubernetes.io/projected/983d4dc3-94ae-4eee-b315-e4c2bff97d27-kube-api-access-n6mw4\") pod \"oauth-openshift-6c44cff755-w2zcd\" (UID: \"983d4dc3-94ae-4eee-b315-e4c2bff97d27\") " pod="openshift-authentication/oauth-openshift-6c44cff755-w2zcd"
Mar 20 08:50:41.915539 master-0 kubenswrapper[27820]: I0320 08:50:41.915504 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/983d4dc3-94ae-4eee-b315-e4c2bff97d27-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6c44cff755-w2zcd\" (UID: \"983d4dc3-94ae-4eee-b315-e4c2bff97d27\") " pod="openshift-authentication/oauth-openshift-6c44cff755-w2zcd"
Mar 20 08:50:41.915605 master-0 kubenswrapper[27820]: I0320 08:50:41.915540 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/983d4dc3-94ae-4eee-b315-e4c2bff97d27-v4-0-config-user-template-error\") pod \"oauth-openshift-6c44cff755-w2zcd\" (UID: \"983d4dc3-94ae-4eee-b315-e4c2bff97d27\") " pod="openshift-authentication/oauth-openshift-6c44cff755-w2zcd"
Mar 20 08:50:41.915667 master-0 kubenswrapper[27820]: I0320 08:50:41.915635 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/983d4dc3-94ae-4eee-b315-e4c2bff97d27-audit-dir\") pod \"oauth-openshift-6c44cff755-w2zcd\" (UID: \"983d4dc3-94ae-4eee-b315-e4c2bff97d27\") " pod="openshift-authentication/oauth-openshift-6c44cff755-w2zcd"
Mar 20 08:50:41.916008 master-0 kubenswrapper[27820]: I0320 08:50:41.915983 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/983d4dc3-94ae-4eee-b315-e4c2bff97d27-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6c44cff755-w2zcd\" (UID: \"983d4dc3-94ae-4eee-b315-e4c2bff97d27\") " pod="openshift-authentication/oauth-openshift-6c44cff755-w2zcd"
Mar 20 08:50:42.018350 master-0 kubenswrapper[27820]: I0320 08:50:42.017426 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/983d4dc3-94ae-4eee-b315-e4c2bff97d27-v4-0-config-user-template-error\") pod \"oauth-openshift-6c44cff755-w2zcd\" (UID: \"983d4dc3-94ae-4eee-b315-e4c2bff97d27\") " pod="openshift-authentication/oauth-openshift-6c44cff755-w2zcd"
Mar 20 08:50:42.018350 master-0 kubenswrapper[27820]: I0320 08:50:42.017498 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/983d4dc3-94ae-4eee-b315-e4c2bff97d27-audit-dir\") pod \"oauth-openshift-6c44cff755-w2zcd\" (UID: \"983d4dc3-94ae-4eee-b315-e4c2bff97d27\") " pod="openshift-authentication/oauth-openshift-6c44cff755-w2zcd"
Mar 20 08:50:42.018350 master-0 kubenswrapper[27820]: I0320 08:50:42.017522 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/983d4dc3-94ae-4eee-b315-e4c2bff97d27-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6c44cff755-w2zcd\" (UID: \"983d4dc3-94ae-4eee-b315-e4c2bff97d27\") " pod="openshift-authentication/oauth-openshift-6c44cff755-w2zcd"
Mar 20 08:50:42.018350 master-0 kubenswrapper[27820]: I0320 08:50:42.017555 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/983d4dc3-94ae-4eee-b315-e4c2bff97d27-v4-0-config-system-session\") pod \"oauth-openshift-6c44cff755-w2zcd\" (UID: \"983d4dc3-94ae-4eee-b315-e4c2bff97d27\") " pod="openshift-authentication/oauth-openshift-6c44cff755-w2zcd"
Mar 20 08:50:42.018350 master-0 kubenswrapper[27820]: I0320 08:50:42.017575 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/983d4dc3-94ae-4eee-b315-e4c2bff97d27-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6c44cff755-w2zcd\" (UID: \"983d4dc3-94ae-4eee-b315-e4c2bff97d27\") " pod="openshift-authentication/oauth-openshift-6c44cff755-w2zcd"
Mar 20 08:50:42.018350 master-0 kubenswrapper[27820]: I0320 08:50:42.017607 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/983d4dc3-94ae-4eee-b315-e4c2bff97d27-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6c44cff755-w2zcd\" (UID: \"983d4dc3-94ae-4eee-b315-e4c2bff97d27\") " pod="openshift-authentication/oauth-openshift-6c44cff755-w2zcd"
Mar 20 08:50:42.018350 master-0 kubenswrapper[27820]: I0320 08:50:42.017628 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/983d4dc3-94ae-4eee-b315-e4c2bff97d27-v4-0-config-system-service-ca\") pod \"oauth-openshift-6c44cff755-w2zcd\" (UID: \"983d4dc3-94ae-4eee-b315-e4c2bff97d27\") " pod="openshift-authentication/oauth-openshift-6c44cff755-w2zcd"
Mar 20 08:50:42.018350 master-0 kubenswrapper[27820]: I0320 08:50:42.017659 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/983d4dc3-94ae-4eee-b315-e4c2bff97d27-v4-0-config-system-router-certs\") pod \"oauth-openshift-6c44cff755-w2zcd\" (UID: \"983d4dc3-94ae-4eee-b315-e4c2bff97d27\") " pod="openshift-authentication/oauth-openshift-6c44cff755-w2zcd"
Mar 20 08:50:42.018350 master-0 kubenswrapper[27820]: I0320 08:50:42.017679 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName:
\"kubernetes.io/configmap/983d4dc3-94ae-4eee-b315-e4c2bff97d27-audit-policies\") pod \"oauth-openshift-6c44cff755-w2zcd\" (UID: \"983d4dc3-94ae-4eee-b315-e4c2bff97d27\") " pod="openshift-authentication/oauth-openshift-6c44cff755-w2zcd" Mar 20 08:50:42.018350 master-0 kubenswrapper[27820]: I0320 08:50:42.017704 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/983d4dc3-94ae-4eee-b315-e4c2bff97d27-v4-0-config-user-template-login\") pod \"oauth-openshift-6c44cff755-w2zcd\" (UID: \"983d4dc3-94ae-4eee-b315-e4c2bff97d27\") " pod="openshift-authentication/oauth-openshift-6c44cff755-w2zcd" Mar 20 08:50:42.018350 master-0 kubenswrapper[27820]: I0320 08:50:42.017722 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/983d4dc3-94ae-4eee-b315-e4c2bff97d27-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6c44cff755-w2zcd\" (UID: \"983d4dc3-94ae-4eee-b315-e4c2bff97d27\") " pod="openshift-authentication/oauth-openshift-6c44cff755-w2zcd" Mar 20 08:50:42.018350 master-0 kubenswrapper[27820]: I0320 08:50:42.017738 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6mw4\" (UniqueName: \"kubernetes.io/projected/983d4dc3-94ae-4eee-b315-e4c2bff97d27-kube-api-access-n6mw4\") pod \"oauth-openshift-6c44cff755-w2zcd\" (UID: \"983d4dc3-94ae-4eee-b315-e4c2bff97d27\") " pod="openshift-authentication/oauth-openshift-6c44cff755-w2zcd" Mar 20 08:50:42.018350 master-0 kubenswrapper[27820]: I0320 08:50:42.017756 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/983d4dc3-94ae-4eee-b315-e4c2bff97d27-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6c44cff755-w2zcd\" (UID: 
\"983d4dc3-94ae-4eee-b315-e4c2bff97d27\") " pod="openshift-authentication/oauth-openshift-6c44cff755-w2zcd" Mar 20 08:50:42.018870 master-0 kubenswrapper[27820]: E0320 08:50:42.018387 27820 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: configmap "v4-0-config-system-cliconfig" not found Mar 20 08:50:42.018870 master-0 kubenswrapper[27820]: E0320 08:50:42.018439 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/983d4dc3-94ae-4eee-b315-e4c2bff97d27-v4-0-config-system-cliconfig podName:983d4dc3-94ae-4eee-b315-e4c2bff97d27 nodeName:}" failed. No retries permitted until 2026-03-20 08:50:42.518425128 +0000 UTC m=+52.613634272 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/983d4dc3-94ae-4eee-b315-e4c2bff97d27-v4-0-config-system-cliconfig") pod "oauth-openshift-6c44cff755-w2zcd" (UID: "983d4dc3-94ae-4eee-b315-e4c2bff97d27") : configmap "v4-0-config-system-cliconfig" not found Mar 20 08:50:42.018870 master-0 kubenswrapper[27820]: I0320 08:50:42.018818 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/983d4dc3-94ae-4eee-b315-e4c2bff97d27-audit-policies\") pod \"oauth-openshift-6c44cff755-w2zcd\" (UID: \"983d4dc3-94ae-4eee-b315-e4c2bff97d27\") " pod="openshift-authentication/oauth-openshift-6c44cff755-w2zcd" Mar 20 08:50:42.020307 master-0 kubenswrapper[27820]: I0320 08:50:42.019214 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/983d4dc3-94ae-4eee-b315-e4c2bff97d27-v4-0-config-system-service-ca\") pod \"oauth-openshift-6c44cff755-w2zcd\" (UID: \"983d4dc3-94ae-4eee-b315-e4c2bff97d27\") " pod="openshift-authentication/oauth-openshift-6c44cff755-w2zcd" Mar 20 08:50:42.030312 master-0 kubenswrapper[27820]: I0320 
08:50:42.024369 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/983d4dc3-94ae-4eee-b315-e4c2bff97d27-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6c44cff755-w2zcd\" (UID: \"983d4dc3-94ae-4eee-b315-e4c2bff97d27\") " pod="openshift-authentication/oauth-openshift-6c44cff755-w2zcd" Mar 20 08:50:42.030312 master-0 kubenswrapper[27820]: I0320 08:50:42.024449 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/983d4dc3-94ae-4eee-b315-e4c2bff97d27-audit-dir\") pod \"oauth-openshift-6c44cff755-w2zcd\" (UID: \"983d4dc3-94ae-4eee-b315-e4c2bff97d27\") " pod="openshift-authentication/oauth-openshift-6c44cff755-w2zcd" Mar 20 08:50:42.030312 master-0 kubenswrapper[27820]: I0320 08:50:42.024503 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/983d4dc3-94ae-4eee-b315-e4c2bff97d27-v4-0-config-user-template-error\") pod \"oauth-openshift-6c44cff755-w2zcd\" (UID: \"983d4dc3-94ae-4eee-b315-e4c2bff97d27\") " pod="openshift-authentication/oauth-openshift-6c44cff755-w2zcd" Mar 20 08:50:42.030312 master-0 kubenswrapper[27820]: I0320 08:50:42.028693 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/983d4dc3-94ae-4eee-b315-e4c2bff97d27-v4-0-config-user-template-login\") pod \"oauth-openshift-6c44cff755-w2zcd\" (UID: \"983d4dc3-94ae-4eee-b315-e4c2bff97d27\") " pod="openshift-authentication/oauth-openshift-6c44cff755-w2zcd" Mar 20 08:50:42.030312 master-0 kubenswrapper[27820]: I0320 08:50:42.028933 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/983d4dc3-94ae-4eee-b315-e4c2bff97d27-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6c44cff755-w2zcd\" (UID: \"983d4dc3-94ae-4eee-b315-e4c2bff97d27\") " pod="openshift-authentication/oauth-openshift-6c44cff755-w2zcd" Mar 20 08:50:42.030312 master-0 kubenswrapper[27820]: I0320 08:50:42.029070 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/983d4dc3-94ae-4eee-b315-e4c2bff97d27-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6c44cff755-w2zcd\" (UID: \"983d4dc3-94ae-4eee-b315-e4c2bff97d27\") " pod="openshift-authentication/oauth-openshift-6c44cff755-w2zcd" Mar 20 08:50:42.030312 master-0 kubenswrapper[27820]: I0320 08:50:42.029339 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/983d4dc3-94ae-4eee-b315-e4c2bff97d27-v4-0-config-system-router-certs\") pod \"oauth-openshift-6c44cff755-w2zcd\" (UID: \"983d4dc3-94ae-4eee-b315-e4c2bff97d27\") " pod="openshift-authentication/oauth-openshift-6c44cff755-w2zcd" Mar 20 08:50:42.031190 master-0 kubenswrapper[27820]: I0320 08:50:42.031161 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/983d4dc3-94ae-4eee-b315-e4c2bff97d27-v4-0-config-system-session\") pod \"oauth-openshift-6c44cff755-w2zcd\" (UID: \"983d4dc3-94ae-4eee-b315-e4c2bff97d27\") " pod="openshift-authentication/oauth-openshift-6c44cff755-w2zcd" Mar 20 08:50:42.036280 master-0 kubenswrapper[27820]: I0320 08:50:42.036214 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/983d4dc3-94ae-4eee-b315-e4c2bff97d27-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6c44cff755-w2zcd\" (UID: 
\"983d4dc3-94ae-4eee-b315-e4c2bff97d27\") " pod="openshift-authentication/oauth-openshift-6c44cff755-w2zcd" Mar 20 08:50:42.044675 master-0 kubenswrapper[27820]: I0320 08:50:42.044637 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6mw4\" (UniqueName: \"kubernetes.io/projected/983d4dc3-94ae-4eee-b315-e4c2bff97d27-kube-api-access-n6mw4\") pod \"oauth-openshift-6c44cff755-w2zcd\" (UID: \"983d4dc3-94ae-4eee-b315-e4c2bff97d27\") " pod="openshift-authentication/oauth-openshift-6c44cff755-w2zcd" Mar 20 08:50:42.526147 master-0 kubenswrapper[27820]: I0320 08:50:42.526024 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/983d4dc3-94ae-4eee-b315-e4c2bff97d27-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6c44cff755-w2zcd\" (UID: \"983d4dc3-94ae-4eee-b315-e4c2bff97d27\") " pod="openshift-authentication/oauth-openshift-6c44cff755-w2zcd" Mar 20 08:50:42.526486 master-0 kubenswrapper[27820]: E0320 08:50:42.526427 27820 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: configmap "v4-0-config-system-cliconfig" not found Mar 20 08:50:42.526576 master-0 kubenswrapper[27820]: E0320 08:50:42.526491 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/983d4dc3-94ae-4eee-b315-e4c2bff97d27-v4-0-config-system-cliconfig podName:983d4dc3-94ae-4eee-b315-e4c2bff97d27 nodeName:}" failed. No retries permitted until 2026-03-20 08:50:43.526473539 +0000 UTC m=+53.621682683 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/983d4dc3-94ae-4eee-b315-e4c2bff97d27-v4-0-config-system-cliconfig") pod "oauth-openshift-6c44cff755-w2zcd" (UID: "983d4dc3-94ae-4eee-b315-e4c2bff97d27") : configmap "v4-0-config-system-cliconfig" not found Mar 20 08:50:42.740507 master-0 kubenswrapper[27820]: I0320 08:50:42.740445 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-77688d7687-k82zc" event={"ID":"124cba9b-2b7c-4e54-9061-a6949d168655","Type":"ContainerStarted","Data":"562e4b87ccd48877051b1a9337c03aee738f424df1060c5547a09290f7693c9f"} Mar 20 08:50:43.543981 master-0 kubenswrapper[27820]: I0320 08:50:43.543849 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/983d4dc3-94ae-4eee-b315-e4c2bff97d27-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6c44cff755-w2zcd\" (UID: \"983d4dc3-94ae-4eee-b315-e4c2bff97d27\") " pod="openshift-authentication/oauth-openshift-6c44cff755-w2zcd" Mar 20 08:50:43.544649 master-0 kubenswrapper[27820]: E0320 08:50:43.544486 27820 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: configmap "v4-0-config-system-cliconfig" not found Mar 20 08:50:43.544702 master-0 kubenswrapper[27820]: E0320 08:50:43.544673 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/983d4dc3-94ae-4eee-b315-e4c2bff97d27-v4-0-config-system-cliconfig podName:983d4dc3-94ae-4eee-b315-e4c2bff97d27 nodeName:}" failed. No retries permitted until 2026-03-20 08:50:45.544645378 +0000 UTC m=+55.639854562 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/983d4dc3-94ae-4eee-b315-e4c2bff97d27-v4-0-config-system-cliconfig") pod "oauth-openshift-6c44cff755-w2zcd" (UID: "983d4dc3-94ae-4eee-b315-e4c2bff97d27") : configmap "v4-0-config-system-cliconfig" not found Mar 20 08:50:43.885038 master-0 kubenswrapper[27820]: I0320 08:50:43.884902 27820 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 20 08:50:43.885396 master-0 kubenswrapper[27820]: I0320 08:50:43.885231 27820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="8e7a82869988463543d3d8dd1f0b5fe3" containerName="startup-monitor" containerID="cri-o://fb6b347301f6182f233869606006eb4e9e5926eb68b32a1efb774d5c4156bcb0" gracePeriod=5 Mar 20 08:50:44.756677 master-0 kubenswrapper[27820]: I0320 08:50:44.756520 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-77688d7687-k82zc" event={"ID":"124cba9b-2b7c-4e54-9061-a6949d168655","Type":"ContainerStarted","Data":"766954f5bf8d2d0881554eabd4bab581358df6d82d1dfc6031a8ceb3414ce865"} Mar 20 08:50:44.757653 master-0 kubenswrapper[27820]: I0320 08:50:44.756803 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-77688d7687-k82zc" Mar 20 08:50:44.760423 master-0 kubenswrapper[27820]: I0320 08:50:44.760114 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-xzw6l" event={"ID":"c35544f3-7959-401e-81c1-05b4f29551d7","Type":"ContainerStarted","Data":"9032a7db77ba5682dbcf2b6661a8f6893fca1d2fed4fe3f1f48f09f5a337c5ec"} Mar 20 08:50:44.771938 master-0 kubenswrapper[27820]: I0320 08:50:44.771868 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-monitoring/monitoring-plugin-77688d7687-k82zc" Mar 20 08:50:44.777045 master-0 kubenswrapper[27820]: I0320 08:50:44.776974 27820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-77688d7687-k82zc" podStartSLOduration=1.473835485 podStartE2EDuration="3.776960642s" podCreationTimestamp="2026-03-20 08:50:41 +0000 UTC" firstStartedPulling="2026-03-20 08:50:41.888328653 +0000 UTC m=+51.983537817" lastFinishedPulling="2026-03-20 08:50:44.19145383 +0000 UTC m=+54.286662974" observedRunningTime="2026-03-20 08:50:44.775057619 +0000 UTC m=+54.870266783" watchObservedRunningTime="2026-03-20 08:50:44.776960642 +0000 UTC m=+54.872169776" Mar 20 08:50:44.812885 master-0 kubenswrapper[27820]: I0320 08:50:44.812792 27820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-xzw6l" podStartSLOduration=1.043278175 podStartE2EDuration="3.812770857s" podCreationTimestamp="2026-03-20 08:50:41 +0000 UTC" firstStartedPulling="2026-03-20 08:50:41.431115257 +0000 UTC m=+51.526324401" lastFinishedPulling="2026-03-20 08:50:44.200607939 +0000 UTC m=+54.295817083" observedRunningTime="2026-03-20 08:50:44.793969175 +0000 UTC m=+54.889178329" watchObservedRunningTime="2026-03-20 08:50:44.812770857 +0000 UTC m=+54.907980011" Mar 20 08:50:45.580160 master-0 kubenswrapper[27820]: I0320 08:50:45.580097 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/983d4dc3-94ae-4eee-b315-e4c2bff97d27-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6c44cff755-w2zcd\" (UID: \"983d4dc3-94ae-4eee-b315-e4c2bff97d27\") " pod="openshift-authentication/oauth-openshift-6c44cff755-w2zcd" Mar 20 08:50:45.580383 master-0 kubenswrapper[27820]: E0320 08:50:45.580329 27820 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: configmap 
"v4-0-config-system-cliconfig" not found Mar 20 08:50:45.580430 master-0 kubenswrapper[27820]: E0320 08:50:45.580387 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/983d4dc3-94ae-4eee-b315-e4c2bff97d27-v4-0-config-system-cliconfig podName:983d4dc3-94ae-4eee-b315-e4c2bff97d27 nodeName:}" failed. No retries permitted until 2026-03-20 08:50:49.580370479 +0000 UTC m=+59.675579623 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/983d4dc3-94ae-4eee-b315-e4c2bff97d27-v4-0-config-system-cliconfig") pod "oauth-openshift-6c44cff755-w2zcd" (UID: "983d4dc3-94ae-4eee-b315-e4c2bff97d27") : configmap "v4-0-config-system-cliconfig" not found Mar 20 08:50:45.960561 master-0 kubenswrapper[27820]: I0320 08:50:45.960502 27820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-55d84d7794-56n4c" Mar 20 08:50:49.508458 master-0 kubenswrapper[27820]: I0320 08:50:49.508399 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_8e7a82869988463543d3d8dd1f0b5fe3/startup-monitor/0.log" Mar 20 08:50:49.509353 master-0 kubenswrapper[27820]: I0320 08:50:49.508488 27820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 20 08:50:49.646727 master-0 kubenswrapper[27820]: I0320 08:50:49.646545 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-lock\") pod \"8e7a82869988463543d3d8dd1f0b5fe3\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " Mar 20 08:50:49.646727 master-0 kubenswrapper[27820]: I0320 08:50:49.646690 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-lock" (OuterVolumeSpecName: "var-lock") pod "8e7a82869988463543d3d8dd1f0b5fe3" (UID: "8e7a82869988463543d3d8dd1f0b5fe3"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:50:49.647115 master-0 kubenswrapper[27820]: I0320 08:50:49.646810 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-pod-resource-dir\") pod \"8e7a82869988463543d3d8dd1f0b5fe3\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " Mar 20 08:50:49.647115 master-0 kubenswrapper[27820]: I0320 08:50:49.647099 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-resource-dir\") pod \"8e7a82869988463543d3d8dd1f0b5fe3\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " Mar 20 08:50:49.647331 master-0 kubenswrapper[27820]: I0320 08:50:49.647204 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-log\") pod \"8e7a82869988463543d3d8dd1f0b5fe3\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " Mar 20 08:50:49.647468 master-0 kubenswrapper[27820]: I0320 08:50:49.647345 
27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "8e7a82869988463543d3d8dd1f0b5fe3" (UID: "8e7a82869988463543d3d8dd1f0b5fe3"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:50:49.647468 master-0 kubenswrapper[27820]: I0320 08:50:49.647372 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-manifests\") pod \"8e7a82869988463543d3d8dd1f0b5fe3\" (UID: \"8e7a82869988463543d3d8dd1f0b5fe3\") " Mar 20 08:50:49.647468 master-0 kubenswrapper[27820]: I0320 08:50:49.647445 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-log" (OuterVolumeSpecName: "var-log") pod "8e7a82869988463543d3d8dd1f0b5fe3" (UID: "8e7a82869988463543d3d8dd1f0b5fe3"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:50:49.647759 master-0 kubenswrapper[27820]: I0320 08:50:49.647596 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-manifests" (OuterVolumeSpecName: "manifests") pod "8e7a82869988463543d3d8dd1f0b5fe3" (UID: "8e7a82869988463543d3d8dd1f0b5fe3"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:50:49.649577 master-0 kubenswrapper[27820]: I0320 08:50:49.649505 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/983d4dc3-94ae-4eee-b315-e4c2bff97d27-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6c44cff755-w2zcd\" (UID: \"983d4dc3-94ae-4eee-b315-e4c2bff97d27\") " pod="openshift-authentication/oauth-openshift-6c44cff755-w2zcd" Mar 20 08:50:49.650064 master-0 kubenswrapper[27820]: I0320 08:50:49.650019 27820 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 20 08:50:49.650064 master-0 kubenswrapper[27820]: I0320 08:50:49.650054 27820 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 20 08:50:49.650064 master-0 kubenswrapper[27820]: I0320 08:50:49.650069 27820 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-var-log\") on node \"master-0\" DevicePath \"\"" Mar 20 08:50:49.650495 master-0 kubenswrapper[27820]: I0320 08:50:49.650080 27820 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-manifests\") on node \"master-0\" DevicePath \"\"" Mar 20 08:50:49.650815 master-0 kubenswrapper[27820]: I0320 08:50:49.650767 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/983d4dc3-94ae-4eee-b315-e4c2bff97d27-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6c44cff755-w2zcd\" (UID: \"983d4dc3-94ae-4eee-b315-e4c2bff97d27\") " 
pod="openshift-authentication/oauth-openshift-6c44cff755-w2zcd" Mar 20 08:50:49.654619 master-0 kubenswrapper[27820]: I0320 08:50:49.654557 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "8e7a82869988463543d3d8dd1f0b5fe3" (UID: "8e7a82869988463543d3d8dd1f0b5fe3"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:50:49.684769 master-0 kubenswrapper[27820]: I0320 08:50:49.684685 27820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6c44cff755-w2zcd" Mar 20 08:50:49.751101 master-0 kubenswrapper[27820]: I0320 08:50:49.751050 27820 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e7a82869988463543d3d8dd1f0b5fe3-pod-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 20 08:50:49.797697 master-0 kubenswrapper[27820]: I0320 08:50:49.797649 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_8e7a82869988463543d3d8dd1f0b5fe3/startup-monitor/0.log" Mar 20 08:50:49.798505 master-0 kubenswrapper[27820]: I0320 08:50:49.797701 27820 generic.go:334] "Generic (PLEG): container finished" podID="8e7a82869988463543d3d8dd1f0b5fe3" containerID="fb6b347301f6182f233869606006eb4e9e5926eb68b32a1efb774d5c4156bcb0" exitCode=137 Mar 20 08:50:49.798505 master-0 kubenswrapper[27820]: I0320 08:50:49.797749 27820 scope.go:117] "RemoveContainer" containerID="fb6b347301f6182f233869606006eb4e9e5926eb68b32a1efb774d5c4156bcb0" Mar 20 08:50:49.798505 master-0 kubenswrapper[27820]: I0320 08:50:49.797861 27820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 20 08:50:49.856554 master-0 kubenswrapper[27820]: I0320 08:50:49.854939 27820 kubelet.go:2706] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="bf28843c-fb99-47e6-9d26-ca33c9414a20" Mar 20 08:50:49.858229 master-0 kubenswrapper[27820]: I0320 08:50:49.858172 27820 scope.go:117] "RemoveContainer" containerID="fb6b347301f6182f233869606006eb4e9e5926eb68b32a1efb774d5c4156bcb0" Mar 20 08:50:49.860385 master-0 kubenswrapper[27820]: E0320 08:50:49.860052 27820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb6b347301f6182f233869606006eb4e9e5926eb68b32a1efb774d5c4156bcb0\": container with ID starting with fb6b347301f6182f233869606006eb4e9e5926eb68b32a1efb774d5c4156bcb0 not found: ID does not exist" containerID="fb6b347301f6182f233869606006eb4e9e5926eb68b32a1efb774d5c4156bcb0" Mar 20 08:50:49.860385 master-0 kubenswrapper[27820]: I0320 08:50:49.860084 27820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb6b347301f6182f233869606006eb4e9e5926eb68b32a1efb774d5c4156bcb0"} err="failed to get container status \"fb6b347301f6182f233869606006eb4e9e5926eb68b32a1efb774d5c4156bcb0\": rpc error: code = NotFound desc = could not find container \"fb6b347301f6182f233869606006eb4e9e5926eb68b32a1efb774d5c4156bcb0\": container with ID starting with fb6b347301f6182f233869606006eb4e9e5926eb68b32a1efb774d5c4156bcb0 not found: ID does not exist" Mar 20 08:50:50.048186 master-0 kubenswrapper[27820]: I0320 08:50:50.048072 27820 scope.go:117] "RemoveContainer" containerID="44e6488658001ec197750deb888ad4cc53ef741359268344dae6149df1e9b900" Mar 20 08:50:50.084691 master-0 kubenswrapper[27820]: I0320 08:50:50.084633 27820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="8e7a82869988463543d3d8dd1f0b5fe3" path="/var/lib/kubelet/pods/8e7a82869988463543d3d8dd1f0b5fe3/volumes"
Mar 20 08:50:50.085033 master-0 kubenswrapper[27820]: I0320 08:50:50.084984 27820 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID=""
Mar 20 08:50:50.107195 master-0 kubenswrapper[27820]: I0320 08:50:50.107128 27820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Mar 20 08:50:50.107380 master-0 kubenswrapper[27820]: I0320 08:50:50.107189 27820 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="bf28843c-fb99-47e6-9d26-ca33c9414a20"
Mar 20 08:50:50.122899 master-0 kubenswrapper[27820]: I0320 08:50:50.121658 27820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Mar 20 08:50:50.122899 master-0 kubenswrapper[27820]: I0320 08:50:50.121708 27820 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="bf28843c-fb99-47e6-9d26-ca33c9414a20"
Mar 20 08:50:50.198762 master-0 kubenswrapper[27820]: I0320 08:50:50.198716 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6c44cff755-w2zcd"]
Mar 20 08:50:50.207614 master-0 kubenswrapper[27820]: W0320 08:50:50.207590 27820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod983d4dc3_94ae_4eee_b315_e4c2bff97d27.slice/crio-05b5e7365d3ad2348a6afbe812370e74fd97fe6ebaae891cb864358483ddcdbb WatchSource:0}: Error finding container 05b5e7365d3ad2348a6afbe812370e74fd97fe6ebaae891cb864358483ddcdbb: Status 404 returned error can't find the container with id 05b5e7365d3ad2348a6afbe812370e74fd97fe6ebaae891cb864358483ddcdbb
Mar 20 08:50:50.811067 master-0 kubenswrapper[27820]: I0320 08:50:50.810619 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6c44cff755-w2zcd" event={"ID":"983d4dc3-94ae-4eee-b315-e4c2bff97d27","Type":"ContainerStarted","Data":"05b5e7365d3ad2348a6afbe812370e74fd97fe6ebaae891cb864358483ddcdbb"}
Mar 20 08:50:52.828572 master-0 kubenswrapper[27820]: I0320 08:50:52.828505 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6c44cff755-w2zcd" event={"ID":"983d4dc3-94ae-4eee-b315-e4c2bff97d27","Type":"ContainerStarted","Data":"1982705502ed5ec039a9ee13f432f3959626ce8ad1e58c8cb6a63599b0956369"}
Mar 20 08:50:52.829211 master-0 kubenswrapper[27820]: I0320 08:50:52.829172 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6c44cff755-w2zcd"
Mar 20 08:50:52.852501 master-0 kubenswrapper[27820]: I0320 08:50:52.852394 27820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6c44cff755-w2zcd" podStartSLOduration=9.607934689 podStartE2EDuration="11.852375617s" podCreationTimestamp="2026-03-20 08:50:41 +0000 UTC" firstStartedPulling="2026-03-20 08:50:50.210981625 +0000 UTC m=+60.306190769" lastFinishedPulling="2026-03-20 08:50:52.455422553 +0000 UTC m=+62.550631697" observedRunningTime="2026-03-20 08:50:52.851924465 +0000 UTC m=+62.947133619" watchObservedRunningTime="2026-03-20 08:50:52.852375617 +0000 UTC m=+62.947584771"
Mar 20 08:50:52.971792 master-0 kubenswrapper[27820]: I0320 08:50:52.971700 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6c44cff755-w2zcd"
Mar 20 08:50:54.635837 master-0 kubenswrapper[27820]: I0320 08:50:54.635749 27820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-6c44cff755-w2zcd"]
Mar 20 08:50:57.681562 master-0 kubenswrapper[27820]: I0320 08:50:57.681495 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/75cef5aa-93e6-4b8b-9ab1-06809e85883a-kube-api-access\") pod \"installer-3-retry-1-master-0\" (UID: \"75cef5aa-93e6-4b8b-9ab1-06809e85883a\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0"
Mar 20 08:50:57.682387 master-0 kubenswrapper[27820]: E0320 08:50:57.681807 27820 projected.go:288] Couldn't get configMap openshift-kube-controller-manager/kube-root-ca.crt: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered
Mar 20 08:50:57.682387 master-0 kubenswrapper[27820]: E0320 08:50:57.681857 27820 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager/installer-3-retry-1-master-0: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered
Mar 20 08:50:57.682387 master-0 kubenswrapper[27820]: E0320 08:50:57.681949 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/75cef5aa-93e6-4b8b-9ab1-06809e85883a-kube-api-access podName:75cef5aa-93e6-4b8b-9ab1-06809e85883a nodeName:}" failed. No retries permitted until 2026-03-20 08:51:29.681922073 +0000 UTC m=+99.777131257 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/75cef5aa-93e6-4b8b-9ab1-06809e85883a-kube-api-access") pod "installer-3-retry-1-master-0" (UID: "75cef5aa-93e6-4b8b-9ab1-06809e85883a") : object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered
Mar 20 08:50:57.783484 master-0 kubenswrapper[27820]: I0320 08:50:57.783371 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9775cc27-53b9-4d21-a98b-84b39ada32ee-kube-api-access\") pod \"installer-3-master-0\" (UID: \"9775cc27-53b9-4d21-a98b-84b39ada32ee\") " pod="openshift-kube-apiserver/installer-3-master-0"
Mar 20 08:50:57.783837 master-0 kubenswrapper[27820]: E0320 08:50:57.783768 27820 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 20 08:50:57.783837 master-0 kubenswrapper[27820]: E0320 08:50:57.783834 27820 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-3-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 20 08:50:57.784059 master-0 kubenswrapper[27820]: E0320 08:50:57.783933 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9775cc27-53b9-4d21-a98b-84b39ada32ee-kube-api-access podName:9775cc27-53b9-4d21-a98b-84b39ada32ee nodeName:}" failed. No retries permitted until 2026-03-20 08:51:29.783903392 +0000 UTC m=+99.879112576 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/9775cc27-53b9-4d21-a98b-84b39ada32ee-kube-api-access") pod "installer-3-master-0" (UID: "9775cc27-53b9-4d21-a98b-84b39ada32ee") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 20 08:51:16.149216 master-0 kubenswrapper[27820]: I0320 08:51:16.149140 27820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"]
Mar 20 08:51:16.149740 master-0 kubenswrapper[27820]: E0320 08:51:16.149498 27820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e7a82869988463543d3d8dd1f0b5fe3" containerName="startup-monitor"
Mar 20 08:51:16.149740 master-0 kubenswrapper[27820]: I0320 08:51:16.149514 27820 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e7a82869988463543d3d8dd1f0b5fe3" containerName="startup-monitor"
Mar 20 08:51:16.149740 master-0 kubenswrapper[27820]: I0320 08:51:16.149698 27820 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e7a82869988463543d3d8dd1f0b5fe3" containerName="startup-monitor"
Mar 20 08:51:16.150187 master-0 kubenswrapper[27820]: I0320 08:51:16.150151 27820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0"
Mar 20 08:51:16.152943 master-0 kubenswrapper[27820]: I0320 08:51:16.152908 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Mar 20 08:51:16.153386 master-0 kubenswrapper[27820]: I0320 08:51:16.153353 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-9xqm8"
Mar 20 08:51:16.166707 master-0 kubenswrapper[27820]: I0320 08:51:16.166639 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"]
Mar 20 08:51:16.248217 master-0 kubenswrapper[27820]: I0320 08:51:16.248163 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9e018a2b-849e-44fc-a457-169804289475-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"9e018a2b-849e-44fc-a457-169804289475\") " pod="openshift-kube-apiserver/installer-4-master-0"
Mar 20 08:51:16.248419 master-0 kubenswrapper[27820]: I0320 08:51:16.248215 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9e018a2b-849e-44fc-a457-169804289475-kube-api-access\") pod \"installer-4-master-0\" (UID: \"9e018a2b-849e-44fc-a457-169804289475\") " pod="openshift-kube-apiserver/installer-4-master-0"
Mar 20 08:51:16.248419 master-0 kubenswrapper[27820]: I0320 08:51:16.248286 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9e018a2b-849e-44fc-a457-169804289475-var-lock\") pod \"installer-4-master-0\" (UID: \"9e018a2b-849e-44fc-a457-169804289475\") " pod="openshift-kube-apiserver/installer-4-master-0"
Mar 20 08:51:16.349816 master-0 kubenswrapper[27820]: I0320 08:51:16.349732 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9e018a2b-849e-44fc-a457-169804289475-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"9e018a2b-849e-44fc-a457-169804289475\") " pod="openshift-kube-apiserver/installer-4-master-0"
Mar 20 08:51:16.350075 master-0 kubenswrapper[27820]: I0320 08:51:16.349870 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9e018a2b-849e-44fc-a457-169804289475-kube-api-access\") pod \"installer-4-master-0\" (UID: \"9e018a2b-849e-44fc-a457-169804289475\") " pod="openshift-kube-apiserver/installer-4-master-0"
Mar 20 08:51:16.350075 master-0 kubenswrapper[27820]: I0320 08:51:16.349910 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9e018a2b-849e-44fc-a457-169804289475-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"9e018a2b-849e-44fc-a457-169804289475\") " pod="openshift-kube-apiserver/installer-4-master-0"
Mar 20 08:51:16.350075 master-0 kubenswrapper[27820]: I0320 08:51:16.349983 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9e018a2b-849e-44fc-a457-169804289475-var-lock\") pod \"installer-4-master-0\" (UID: \"9e018a2b-849e-44fc-a457-169804289475\") " pod="openshift-kube-apiserver/installer-4-master-0"
Mar 20 08:51:16.350290 master-0 kubenswrapper[27820]: I0320 08:51:16.350118 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9e018a2b-849e-44fc-a457-169804289475-var-lock\") pod \"installer-4-master-0\" (UID: \"9e018a2b-849e-44fc-a457-169804289475\") " pod="openshift-kube-apiserver/installer-4-master-0"
Mar 20 08:51:16.379009 master-0 kubenswrapper[27820]: I0320 08:51:16.378945 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9e018a2b-849e-44fc-a457-169804289475-kube-api-access\") pod \"installer-4-master-0\" (UID: \"9e018a2b-849e-44fc-a457-169804289475\") " pod="openshift-kube-apiserver/installer-4-master-0"
Mar 20 08:51:16.482363 master-0 kubenswrapper[27820]: I0320 08:51:16.482224 27820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0"
Mar 20 08:51:16.811449 master-0 kubenswrapper[27820]: I0320 08:51:16.811156 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"]
Mar 20 08:51:17.021577 master-0 kubenswrapper[27820]: I0320 08:51:17.021518 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"9e018a2b-849e-44fc-a457-169804289475","Type":"ContainerStarted","Data":"c472b2ca43f1efde41d53da747fe04697ea1a8cf9cb597301f16b18bd3db8622"}
Mar 20 08:51:18.031215 master-0 kubenswrapper[27820]: I0320 08:51:18.031141 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"9e018a2b-849e-44fc-a457-169804289475","Type":"ContainerStarted","Data":"39dceb61eacdd602aa8fdff99152b3f84b543720874d7be4e51bcd0dcea55336"}
Mar 20 08:51:20.878577 master-0 kubenswrapper[27820]: I0320 08:51:20.878497 27820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-6c44cff755-w2zcd" podUID="983d4dc3-94ae-4eee-b315-e4c2bff97d27" containerName="oauth-openshift" containerID="cri-o://1982705502ed5ec039a9ee13f432f3959626ce8ad1e58c8cb6a63599b0956369" gracePeriod=15
Mar 20 08:51:21.054062 master-0 kubenswrapper[27820]: I0320 08:51:21.054000 27820 generic.go:334] "Generic (PLEG): container finished" podID="983d4dc3-94ae-4eee-b315-e4c2bff97d27" containerID="1982705502ed5ec039a9ee13f432f3959626ce8ad1e58c8cb6a63599b0956369" exitCode=0
Mar 20 08:51:21.054062 master-0 kubenswrapper[27820]: I0320 08:51:21.054047 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6c44cff755-w2zcd" event={"ID":"983d4dc3-94ae-4eee-b315-e4c2bff97d27","Type":"ContainerDied","Data":"1982705502ed5ec039a9ee13f432f3959626ce8ad1e58c8cb6a63599b0956369"}
Mar 20 08:51:21.879742 master-0 kubenswrapper[27820]: I0320 08:51:21.879686 27820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6c44cff755-w2zcd"
Mar 20 08:51:21.900136 master-0 kubenswrapper[27820]: I0320 08:51:21.899965 27820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-4-master-0" podStartSLOduration=5.899946085 podStartE2EDuration="5.899946085s" podCreationTimestamp="2026-03-20 08:51:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:51:18.070573028 +0000 UTC m=+88.165782232" watchObservedRunningTime="2026-03-20 08:51:21.899946085 +0000 UTC m=+91.995155249"
Mar 20 08:51:21.917903 master-0 kubenswrapper[27820]: I0320 08:51:21.917827 27820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-b45dccc8f-nt7jw"]
Mar 20 08:51:21.918312 master-0 kubenswrapper[27820]: E0320 08:51:21.918145 27820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="983d4dc3-94ae-4eee-b315-e4c2bff97d27" containerName="oauth-openshift"
Mar 20 08:51:21.918312 master-0 kubenswrapper[27820]: I0320 08:51:21.918164 27820 state_mem.go:107] "Deleted CPUSet assignment" podUID="983d4dc3-94ae-4eee-b315-e4c2bff97d27" containerName="oauth-openshift"
Mar 20 08:51:21.918606 master-0 kubenswrapper[27820]: I0320 08:51:21.918382 27820 memory_manager.go:354] "RemoveStaleState removing state" podUID="983d4dc3-94ae-4eee-b315-e4c2bff97d27" containerName="oauth-openshift"
Mar 20 08:51:21.918946 master-0 kubenswrapper[27820]: I0320 08:51:21.918882 27820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-b45dccc8f-nt7jw"
Mar 20 08:51:21.938955 master-0 kubenswrapper[27820]: I0320 08:51:21.930198 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/983d4dc3-94ae-4eee-b315-e4c2bff97d27-v4-0-config-system-trusted-ca-bundle\") pod \"983d4dc3-94ae-4eee-b315-e4c2bff97d27\" (UID: \"983d4dc3-94ae-4eee-b315-e4c2bff97d27\") "
Mar 20 08:51:21.938955 master-0 kubenswrapper[27820]: I0320 08:51:21.930254 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/983d4dc3-94ae-4eee-b315-e4c2bff97d27-audit-dir\") pod \"983d4dc3-94ae-4eee-b315-e4c2bff97d27\" (UID: \"983d4dc3-94ae-4eee-b315-e4c2bff97d27\") "
Mar 20 08:51:21.938955 master-0 kubenswrapper[27820]: I0320 08:51:21.930306 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/983d4dc3-94ae-4eee-b315-e4c2bff97d27-v4-0-config-system-router-certs\") pod \"983d4dc3-94ae-4eee-b315-e4c2bff97d27\" (UID: \"983d4dc3-94ae-4eee-b315-e4c2bff97d27\") "
Mar 20 08:51:21.938955 master-0 kubenswrapper[27820]: I0320 08:51:21.930339 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/983d4dc3-94ae-4eee-b315-e4c2bff97d27-v4-0-config-system-ocp-branding-template\") pod \"983d4dc3-94ae-4eee-b315-e4c2bff97d27\" (UID: \"983d4dc3-94ae-4eee-b315-e4c2bff97d27\") "
Mar 20 08:51:21.938955 master-0 kubenswrapper[27820]: I0320 08:51:21.930362 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/983d4dc3-94ae-4eee-b315-e4c2bff97d27-v4-0-config-user-template-error\") pod \"983d4dc3-94ae-4eee-b315-e4c2bff97d27\" (UID: \"983d4dc3-94ae-4eee-b315-e4c2bff97d27\") "
Mar 20 08:51:21.938955 master-0 kubenswrapper[27820]: I0320 08:51:21.930404 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/983d4dc3-94ae-4eee-b315-e4c2bff97d27-v4-0-config-user-template-provider-selection\") pod \"983d4dc3-94ae-4eee-b315-e4c2bff97d27\" (UID: \"983d4dc3-94ae-4eee-b315-e4c2bff97d27\") "
Mar 20 08:51:21.938955 master-0 kubenswrapper[27820]: I0320 08:51:21.930428 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/983d4dc3-94ae-4eee-b315-e4c2bff97d27-v4-0-config-system-serving-cert\") pod \"983d4dc3-94ae-4eee-b315-e4c2bff97d27\" (UID: \"983d4dc3-94ae-4eee-b315-e4c2bff97d27\") "
Mar 20 08:51:21.938955 master-0 kubenswrapper[27820]: I0320 08:51:21.930448 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/983d4dc3-94ae-4eee-b315-e4c2bff97d27-v4-0-config-system-service-ca\") pod \"983d4dc3-94ae-4eee-b315-e4c2bff97d27\" (UID: \"983d4dc3-94ae-4eee-b315-e4c2bff97d27\") "
Mar 20 08:51:21.938955 master-0 kubenswrapper[27820]: I0320 08:51:21.930504 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/983d4dc3-94ae-4eee-b315-e4c2bff97d27-v4-0-config-user-template-login\") pod \"983d4dc3-94ae-4eee-b315-e4c2bff97d27\" (UID: \"983d4dc3-94ae-4eee-b315-e4c2bff97d27\") "
Mar 20 08:51:21.938955 master-0 kubenswrapper[27820]: I0320 08:51:21.930542 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6mw4\" (UniqueName: \"kubernetes.io/projected/983d4dc3-94ae-4eee-b315-e4c2bff97d27-kube-api-access-n6mw4\") pod \"983d4dc3-94ae-4eee-b315-e4c2bff97d27\" (UID: \"983d4dc3-94ae-4eee-b315-e4c2bff97d27\") "
Mar 20 08:51:21.938955 master-0 kubenswrapper[27820]: I0320 08:51:21.930570 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/983d4dc3-94ae-4eee-b315-e4c2bff97d27-audit-policies\") pod \"983d4dc3-94ae-4eee-b315-e4c2bff97d27\" (UID: \"983d4dc3-94ae-4eee-b315-e4c2bff97d27\") "
Mar 20 08:51:21.938955 master-0 kubenswrapper[27820]: I0320 08:51:21.930620 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/983d4dc3-94ae-4eee-b315-e4c2bff97d27-v4-0-config-system-cliconfig\") pod \"983d4dc3-94ae-4eee-b315-e4c2bff97d27\" (UID: \"983d4dc3-94ae-4eee-b315-e4c2bff97d27\") "
Mar 20 08:51:21.938955 master-0 kubenswrapper[27820]: I0320 08:51:21.930673 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/983d4dc3-94ae-4eee-b315-e4c2bff97d27-v4-0-config-system-session\") pod \"983d4dc3-94ae-4eee-b315-e4c2bff97d27\" (UID: \"983d4dc3-94ae-4eee-b315-e4c2bff97d27\") "
Mar 20 08:51:21.938955 master-0 kubenswrapper[27820]: I0320 08:51:21.930818 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5d487313-8796-4bf7-8ac5-051f76b021e5-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-b45dccc8f-nt7jw\" (UID: \"5d487313-8796-4bf7-8ac5-051f76b021e5\") " pod="openshift-authentication/oauth-openshift-b45dccc8f-nt7jw"
Mar 20 08:51:21.938955 master-0 kubenswrapper[27820]: I0320 08:51:21.930857 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5d487313-8796-4bf7-8ac5-051f76b021e5-audit-policies\") pod \"oauth-openshift-b45dccc8f-nt7jw\" (UID: \"5d487313-8796-4bf7-8ac5-051f76b021e5\") " pod="openshift-authentication/oauth-openshift-b45dccc8f-nt7jw"
Mar 20 08:51:21.938955 master-0 kubenswrapper[27820]: I0320 08:51:21.930887 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5d487313-8796-4bf7-8ac5-051f76b021e5-v4-0-config-user-template-error\") pod \"oauth-openshift-b45dccc8f-nt7jw\" (UID: \"5d487313-8796-4bf7-8ac5-051f76b021e5\") " pod="openshift-authentication/oauth-openshift-b45dccc8f-nt7jw"
Mar 20 08:51:21.938955 master-0 kubenswrapper[27820]: I0320 08:51:21.930942 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5d487313-8796-4bf7-8ac5-051f76b021e5-v4-0-config-system-serving-cert\") pod \"oauth-openshift-b45dccc8f-nt7jw\" (UID: \"5d487313-8796-4bf7-8ac5-051f76b021e5\") " pod="openshift-authentication/oauth-openshift-b45dccc8f-nt7jw"
Mar 20 08:51:21.938955 master-0 kubenswrapper[27820]: I0320 08:51:21.930997 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5d487313-8796-4bf7-8ac5-051f76b021e5-v4-0-config-system-service-ca\") pod \"oauth-openshift-b45dccc8f-nt7jw\" (UID: \"5d487313-8796-4bf7-8ac5-051f76b021e5\") " pod="openshift-authentication/oauth-openshift-b45dccc8f-nt7jw"
Mar 20 08:51:21.938955 master-0 kubenswrapper[27820]: I0320 08:51:21.931023 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5d487313-8796-4bf7-8ac5-051f76b021e5-audit-dir\") pod \"oauth-openshift-b45dccc8f-nt7jw\" (UID: \"5d487313-8796-4bf7-8ac5-051f76b021e5\") " pod="openshift-authentication/oauth-openshift-b45dccc8f-nt7jw"
Mar 20 08:51:21.938955 master-0 kubenswrapper[27820]: I0320 08:51:21.931048 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5d487313-8796-4bf7-8ac5-051f76b021e5-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-b45dccc8f-nt7jw\" (UID: \"5d487313-8796-4bf7-8ac5-051f76b021e5\") " pod="openshift-authentication/oauth-openshift-b45dccc8f-nt7jw"
Mar 20 08:51:21.938955 master-0 kubenswrapper[27820]: I0320 08:51:21.931077 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5d487313-8796-4bf7-8ac5-051f76b021e5-v4-0-config-system-cliconfig\") pod \"oauth-openshift-b45dccc8f-nt7jw\" (UID: \"5d487313-8796-4bf7-8ac5-051f76b021e5\") " pod="openshift-authentication/oauth-openshift-b45dccc8f-nt7jw"
Mar 20 08:51:21.938955 master-0 kubenswrapper[27820]: I0320 08:51:21.931107 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqxcd\" (UniqueName: \"kubernetes.io/projected/5d487313-8796-4bf7-8ac5-051f76b021e5-kube-api-access-dqxcd\") pod \"oauth-openshift-b45dccc8f-nt7jw\" (UID: \"5d487313-8796-4bf7-8ac5-051f76b021e5\") " pod="openshift-authentication/oauth-openshift-b45dccc8f-nt7jw"
Mar 20 08:51:21.938955 master-0 kubenswrapper[27820]: I0320 08:51:21.931130 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5d487313-8796-4bf7-8ac5-051f76b021e5-v4-0-config-system-router-certs\") pod \"oauth-openshift-b45dccc8f-nt7jw\" (UID: \"5d487313-8796-4bf7-8ac5-051f76b021e5\") " pod="openshift-authentication/oauth-openshift-b45dccc8f-nt7jw"
Mar 20 08:51:21.938955 master-0 kubenswrapper[27820]: I0320 08:51:21.931156 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5d487313-8796-4bf7-8ac5-051f76b021e5-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-b45dccc8f-nt7jw\" (UID: \"5d487313-8796-4bf7-8ac5-051f76b021e5\") " pod="openshift-authentication/oauth-openshift-b45dccc8f-nt7jw"
Mar 20 08:51:21.938955 master-0 kubenswrapper[27820]: I0320 08:51:21.931180 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5d487313-8796-4bf7-8ac5-051f76b021e5-v4-0-config-system-session\") pod \"oauth-openshift-b45dccc8f-nt7jw\" (UID: \"5d487313-8796-4bf7-8ac5-051f76b021e5\") " pod="openshift-authentication/oauth-openshift-b45dccc8f-nt7jw"
Mar 20 08:51:21.938955 master-0 kubenswrapper[27820]: I0320 08:51:21.931201 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5d487313-8796-4bf7-8ac5-051f76b021e5-v4-0-config-user-template-login\") pod \"oauth-openshift-b45dccc8f-nt7jw\" (UID: \"5d487313-8796-4bf7-8ac5-051f76b021e5\") " pod="openshift-authentication/oauth-openshift-b45dccc8f-nt7jw"
Mar 20 08:51:21.938955 master-0 kubenswrapper[27820]: I0320 08:51:21.931797 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/983d4dc3-94ae-4eee-b315-e4c2bff97d27-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "983d4dc3-94ae-4eee-b315-e4c2bff97d27" (UID: "983d4dc3-94ae-4eee-b315-e4c2bff97d27"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 08:51:21.938955 master-0 kubenswrapper[27820]: I0320 08:51:21.931835 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/983d4dc3-94ae-4eee-b315-e4c2bff97d27-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "983d4dc3-94ae-4eee-b315-e4c2bff97d27" (UID: "983d4dc3-94ae-4eee-b315-e4c2bff97d27"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 20 08:51:21.938955 master-0 kubenswrapper[27820]: I0320 08:51:21.932844 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/983d4dc3-94ae-4eee-b315-e4c2bff97d27-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "983d4dc3-94ae-4eee-b315-e4c2bff97d27" (UID: "983d4dc3-94ae-4eee-b315-e4c2bff97d27"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 08:51:21.938955 master-0 kubenswrapper[27820]: I0320 08:51:21.933393 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/983d4dc3-94ae-4eee-b315-e4c2bff97d27-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "983d4dc3-94ae-4eee-b315-e4c2bff97d27" (UID: "983d4dc3-94ae-4eee-b315-e4c2bff97d27"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 08:51:21.938955 master-0 kubenswrapper[27820]: I0320 08:51:21.938365 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/983d4dc3-94ae-4eee-b315-e4c2bff97d27-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "983d4dc3-94ae-4eee-b315-e4c2bff97d27" (UID: "983d4dc3-94ae-4eee-b315-e4c2bff97d27"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 08:51:21.945002 master-0 kubenswrapper[27820]: I0320 08:51:21.944931 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/983d4dc3-94ae-4eee-b315-e4c2bff97d27-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "983d4dc3-94ae-4eee-b315-e4c2bff97d27" (UID: "983d4dc3-94ae-4eee-b315-e4c2bff97d27"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:51:21.945444 master-0 kubenswrapper[27820]: I0320 08:51:21.945391 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/983d4dc3-94ae-4eee-b315-e4c2bff97d27-kube-api-access-n6mw4" (OuterVolumeSpecName: "kube-api-access-n6mw4") pod "983d4dc3-94ae-4eee-b315-e4c2bff97d27" (UID: "983d4dc3-94ae-4eee-b315-e4c2bff97d27"). InnerVolumeSpecName "kube-api-access-n6mw4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:51:21.945557 master-0 kubenswrapper[27820]: I0320 08:51:21.945476 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/983d4dc3-94ae-4eee-b315-e4c2bff97d27-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "983d4dc3-94ae-4eee-b315-e4c2bff97d27" (UID: "983d4dc3-94ae-4eee-b315-e4c2bff97d27"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:51:21.945886 master-0 kubenswrapper[27820]: I0320 08:51:21.945846 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/983d4dc3-94ae-4eee-b315-e4c2bff97d27-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "983d4dc3-94ae-4eee-b315-e4c2bff97d27" (UID: "983d4dc3-94ae-4eee-b315-e4c2bff97d27"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:51:21.946486 master-0 kubenswrapper[27820]: I0320 08:51:21.946443 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/983d4dc3-94ae-4eee-b315-e4c2bff97d27-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "983d4dc3-94ae-4eee-b315-e4c2bff97d27" (UID: "983d4dc3-94ae-4eee-b315-e4c2bff97d27"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:51:21.946627 master-0 kubenswrapper[27820]: I0320 08:51:21.946530 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-b45dccc8f-nt7jw"]
Mar 20 08:51:21.947649 master-0 kubenswrapper[27820]: I0320 08:51:21.947598 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/983d4dc3-94ae-4eee-b315-e4c2bff97d27-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "983d4dc3-94ae-4eee-b315-e4c2bff97d27" (UID: "983d4dc3-94ae-4eee-b315-e4c2bff97d27"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:51:21.947649 master-0 kubenswrapper[27820]: I0320 08:51:21.947585 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/983d4dc3-94ae-4eee-b315-e4c2bff97d27-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "983d4dc3-94ae-4eee-b315-e4c2bff97d27" (UID: "983d4dc3-94ae-4eee-b315-e4c2bff97d27"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:51:21.947790 master-0 kubenswrapper[27820]: I0320 08:51:21.947702 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/983d4dc3-94ae-4eee-b315-e4c2bff97d27-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "983d4dc3-94ae-4eee-b315-e4c2bff97d27" (UID: "983d4dc3-94ae-4eee-b315-e4c2bff97d27"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:51:22.032361 master-0 kubenswrapper[27820]: I0320 08:51:22.032306 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5d487313-8796-4bf7-8ac5-051f76b021e5-v4-0-config-system-service-ca\") pod \"oauth-openshift-b45dccc8f-nt7jw\" (UID: \"5d487313-8796-4bf7-8ac5-051f76b021e5\") " pod="openshift-authentication/oauth-openshift-b45dccc8f-nt7jw"
Mar 20 08:51:22.032361 master-0 kubenswrapper[27820]: I0320 08:51:22.032350 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5d487313-8796-4bf7-8ac5-051f76b021e5-audit-dir\") pod \"oauth-openshift-b45dccc8f-nt7jw\" (UID: \"5d487313-8796-4bf7-8ac5-051f76b021e5\") " pod="openshift-authentication/oauth-openshift-b45dccc8f-nt7jw"
Mar 20 08:51:22.032636 master-0 kubenswrapper[27820]: I0320 08:51:22.032369 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5d487313-8796-4bf7-8ac5-051f76b021e5-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-b45dccc8f-nt7jw\" (UID: \"5d487313-8796-4bf7-8ac5-051f76b021e5\") " pod="openshift-authentication/oauth-openshift-b45dccc8f-nt7jw"
Mar 20 08:51:22.032636 master-0 kubenswrapper[27820]: I0320 08:51:22.032397 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5d487313-8796-4bf7-8ac5-051f76b021e5-v4-0-config-system-cliconfig\") pod \"oauth-openshift-b45dccc8f-nt7jw\" (UID: \"5d487313-8796-4bf7-8ac5-051f76b021e5\") " pod="openshift-authentication/oauth-openshift-b45dccc8f-nt7jw"
Mar 20 08:51:22.032636 master-0 kubenswrapper[27820]: I0320 08:51:22.032419 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqxcd\" (UniqueName: \"kubernetes.io/projected/5d487313-8796-4bf7-8ac5-051f76b021e5-kube-api-access-dqxcd\") pod \"oauth-openshift-b45dccc8f-nt7jw\" (UID: \"5d487313-8796-4bf7-8ac5-051f76b021e5\") " pod="openshift-authentication/oauth-openshift-b45dccc8f-nt7jw"
Mar 20 08:51:22.032636 master-0 kubenswrapper[27820]: I0320 08:51:22.032435 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5d487313-8796-4bf7-8ac5-051f76b021e5-v4-0-config-system-router-certs\") pod \"oauth-openshift-b45dccc8f-nt7jw\" (UID: \"5d487313-8796-4bf7-8ac5-051f76b021e5\") " pod="openshift-authentication/oauth-openshift-b45dccc8f-nt7jw"
Mar 20 08:51:22.032636 master-0 kubenswrapper[27820]: I0320 08:51:22.032453 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5d487313-8796-4bf7-8ac5-051f76b021e5-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-b45dccc8f-nt7jw\" (UID: \"5d487313-8796-4bf7-8ac5-051f76b021e5\") " pod="openshift-authentication/oauth-openshift-b45dccc8f-nt7jw"
Mar 20 08:51:22.032636 master-0 kubenswrapper[27820]: I0320 08:51:22.032467 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5d487313-8796-4bf7-8ac5-051f76b021e5-v4-0-config-system-session\") pod \"oauth-openshift-b45dccc8f-nt7jw\" (UID: \"5d487313-8796-4bf7-8ac5-051f76b021e5\") " pod="openshift-authentication/oauth-openshift-b45dccc8f-nt7jw"
Mar 20 08:51:22.032636 master-0 kubenswrapper[27820]: I0320 08:51:22.032481 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5d487313-8796-4bf7-8ac5-051f76b021e5-v4-0-config-user-template-login\") pod \"oauth-openshift-b45dccc8f-nt7jw\" (UID: \"5d487313-8796-4bf7-8ac5-051f76b021e5\") " pod="openshift-authentication/oauth-openshift-b45dccc8f-nt7jw"
Mar 20 08:51:22.032636 master-0 kubenswrapper[27820]: I0320 08:51:22.032508 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5d487313-8796-4bf7-8ac5-051f76b021e5-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-b45dccc8f-nt7jw\" (UID: \"5d487313-8796-4bf7-8ac5-051f76b021e5\") " pod="openshift-authentication/oauth-openshift-b45dccc8f-nt7jw"
Mar 20 08:51:22.032636 master-0 kubenswrapper[27820]: I0320 08:51:22.032528 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5d487313-8796-4bf7-8ac5-051f76b021e5-audit-policies\") pod \"oauth-openshift-b45dccc8f-nt7jw\" (UID: \"5d487313-8796-4bf7-8ac5-051f76b021e5\") " pod="openshift-authentication/oauth-openshift-b45dccc8f-nt7jw"
Mar 20 08:51:22.032636 master-0 kubenswrapper[27820]: I0320 08:51:22.032543 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5d487313-8796-4bf7-8ac5-051f76b021e5-v4-0-config-user-template-error\") pod \"oauth-openshift-b45dccc8f-nt7jw\" (UID: \"5d487313-8796-4bf7-8ac5-051f76b021e5\") " pod="openshift-authentication/oauth-openshift-b45dccc8f-nt7jw"
Mar 20 08:51:22.032636 master-0 kubenswrapper[27820]: I0320 08:51:22.032578 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5d487313-8796-4bf7-8ac5-051f76b021e5-v4-0-config-system-serving-cert\") pod \"oauth-openshift-b45dccc8f-nt7jw\" (UID: \"5d487313-8796-4bf7-8ac5-051f76b021e5\") " pod="openshift-authentication/oauth-openshift-b45dccc8f-nt7jw"
Mar 20 08:51:22.032636 master-0 kubenswrapper[27820]: I0320 08:51:22.032627 27820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/983d4dc3-94ae-4eee-b315-e4c2bff97d27-v4-0-config-system-session\") on node \"master-0\" DevicePath \"\""
Mar 20 08:51:22.032636 master-0 kubenswrapper[27820]: I0320 08:51:22.032638 27820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/983d4dc3-94ae-4eee-b315-e4c2bff97d27-v4-0-config-system-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 20 08:51:22.032636 master-0 kubenswrapper[27820]: I0320 08:51:22.032648 27820 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/983d4dc3-94ae-4eee-b315-e4c2bff97d27-audit-dir\") on node \"master-0\" DevicePath \"\""
Mar 20 08:51:22.033185 master-0 kubenswrapper[27820]: I0320 08:51:22.032660 27820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/983d4dc3-94ae-4eee-b315-e4c2bff97d27-v4-0-config-system-router-certs\") on node \"master-0\" DevicePath \"\""
Mar 20 08:51:22.033185 master-0 kubenswrapper[27820]: I0320 08:51:22.032670 27820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/983d4dc3-94ae-4eee-b315-e4c2bff97d27-v4-0-config-system-ocp-branding-template\") on node \"master-0\" DevicePath \"\"" Mar 20 08:51:22.033185 master-0 kubenswrapper[27820]: I0320 08:51:22.032679 27820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/983d4dc3-94ae-4eee-b315-e4c2bff97d27-v4-0-config-user-template-error\") on node \"master-0\" DevicePath \"\"" Mar 20 08:51:22.033185 master-0 kubenswrapper[27820]: I0320 08:51:22.032688 27820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/983d4dc3-94ae-4eee-b315-e4c2bff97d27-v4-0-config-user-template-provider-selection\") on node \"master-0\" DevicePath \"\"" Mar 20 08:51:22.033185 master-0 kubenswrapper[27820]: I0320 08:51:22.032697 27820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/983d4dc3-94ae-4eee-b315-e4c2bff97d27-v4-0-config-system-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 20 08:51:22.033185 master-0 kubenswrapper[27820]: I0320 08:51:22.032705 27820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/983d4dc3-94ae-4eee-b315-e4c2bff97d27-v4-0-config-system-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 20 08:51:22.033185 master-0 kubenswrapper[27820]: I0320 08:51:22.032716 27820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/983d4dc3-94ae-4eee-b315-e4c2bff97d27-v4-0-config-user-template-login\") on node \"master-0\" DevicePath \"\"" Mar 20 08:51:22.033185 master-0 kubenswrapper[27820]: I0320 08:51:22.032725 27820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6mw4\" (UniqueName: 
\"kubernetes.io/projected/983d4dc3-94ae-4eee-b315-e4c2bff97d27-kube-api-access-n6mw4\") on node \"master-0\" DevicePath \"\"" Mar 20 08:51:22.033185 master-0 kubenswrapper[27820]: I0320 08:51:22.032733 27820 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/983d4dc3-94ae-4eee-b315-e4c2bff97d27-audit-policies\") on node \"master-0\" DevicePath \"\"" Mar 20 08:51:22.033185 master-0 kubenswrapper[27820]: I0320 08:51:22.032742 27820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/983d4dc3-94ae-4eee-b315-e4c2bff97d27-v4-0-config-system-cliconfig\") on node \"master-0\" DevicePath \"\"" Mar 20 08:51:22.033185 master-0 kubenswrapper[27820]: I0320 08:51:22.033147 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5d487313-8796-4bf7-8ac5-051f76b021e5-v4-0-config-system-service-ca\") pod \"oauth-openshift-b45dccc8f-nt7jw\" (UID: \"5d487313-8796-4bf7-8ac5-051f76b021e5\") " pod="openshift-authentication/oauth-openshift-b45dccc8f-nt7jw" Mar 20 08:51:22.033649 master-0 kubenswrapper[27820]: I0320 08:51:22.033591 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5d487313-8796-4bf7-8ac5-051f76b021e5-audit-dir\") pod \"oauth-openshift-b45dccc8f-nt7jw\" (UID: \"5d487313-8796-4bf7-8ac5-051f76b021e5\") " pod="openshift-authentication/oauth-openshift-b45dccc8f-nt7jw" Mar 20 08:51:22.034572 master-0 kubenswrapper[27820]: I0320 08:51:22.034528 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5d487313-8796-4bf7-8ac5-051f76b021e5-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-b45dccc8f-nt7jw\" (UID: \"5d487313-8796-4bf7-8ac5-051f76b021e5\") " 
pod="openshift-authentication/oauth-openshift-b45dccc8f-nt7jw" Mar 20 08:51:22.034736 master-0 kubenswrapper[27820]: I0320 08:51:22.034710 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5d487313-8796-4bf7-8ac5-051f76b021e5-audit-policies\") pod \"oauth-openshift-b45dccc8f-nt7jw\" (UID: \"5d487313-8796-4bf7-8ac5-051f76b021e5\") " pod="openshift-authentication/oauth-openshift-b45dccc8f-nt7jw" Mar 20 08:51:22.035383 master-0 kubenswrapper[27820]: I0320 08:51:22.035332 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5d487313-8796-4bf7-8ac5-051f76b021e5-v4-0-config-system-cliconfig\") pod \"oauth-openshift-b45dccc8f-nt7jw\" (UID: \"5d487313-8796-4bf7-8ac5-051f76b021e5\") " pod="openshift-authentication/oauth-openshift-b45dccc8f-nt7jw" Mar 20 08:51:22.035815 master-0 kubenswrapper[27820]: I0320 08:51:22.035789 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5d487313-8796-4bf7-8ac5-051f76b021e5-v4-0-config-system-serving-cert\") pod \"oauth-openshift-b45dccc8f-nt7jw\" (UID: \"5d487313-8796-4bf7-8ac5-051f76b021e5\") " pod="openshift-authentication/oauth-openshift-b45dccc8f-nt7jw" Mar 20 08:51:22.037138 master-0 kubenswrapper[27820]: I0320 08:51:22.037093 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5d487313-8796-4bf7-8ac5-051f76b021e5-v4-0-config-user-template-login\") pod \"oauth-openshift-b45dccc8f-nt7jw\" (UID: \"5d487313-8796-4bf7-8ac5-051f76b021e5\") " pod="openshift-authentication/oauth-openshift-b45dccc8f-nt7jw" Mar 20 08:51:22.037602 master-0 kubenswrapper[27820]: I0320 08:51:22.037548 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5d487313-8796-4bf7-8ac5-051f76b021e5-v4-0-config-user-template-error\") pod \"oauth-openshift-b45dccc8f-nt7jw\" (UID: \"5d487313-8796-4bf7-8ac5-051f76b021e5\") " pod="openshift-authentication/oauth-openshift-b45dccc8f-nt7jw" Mar 20 08:51:22.038290 master-0 kubenswrapper[27820]: I0320 08:51:22.038236 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5d487313-8796-4bf7-8ac5-051f76b021e5-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-b45dccc8f-nt7jw\" (UID: \"5d487313-8796-4bf7-8ac5-051f76b021e5\") " pod="openshift-authentication/oauth-openshift-b45dccc8f-nt7jw" Mar 20 08:51:22.038470 master-0 kubenswrapper[27820]: I0320 08:51:22.038413 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5d487313-8796-4bf7-8ac5-051f76b021e5-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-b45dccc8f-nt7jw\" (UID: \"5d487313-8796-4bf7-8ac5-051f76b021e5\") " pod="openshift-authentication/oauth-openshift-b45dccc8f-nt7jw" Mar 20 08:51:22.039095 master-0 kubenswrapper[27820]: I0320 08:51:22.039032 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5d487313-8796-4bf7-8ac5-051f76b021e5-v4-0-config-system-router-certs\") pod \"oauth-openshift-b45dccc8f-nt7jw\" (UID: \"5d487313-8796-4bf7-8ac5-051f76b021e5\") " pod="openshift-authentication/oauth-openshift-b45dccc8f-nt7jw" Mar 20 08:51:22.044135 master-0 kubenswrapper[27820]: I0320 08:51:22.044067 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5d487313-8796-4bf7-8ac5-051f76b021e5-v4-0-config-system-session\") pod 
\"oauth-openshift-b45dccc8f-nt7jw\" (UID: \"5d487313-8796-4bf7-8ac5-051f76b021e5\") " pod="openshift-authentication/oauth-openshift-b45dccc8f-nt7jw" Mar 20 08:51:22.050973 master-0 kubenswrapper[27820]: I0320 08:51:22.050945 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqxcd\" (UniqueName: \"kubernetes.io/projected/5d487313-8796-4bf7-8ac5-051f76b021e5-kube-api-access-dqxcd\") pod \"oauth-openshift-b45dccc8f-nt7jw\" (UID: \"5d487313-8796-4bf7-8ac5-051f76b021e5\") " pod="openshift-authentication/oauth-openshift-b45dccc8f-nt7jw" Mar 20 08:51:22.062278 master-0 kubenswrapper[27820]: I0320 08:51:22.062217 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6c44cff755-w2zcd" event={"ID":"983d4dc3-94ae-4eee-b315-e4c2bff97d27","Type":"ContainerDied","Data":"05b5e7365d3ad2348a6afbe812370e74fd97fe6ebaae891cb864358483ddcdbb"} Mar 20 08:51:22.062357 master-0 kubenswrapper[27820]: I0320 08:51:22.062295 27820 scope.go:117] "RemoveContainer" containerID="1982705502ed5ec039a9ee13f432f3959626ce8ad1e58c8cb6a63599b0956369" Mar 20 08:51:22.062357 master-0 kubenswrapper[27820]: I0320 08:51:22.062302 27820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6c44cff755-w2zcd" Mar 20 08:51:22.127940 master-0 kubenswrapper[27820]: I0320 08:51:22.127797 27820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-6c44cff755-w2zcd"] Mar 20 08:51:22.134509 master-0 kubenswrapper[27820]: I0320 08:51:22.134430 27820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-6c44cff755-w2zcd"] Mar 20 08:51:22.305166 master-0 kubenswrapper[27820]: I0320 08:51:22.305076 27820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-b45dccc8f-nt7jw" Mar 20 08:51:22.551321 master-0 kubenswrapper[27820]: I0320 08:51:22.547909 27820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-bc85986b9-8p79x"] Mar 20 08:51:22.551321 master-0 kubenswrapper[27820]: I0320 08:51:22.548224 27820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-bc85986b9-8p79x" podUID="41ac891d-b41d-43c4-be46-35f39671477a" containerName="controller-manager" containerID="cri-o://9bbc62f41eb9cddabece6ee46b25a672dc565f68843a8cfb4ee6a9d70bc8ddf1" gracePeriod=30 Mar 20 08:51:22.658359 master-0 kubenswrapper[27820]: I0320 08:51:22.657951 27820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d86cb9b59-smbxv"] Mar 20 08:51:22.658359 master-0 kubenswrapper[27820]: I0320 08:51:22.658241 27820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7d86cb9b59-smbxv" podUID="240ba61a-e439-4f94-b9b3-7903b9b1bc05" containerName="route-controller-manager" containerID="cri-o://03e9b975d965f8ec377b68d24d75b91897161659c0e305cafe8e9368b9999d09" gracePeriod=30 Mar 20 08:51:22.797311 master-0 kubenswrapper[27820]: I0320 08:51:22.797242 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-b45dccc8f-nt7jw"] Mar 20 08:51:23.070802 master-0 kubenswrapper[27820]: I0320 08:51:23.070720 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-b45dccc8f-nt7jw" event={"ID":"5d487313-8796-4bf7-8ac5-051f76b021e5","Type":"ContainerStarted","Data":"8be1d83a5080732d228b654fd239d76346c6a52f8900e3b16fceaf8686453dbd"} Mar 20 08:51:23.071620 master-0 kubenswrapper[27820]: I0320 08:51:23.071566 27820 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-authentication/oauth-openshift-b45dccc8f-nt7jw" event={"ID":"5d487313-8796-4bf7-8ac5-051f76b021e5","Type":"ContainerStarted","Data":"2e6dda4f55c7d88a4d20a5f62a7493f8f34d0739573c38a4abbb062efb0b5502"} Mar 20 08:51:23.071786 master-0 kubenswrapper[27820]: I0320 08:51:23.071746 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-b45dccc8f-nt7jw" Mar 20 08:51:23.072672 master-0 kubenswrapper[27820]: I0320 08:51:23.072593 27820 patch_prober.go:28] interesting pod/oauth-openshift-b45dccc8f-nt7jw container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.128.0.91:6443/healthz\": dial tcp 10.128.0.91:6443: connect: connection refused" start-of-body= Mar 20 08:51:23.072777 master-0 kubenswrapper[27820]: I0320 08:51:23.072685 27820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-b45dccc8f-nt7jw" podUID="5d487313-8796-4bf7-8ac5-051f76b021e5" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.128.0.91:6443/healthz\": dial tcp 10.128.0.91:6443: connect: connection refused" Mar 20 08:51:23.073107 master-0 kubenswrapper[27820]: I0320 08:51:23.073062 27820 generic.go:334] "Generic (PLEG): container finished" podID="41ac891d-b41d-43c4-be46-35f39671477a" containerID="9bbc62f41eb9cddabece6ee46b25a672dc565f68843a8cfb4ee6a9d70bc8ddf1" exitCode=0 Mar 20 08:51:23.073442 master-0 kubenswrapper[27820]: I0320 08:51:23.073399 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bc85986b9-8p79x" event={"ID":"41ac891d-b41d-43c4-be46-35f39671477a","Type":"ContainerDied","Data":"9bbc62f41eb9cddabece6ee46b25a672dc565f68843a8cfb4ee6a9d70bc8ddf1"} Mar 20 08:51:23.077374 master-0 kubenswrapper[27820]: I0320 08:51:23.077244 27820 generic.go:334] "Generic (PLEG): container finished" 
podID="240ba61a-e439-4f94-b9b3-7903b9b1bc05" containerID="03e9b975d965f8ec377b68d24d75b91897161659c0e305cafe8e9368b9999d09" exitCode=0 Mar 20 08:51:23.077467 master-0 kubenswrapper[27820]: I0320 08:51:23.077397 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d86cb9b59-smbxv" event={"ID":"240ba61a-e439-4f94-b9b3-7903b9b1bc05","Type":"ContainerDied","Data":"03e9b975d965f8ec377b68d24d75b91897161659c0e305cafe8e9368b9999d09"} Mar 20 08:51:23.098051 master-0 kubenswrapper[27820]: I0320 08:51:23.097919 27820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-b45dccc8f-nt7jw" podStartSLOduration=29.097902113 podStartE2EDuration="29.097902113s" podCreationTimestamp="2026-03-20 08:50:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:51:23.095758394 +0000 UTC m=+93.190967558" watchObservedRunningTime="2026-03-20 08:51:23.097902113 +0000 UTC m=+93.193111257" Mar 20 08:51:23.154914 master-0 kubenswrapper[27820]: I0320 08:51:23.154878 27820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-bc85986b9-8p79x" Mar 20 08:51:23.159873 master-0 kubenswrapper[27820]: I0320 08:51:23.159839 27820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d86cb9b59-smbxv" Mar 20 08:51:23.254394 master-0 kubenswrapper[27820]: I0320 08:51:23.254191 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92pwh\" (UniqueName: \"kubernetes.io/projected/240ba61a-e439-4f94-b9b3-7903b9b1bc05-kube-api-access-92pwh\") pod \"240ba61a-e439-4f94-b9b3-7903b9b1bc05\" (UID: \"240ba61a-e439-4f94-b9b3-7903b9b1bc05\") " Mar 20 08:51:23.254394 master-0 kubenswrapper[27820]: I0320 08:51:23.254256 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/240ba61a-e439-4f94-b9b3-7903b9b1bc05-config\") pod \"240ba61a-e439-4f94-b9b3-7903b9b1bc05\" (UID: \"240ba61a-e439-4f94-b9b3-7903b9b1bc05\") " Mar 20 08:51:23.254636 master-0 kubenswrapper[27820]: I0320 08:51:23.254426 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/240ba61a-e439-4f94-b9b3-7903b9b1bc05-client-ca\") pod \"240ba61a-e439-4f94-b9b3-7903b9b1bc05\" (UID: \"240ba61a-e439-4f94-b9b3-7903b9b1bc05\") " Mar 20 08:51:23.254636 master-0 kubenswrapper[27820]: I0320 08:51:23.254481 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41ac891d-b41d-43c4-be46-35f39671477a-config\") pod \"41ac891d-b41d-43c4-be46-35f39671477a\" (UID: \"41ac891d-b41d-43c4-be46-35f39671477a\") " Mar 20 08:51:23.254636 master-0 kubenswrapper[27820]: I0320 08:51:23.254519 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41ac891d-b41d-43c4-be46-35f39671477a-serving-cert\") pod \"41ac891d-b41d-43c4-be46-35f39671477a\" (UID: \"41ac891d-b41d-43c4-be46-35f39671477a\") " Mar 20 08:51:23.254636 master-0 kubenswrapper[27820]: I0320 08:51:23.254562 27820 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpksq\" (UniqueName: \"kubernetes.io/projected/41ac891d-b41d-43c4-be46-35f39671477a-kube-api-access-zpksq\") pod \"41ac891d-b41d-43c4-be46-35f39671477a\" (UID: \"41ac891d-b41d-43c4-be46-35f39671477a\") " Mar 20 08:51:23.254636 master-0 kubenswrapper[27820]: I0320 08:51:23.254596 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/41ac891d-b41d-43c4-be46-35f39671477a-client-ca\") pod \"41ac891d-b41d-43c4-be46-35f39671477a\" (UID: \"41ac891d-b41d-43c4-be46-35f39671477a\") " Mar 20 08:51:23.254636 master-0 kubenswrapper[27820]: I0320 08:51:23.254624 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/41ac891d-b41d-43c4-be46-35f39671477a-proxy-ca-bundles\") pod \"41ac891d-b41d-43c4-be46-35f39671477a\" (UID: \"41ac891d-b41d-43c4-be46-35f39671477a\") " Mar 20 08:51:23.254878 master-0 kubenswrapper[27820]: I0320 08:51:23.254663 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/240ba61a-e439-4f94-b9b3-7903b9b1bc05-serving-cert\") pod \"240ba61a-e439-4f94-b9b3-7903b9b1bc05\" (UID: \"240ba61a-e439-4f94-b9b3-7903b9b1bc05\") " Mar 20 08:51:23.255730 master-0 kubenswrapper[27820]: I0320 08:51:23.255014 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/240ba61a-e439-4f94-b9b3-7903b9b1bc05-client-ca" (OuterVolumeSpecName: "client-ca") pod "240ba61a-e439-4f94-b9b3-7903b9b1bc05" (UID: "240ba61a-e439-4f94-b9b3-7903b9b1bc05"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:51:23.255730 master-0 kubenswrapper[27820]: I0320 08:51:23.255052 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/240ba61a-e439-4f94-b9b3-7903b9b1bc05-config" (OuterVolumeSpecName: "config") pod "240ba61a-e439-4f94-b9b3-7903b9b1bc05" (UID: "240ba61a-e439-4f94-b9b3-7903b9b1bc05"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:51:23.255730 master-0 kubenswrapper[27820]: I0320 08:51:23.255375 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41ac891d-b41d-43c4-be46-35f39671477a-client-ca" (OuterVolumeSpecName: "client-ca") pod "41ac891d-b41d-43c4-be46-35f39671477a" (UID: "41ac891d-b41d-43c4-be46-35f39671477a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:51:23.255903 master-0 kubenswrapper[27820]: I0320 08:51:23.255736 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41ac891d-b41d-43c4-be46-35f39671477a-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "41ac891d-b41d-43c4-be46-35f39671477a" (UID: "41ac891d-b41d-43c4-be46-35f39671477a"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:51:23.255903 master-0 kubenswrapper[27820]: I0320 08:51:23.255790 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41ac891d-b41d-43c4-be46-35f39671477a-config" (OuterVolumeSpecName: "config") pod "41ac891d-b41d-43c4-be46-35f39671477a" (UID: "41ac891d-b41d-43c4-be46-35f39671477a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:51:23.257494 master-0 kubenswrapper[27820]: I0320 08:51:23.257443 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/240ba61a-e439-4f94-b9b3-7903b9b1bc05-kube-api-access-92pwh" (OuterVolumeSpecName: "kube-api-access-92pwh") pod "240ba61a-e439-4f94-b9b3-7903b9b1bc05" (UID: "240ba61a-e439-4f94-b9b3-7903b9b1bc05"). InnerVolumeSpecName "kube-api-access-92pwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:51:23.261633 master-0 kubenswrapper[27820]: I0320 08:51:23.261581 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41ac891d-b41d-43c4-be46-35f39671477a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "41ac891d-b41d-43c4-be46-35f39671477a" (UID: "41ac891d-b41d-43c4-be46-35f39671477a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:51:23.262517 master-0 kubenswrapper[27820]: I0320 08:51:23.262465 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41ac891d-b41d-43c4-be46-35f39671477a-kube-api-access-zpksq" (OuterVolumeSpecName: "kube-api-access-zpksq") pod "41ac891d-b41d-43c4-be46-35f39671477a" (UID: "41ac891d-b41d-43c4-be46-35f39671477a"). InnerVolumeSpecName "kube-api-access-zpksq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:51:23.264436 master-0 kubenswrapper[27820]: I0320 08:51:23.264390 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/240ba61a-e439-4f94-b9b3-7903b9b1bc05-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "240ba61a-e439-4f94-b9b3-7903b9b1bc05" (UID: "240ba61a-e439-4f94-b9b3-7903b9b1bc05"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:51:23.355857 master-0 kubenswrapper[27820]: I0320 08:51:23.355783 27820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41ac891d-b41d-43c4-be46-35f39671477a-config\") on node \"master-0\" DevicePath \"\"" Mar 20 08:51:23.355857 master-0 kubenswrapper[27820]: I0320 08:51:23.355834 27820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/41ac891d-b41d-43c4-be46-35f39671477a-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 20 08:51:23.355857 master-0 kubenswrapper[27820]: I0320 08:51:23.355856 27820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zpksq\" (UniqueName: \"kubernetes.io/projected/41ac891d-b41d-43c4-be46-35f39671477a-kube-api-access-zpksq\") on node \"master-0\" DevicePath \"\"" Mar 20 08:51:23.356225 master-0 kubenswrapper[27820]: I0320 08:51:23.355874 27820 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/41ac891d-b41d-43c4-be46-35f39671477a-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 20 08:51:23.356225 master-0 kubenswrapper[27820]: I0320 08:51:23.355892 27820 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/41ac891d-b41d-43c4-be46-35f39671477a-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Mar 20 08:51:23.356225 master-0 kubenswrapper[27820]: I0320 08:51:23.355907 27820 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/240ba61a-e439-4f94-b9b3-7903b9b1bc05-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 20 08:51:23.356225 master-0 kubenswrapper[27820]: I0320 08:51:23.355923 27820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-92pwh\" (UniqueName: 
\"kubernetes.io/projected/240ba61a-e439-4f94-b9b3-7903b9b1bc05-kube-api-access-92pwh\") on node \"master-0\" DevicePath \"\"" Mar 20 08:51:23.356225 master-0 kubenswrapper[27820]: I0320 08:51:23.355934 27820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/240ba61a-e439-4f94-b9b3-7903b9b1bc05-config\") on node \"master-0\" DevicePath \"\"" Mar 20 08:51:23.356225 master-0 kubenswrapper[27820]: I0320 08:51:23.355947 27820 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/240ba61a-e439-4f94-b9b3-7903b9b1bc05-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 20 08:51:23.739228 master-0 kubenswrapper[27820]: I0320 08:51:23.739167 27820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"] Mar 20 08:51:23.739498 master-0 kubenswrapper[27820]: I0320 08:51:23.739422 27820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/installer-4-master-0" podUID="9e018a2b-849e-44fc-a457-169804289475" containerName="installer" containerID="cri-o://39dceb61eacdd602aa8fdff99152b3f84b543720874d7be4e51bcd0dcea55336" gracePeriod=30 Mar 20 08:51:23.886531 master-0 kubenswrapper[27820]: I0320 08:51:23.886491 27820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-78b7dc7ccd-b95js"] Mar 20 08:51:23.886971 master-0 kubenswrapper[27820]: E0320 08:51:23.886957 27820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="240ba61a-e439-4f94-b9b3-7903b9b1bc05" containerName="route-controller-manager" Mar 20 08:51:23.887037 master-0 kubenswrapper[27820]: I0320 08:51:23.887027 27820 state_mem.go:107] "Deleted CPUSet assignment" podUID="240ba61a-e439-4f94-b9b3-7903b9b1bc05" containerName="route-controller-manager" Mar 20 08:51:23.887108 master-0 kubenswrapper[27820]: E0320 08:51:23.887098 27820 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="41ac891d-b41d-43c4-be46-35f39671477a" containerName="controller-manager" Mar 20 08:51:23.887162 master-0 kubenswrapper[27820]: I0320 08:51:23.887153 27820 state_mem.go:107] "Deleted CPUSet assignment" podUID="41ac891d-b41d-43c4-be46-35f39671477a" containerName="controller-manager" Mar 20 08:51:23.887378 master-0 kubenswrapper[27820]: I0320 08:51:23.887349 27820 memory_manager.go:354] "RemoveStaleState removing state" podUID="41ac891d-b41d-43c4-be46-35f39671477a" containerName="controller-manager" Mar 20 08:51:23.887479 master-0 kubenswrapper[27820]: I0320 08:51:23.887469 27820 memory_manager.go:354] "RemoveStaleState removing state" podUID="240ba61a-e439-4f94-b9b3-7903b9b1bc05" containerName="route-controller-manager" Mar 20 08:51:23.887891 master-0 kubenswrapper[27820]: I0320 08:51:23.887877 27820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-78b7dc7ccd-b95js" Mar 20 08:51:23.893215 master-0 kubenswrapper[27820]: I0320 08:51:23.893148 27820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-fd6c5d8fc-4srzq"] Mar 20 08:51:23.894640 master-0 kubenswrapper[27820]: I0320 08:51:23.894605 27820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-fd6c5d8fc-4srzq" Mar 20 08:51:23.921956 master-0 kubenswrapper[27820]: I0320 08:51:23.921889 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-fd6c5d8fc-4srzq"] Mar 20 08:51:23.950372 master-0 kubenswrapper[27820]: I0320 08:51:23.948172 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-78b7dc7ccd-b95js"] Mar 20 08:51:23.965641 master-0 kubenswrapper[27820]: I0320 08:51:23.964550 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89mz8\" (UniqueName: \"kubernetes.io/projected/c87a00c4-7be0-4245-9092-f8ee61285960-kube-api-access-89mz8\") pod \"route-controller-manager-fd6c5d8fc-4srzq\" (UID: \"c87a00c4-7be0-4245-9092-f8ee61285960\") " pod="openshift-route-controller-manager/route-controller-manager-fd6c5d8fc-4srzq" Mar 20 08:51:23.965641 master-0 kubenswrapper[27820]: I0320 08:51:23.964658 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ac8de34-8a0a-400d-b360-198dbdff9b45-config\") pod \"controller-manager-78b7dc7ccd-b95js\" (UID: \"7ac8de34-8a0a-400d-b360-198dbdff9b45\") " pod="openshift-controller-manager/controller-manager-78b7dc7ccd-b95js" Mar 20 08:51:23.965641 master-0 kubenswrapper[27820]: I0320 08:51:23.964735 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c87a00c4-7be0-4245-9092-f8ee61285960-config\") pod \"route-controller-manager-fd6c5d8fc-4srzq\" (UID: \"c87a00c4-7be0-4245-9092-f8ee61285960\") " pod="openshift-route-controller-manager/route-controller-manager-fd6c5d8fc-4srzq" Mar 20 08:51:23.965641 master-0 kubenswrapper[27820]: I0320 08:51:23.964801 27820 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dmm2\" (UniqueName: \"kubernetes.io/projected/7ac8de34-8a0a-400d-b360-198dbdff9b45-kube-api-access-5dmm2\") pod \"controller-manager-78b7dc7ccd-b95js\" (UID: \"7ac8de34-8a0a-400d-b360-198dbdff9b45\") " pod="openshift-controller-manager/controller-manager-78b7dc7ccd-b95js" Mar 20 08:51:23.965641 master-0 kubenswrapper[27820]: I0320 08:51:23.964851 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c87a00c4-7be0-4245-9092-f8ee61285960-serving-cert\") pod \"route-controller-manager-fd6c5d8fc-4srzq\" (UID: \"c87a00c4-7be0-4245-9092-f8ee61285960\") " pod="openshift-route-controller-manager/route-controller-manager-fd6c5d8fc-4srzq" Mar 20 08:51:23.965641 master-0 kubenswrapper[27820]: I0320 08:51:23.964905 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7ac8de34-8a0a-400d-b360-198dbdff9b45-client-ca\") pod \"controller-manager-78b7dc7ccd-b95js\" (UID: \"7ac8de34-8a0a-400d-b360-198dbdff9b45\") " pod="openshift-controller-manager/controller-manager-78b7dc7ccd-b95js" Mar 20 08:51:23.965641 master-0 kubenswrapper[27820]: I0320 08:51:23.964942 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7ac8de34-8a0a-400d-b360-198dbdff9b45-serving-cert\") pod \"controller-manager-78b7dc7ccd-b95js\" (UID: \"7ac8de34-8a0a-400d-b360-198dbdff9b45\") " pod="openshift-controller-manager/controller-manager-78b7dc7ccd-b95js" Mar 20 08:51:23.965641 master-0 kubenswrapper[27820]: I0320 08:51:23.964972 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/7ac8de34-8a0a-400d-b360-198dbdff9b45-proxy-ca-bundles\") pod \"controller-manager-78b7dc7ccd-b95js\" (UID: \"7ac8de34-8a0a-400d-b360-198dbdff9b45\") " pod="openshift-controller-manager/controller-manager-78b7dc7ccd-b95js" Mar 20 08:51:23.965641 master-0 kubenswrapper[27820]: I0320 08:51:23.965014 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c87a00c4-7be0-4245-9092-f8ee61285960-client-ca\") pod \"route-controller-manager-fd6c5d8fc-4srzq\" (UID: \"c87a00c4-7be0-4245-9092-f8ee61285960\") " pod="openshift-route-controller-manager/route-controller-manager-fd6c5d8fc-4srzq" Mar 20 08:51:24.066232 master-0 kubenswrapper[27820]: I0320 08:51:24.066076 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89mz8\" (UniqueName: \"kubernetes.io/projected/c87a00c4-7be0-4245-9092-f8ee61285960-kube-api-access-89mz8\") pod \"route-controller-manager-fd6c5d8fc-4srzq\" (UID: \"c87a00c4-7be0-4245-9092-f8ee61285960\") " pod="openshift-route-controller-manager/route-controller-manager-fd6c5d8fc-4srzq" Mar 20 08:51:24.066232 master-0 kubenswrapper[27820]: I0320 08:51:24.066176 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ac8de34-8a0a-400d-b360-198dbdff9b45-config\") pod \"controller-manager-78b7dc7ccd-b95js\" (UID: \"7ac8de34-8a0a-400d-b360-198dbdff9b45\") " pod="openshift-controller-manager/controller-manager-78b7dc7ccd-b95js" Mar 20 08:51:24.066232 master-0 kubenswrapper[27820]: I0320 08:51:24.066232 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c87a00c4-7be0-4245-9092-f8ee61285960-config\") pod \"route-controller-manager-fd6c5d8fc-4srzq\" (UID: \"c87a00c4-7be0-4245-9092-f8ee61285960\") " 
pod="openshift-route-controller-manager/route-controller-manager-fd6c5d8fc-4srzq" Mar 20 08:51:24.066538 master-0 kubenswrapper[27820]: I0320 08:51:24.066318 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dmm2\" (UniqueName: \"kubernetes.io/projected/7ac8de34-8a0a-400d-b360-198dbdff9b45-kube-api-access-5dmm2\") pod \"controller-manager-78b7dc7ccd-b95js\" (UID: \"7ac8de34-8a0a-400d-b360-198dbdff9b45\") " pod="openshift-controller-manager/controller-manager-78b7dc7ccd-b95js" Mar 20 08:51:24.066538 master-0 kubenswrapper[27820]: I0320 08:51:24.066367 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c87a00c4-7be0-4245-9092-f8ee61285960-serving-cert\") pod \"route-controller-manager-fd6c5d8fc-4srzq\" (UID: \"c87a00c4-7be0-4245-9092-f8ee61285960\") " pod="openshift-route-controller-manager/route-controller-manager-fd6c5d8fc-4srzq" Mar 20 08:51:24.066538 master-0 kubenswrapper[27820]: I0320 08:51:24.066417 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7ac8de34-8a0a-400d-b360-198dbdff9b45-client-ca\") pod \"controller-manager-78b7dc7ccd-b95js\" (UID: \"7ac8de34-8a0a-400d-b360-198dbdff9b45\") " pod="openshift-controller-manager/controller-manager-78b7dc7ccd-b95js" Mar 20 08:51:24.066538 master-0 kubenswrapper[27820]: I0320 08:51:24.066452 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7ac8de34-8a0a-400d-b360-198dbdff9b45-proxy-ca-bundles\") pod \"controller-manager-78b7dc7ccd-b95js\" (UID: \"7ac8de34-8a0a-400d-b360-198dbdff9b45\") " pod="openshift-controller-manager/controller-manager-78b7dc7ccd-b95js" Mar 20 08:51:24.066538 master-0 kubenswrapper[27820]: I0320 08:51:24.066484 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7ac8de34-8a0a-400d-b360-198dbdff9b45-serving-cert\") pod \"controller-manager-78b7dc7ccd-b95js\" (UID: \"7ac8de34-8a0a-400d-b360-198dbdff9b45\") " pod="openshift-controller-manager/controller-manager-78b7dc7ccd-b95js" Mar 20 08:51:24.066538 master-0 kubenswrapper[27820]: I0320 08:51:24.066522 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c87a00c4-7be0-4245-9092-f8ee61285960-client-ca\") pod \"route-controller-manager-fd6c5d8fc-4srzq\" (UID: \"c87a00c4-7be0-4245-9092-f8ee61285960\") " pod="openshift-route-controller-manager/route-controller-manager-fd6c5d8fc-4srzq" Mar 20 08:51:24.067621 master-0 kubenswrapper[27820]: I0320 08:51:24.067586 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ac8de34-8a0a-400d-b360-198dbdff9b45-config\") pod \"controller-manager-78b7dc7ccd-b95js\" (UID: \"7ac8de34-8a0a-400d-b360-198dbdff9b45\") " pod="openshift-controller-manager/controller-manager-78b7dc7ccd-b95js" Mar 20 08:51:24.069804 master-0 kubenswrapper[27820]: I0320 08:51:24.069767 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7ac8de34-8a0a-400d-b360-198dbdff9b45-proxy-ca-bundles\") pod \"controller-manager-78b7dc7ccd-b95js\" (UID: \"7ac8de34-8a0a-400d-b360-198dbdff9b45\") " pod="openshift-controller-manager/controller-manager-78b7dc7ccd-b95js" Mar 20 08:51:24.070874 master-0 kubenswrapper[27820]: I0320 08:51:24.070629 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7ac8de34-8a0a-400d-b360-198dbdff9b45-client-ca\") pod \"controller-manager-78b7dc7ccd-b95js\" (UID: \"7ac8de34-8a0a-400d-b360-198dbdff9b45\") " pod="openshift-controller-manager/controller-manager-78b7dc7ccd-b95js" Mar 20 
08:51:24.071217 master-0 kubenswrapper[27820]: I0320 08:51:24.070939 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c87a00c4-7be0-4245-9092-f8ee61285960-client-ca\") pod \"route-controller-manager-fd6c5d8fc-4srzq\" (UID: \"c87a00c4-7be0-4245-9092-f8ee61285960\") " pod="openshift-route-controller-manager/route-controller-manager-fd6c5d8fc-4srzq" Mar 20 08:51:24.072003 master-0 kubenswrapper[27820]: I0320 08:51:24.071668 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c87a00c4-7be0-4245-9092-f8ee61285960-config\") pod \"route-controller-manager-fd6c5d8fc-4srzq\" (UID: \"c87a00c4-7be0-4245-9092-f8ee61285960\") " pod="openshift-route-controller-manager/route-controller-manager-fd6c5d8fc-4srzq" Mar 20 08:51:24.072710 master-0 kubenswrapper[27820]: I0320 08:51:24.072497 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7ac8de34-8a0a-400d-b360-198dbdff9b45-serving-cert\") pod \"controller-manager-78b7dc7ccd-b95js\" (UID: \"7ac8de34-8a0a-400d-b360-198dbdff9b45\") " pod="openshift-controller-manager/controller-manager-78b7dc7ccd-b95js" Mar 20 08:51:24.074734 master-0 kubenswrapper[27820]: I0320 08:51:24.074700 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c87a00c4-7be0-4245-9092-f8ee61285960-serving-cert\") pod \"route-controller-manager-fd6c5d8fc-4srzq\" (UID: \"c87a00c4-7be0-4245-9092-f8ee61285960\") " pod="openshift-route-controller-manager/route-controller-manager-fd6c5d8fc-4srzq" Mar 20 08:51:24.083831 master-0 kubenswrapper[27820]: I0320 08:51:24.083793 27820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="983d4dc3-94ae-4eee-b315-e4c2bff97d27" path="/var/lib/kubelet/pods/983d4dc3-94ae-4eee-b315-e4c2bff97d27/volumes" Mar 20 08:51:24.086817 
master-0 kubenswrapper[27820]: I0320 08:51:24.086795 27820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-bc85986b9-8p79x" Mar 20 08:51:24.086920 master-0 kubenswrapper[27820]: I0320 08:51:24.086836 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-bc85986b9-8p79x" event={"ID":"41ac891d-b41d-43c4-be46-35f39671477a","Type":"ContainerDied","Data":"b7b1e72d13c6e7c1a14867c5547562b82b9b40ac636f0328d795dcff8a14b2b8"} Mar 20 08:51:24.086920 master-0 kubenswrapper[27820]: I0320 08:51:24.086899 27820 scope.go:117] "RemoveContainer" containerID="9bbc62f41eb9cddabece6ee46b25a672dc565f68843a8cfb4ee6a9d70bc8ddf1" Mar 20 08:51:24.090954 master-0 kubenswrapper[27820]: I0320 08:51:24.090925 27820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d86cb9b59-smbxv" Mar 20 08:51:24.091694 master-0 kubenswrapper[27820]: I0320 08:51:24.091663 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dmm2\" (UniqueName: \"kubernetes.io/projected/7ac8de34-8a0a-400d-b360-198dbdff9b45-kube-api-access-5dmm2\") pod \"controller-manager-78b7dc7ccd-b95js\" (UID: \"7ac8de34-8a0a-400d-b360-198dbdff9b45\") " pod="openshift-controller-manager/controller-manager-78b7dc7ccd-b95js" Mar 20 08:51:24.091896 master-0 kubenswrapper[27820]: I0320 08:51:24.091858 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d86cb9b59-smbxv" event={"ID":"240ba61a-e439-4f94-b9b3-7903b9b1bc05","Type":"ContainerDied","Data":"a9a866857afbf6e04b88e6394f6ac26a86a5cc6b5f41292fe9d43cc355b22810"} Mar 20 08:51:24.094970 master-0 kubenswrapper[27820]: I0320 08:51:24.094499 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89mz8\" (UniqueName: 
\"kubernetes.io/projected/c87a00c4-7be0-4245-9092-f8ee61285960-kube-api-access-89mz8\") pod \"route-controller-manager-fd6c5d8fc-4srzq\" (UID: \"c87a00c4-7be0-4245-9092-f8ee61285960\") " pod="openshift-route-controller-manager/route-controller-manager-fd6c5d8fc-4srzq" Mar 20 08:51:24.097764 master-0 kubenswrapper[27820]: I0320 08:51:24.097666 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-b45dccc8f-nt7jw" Mar 20 08:51:24.186584 master-0 kubenswrapper[27820]: I0320 08:51:24.184445 27820 scope.go:117] "RemoveContainer" containerID="03e9b975d965f8ec377b68d24d75b91897161659c0e305cafe8e9368b9999d09" Mar 20 08:51:24.214640 master-0 kubenswrapper[27820]: I0320 08:51:24.214087 27820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-78b7dc7ccd-b95js" Mar 20 08:51:24.227056 master-0 kubenswrapper[27820]: I0320 08:51:24.226998 27820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-fd6c5d8fc-4srzq" Mar 20 08:51:24.239764 master-0 kubenswrapper[27820]: I0320 08:51:24.239696 27820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d86cb9b59-smbxv"] Mar 20 08:51:24.246152 master-0 kubenswrapper[27820]: I0320 08:51:24.246083 27820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d86cb9b59-smbxv"] Mar 20 08:51:24.261002 master-0 kubenswrapper[27820]: I0320 08:51:24.260946 27820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-bc85986b9-8p79x"] Mar 20 08:51:24.271725 master-0 kubenswrapper[27820]: I0320 08:51:24.270176 27820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-bc85986b9-8p79x"] Mar 20 08:51:24.743744 master-0 kubenswrapper[27820]: I0320 08:51:24.743595 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-78b7dc7ccd-b95js"] Mar 20 08:51:24.755249 master-0 kubenswrapper[27820]: W0320 08:51:24.755174 27820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ac8de34_8a0a_400d_b360_198dbdff9b45.slice/crio-89aaaed65936e861f98713d6b74624b958a07900f4b62ca1801e2ec99f5685ca WatchSource:0}: Error finding container 89aaaed65936e861f98713d6b74624b958a07900f4b62ca1801e2ec99f5685ca: Status 404 returned error can't find the container with id 89aaaed65936e861f98713d6b74624b958a07900f4b62ca1801e2ec99f5685ca Mar 20 08:51:24.810995 master-0 kubenswrapper[27820]: I0320 08:51:24.810921 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-fd6c5d8fc-4srzq"] Mar 20 08:51:25.097884 master-0 kubenswrapper[27820]: I0320 08:51:25.097798 27820 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-78b7dc7ccd-b95js" event={"ID":"7ac8de34-8a0a-400d-b360-198dbdff9b45","Type":"ContainerStarted","Data":"0991eca757295e1cc4af7708e86283b52aa93940b29df9757d5623468666f24b"} Mar 20 08:51:25.097884 master-0 kubenswrapper[27820]: I0320 08:51:25.097849 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-78b7dc7ccd-b95js" event={"ID":"7ac8de34-8a0a-400d-b360-198dbdff9b45","Type":"ContainerStarted","Data":"89aaaed65936e861f98713d6b74624b958a07900f4b62ca1801e2ec99f5685ca"} Mar 20 08:51:25.098689 master-0 kubenswrapper[27820]: I0320 08:51:25.098114 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-78b7dc7ccd-b95js" Mar 20 08:51:25.099704 master-0 kubenswrapper[27820]: I0320 08:51:25.099650 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-fd6c5d8fc-4srzq" event={"ID":"c87a00c4-7be0-4245-9092-f8ee61285960","Type":"ContainerStarted","Data":"18b2a877bd83476549323d01174b657bdac0c407641dcce971437d403573b3f8"} Mar 20 08:51:25.099769 master-0 kubenswrapper[27820]: I0320 08:51:25.099706 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-fd6c5d8fc-4srzq" event={"ID":"c87a00c4-7be0-4245-9092-f8ee61285960","Type":"ContainerStarted","Data":"fa0402ad3a3d870ff8e6c9127dd796ac6919b6f94db7ffcdce84e1284e65d6f6"} Mar 20 08:51:25.099996 master-0 kubenswrapper[27820]: I0320 08:51:25.099954 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-fd6c5d8fc-4srzq" Mar 20 08:51:25.101571 master-0 kubenswrapper[27820]: I0320 08:51:25.101531 27820 patch_prober.go:28] interesting pod/route-controller-manager-fd6c5d8fc-4srzq container/route-controller-manager 
namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.93:8443/healthz\": dial tcp 10.128.0.93:8443: connect: connection refused" start-of-body= Mar 20 08:51:25.101643 master-0 kubenswrapper[27820]: I0320 08:51:25.101587 27820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-fd6c5d8fc-4srzq" podUID="c87a00c4-7be0-4245-9092-f8ee61285960" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.93:8443/healthz\": dial tcp 10.128.0.93:8443: connect: connection refused" Mar 20 08:51:25.107513 master-0 kubenswrapper[27820]: I0320 08:51:25.107449 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-78b7dc7ccd-b95js" Mar 20 08:51:25.126342 master-0 kubenswrapper[27820]: I0320 08:51:25.126210 27820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-78b7dc7ccd-b95js" podStartSLOduration=3.126182401 podStartE2EDuration="3.126182401s" podCreationTimestamp="2026-03-20 08:51:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:51:25.122966943 +0000 UTC m=+95.218176107" watchObservedRunningTime="2026-03-20 08:51:25.126182401 +0000 UTC m=+95.221391555" Mar 20 08:51:25.197312 master-0 kubenswrapper[27820]: I0320 08:51:25.195709 27820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-fd6c5d8fc-4srzq" podStartSLOduration=3.195689325 podStartE2EDuration="3.195689325s" podCreationTimestamp="2026-03-20 08:51:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:51:25.193922366 +0000 UTC m=+95.289131540" 
watchObservedRunningTime="2026-03-20 08:51:25.195689325 +0000 UTC m=+95.290898469" Mar 20 08:51:26.090245 master-0 kubenswrapper[27820]: I0320 08:51:26.090174 27820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="240ba61a-e439-4f94-b9b3-7903b9b1bc05" path="/var/lib/kubelet/pods/240ba61a-e439-4f94-b9b3-7903b9b1bc05/volumes" Mar 20 08:51:26.091615 master-0 kubenswrapper[27820]: I0320 08:51:26.091575 27820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41ac891d-b41d-43c4-be46-35f39671477a" path="/var/lib/kubelet/pods/41ac891d-b41d-43c4-be46-35f39671477a/volumes" Mar 20 08:51:26.119364 master-0 kubenswrapper[27820]: I0320 08:51:26.119311 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-fd6c5d8fc-4srzq" Mar 20 08:51:26.939067 master-0 kubenswrapper[27820]: I0320 08:51:26.938980 27820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Mar 20 08:51:26.940034 master-0 kubenswrapper[27820]: I0320 08:51:26.939992 27820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0" Mar 20 08:51:26.953819 master-0 kubenswrapper[27820]: I0320 08:51:26.953764 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Mar 20 08:51:27.051118 master-0 kubenswrapper[27820]: I0320 08:51:27.051012 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/78ae02f0-5d31-4fda-a63a-534f60df5d1f-kube-api-access\") pod \"installer-5-master-0\" (UID: \"78ae02f0-5d31-4fda-a63a-534f60df5d1f\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 20 08:51:27.051338 master-0 kubenswrapper[27820]: I0320 08:51:27.051132 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/78ae02f0-5d31-4fda-a63a-534f60df5d1f-var-lock\") pod \"installer-5-master-0\" (UID: \"78ae02f0-5d31-4fda-a63a-534f60df5d1f\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 20 08:51:27.051473 master-0 kubenswrapper[27820]: I0320 08:51:27.051410 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/78ae02f0-5d31-4fda-a63a-534f60df5d1f-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"78ae02f0-5d31-4fda-a63a-534f60df5d1f\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 20 08:51:27.152909 master-0 kubenswrapper[27820]: I0320 08:51:27.152818 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/78ae02f0-5d31-4fda-a63a-534f60df5d1f-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"78ae02f0-5d31-4fda-a63a-534f60df5d1f\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 20 08:51:27.153407 master-0 kubenswrapper[27820]: I0320 08:51:27.153039 27820 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/78ae02f0-5d31-4fda-a63a-534f60df5d1f-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"78ae02f0-5d31-4fda-a63a-534f60df5d1f\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 20 08:51:27.153407 master-0 kubenswrapper[27820]: I0320 08:51:27.153146 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/78ae02f0-5d31-4fda-a63a-534f60df5d1f-kube-api-access\") pod \"installer-5-master-0\" (UID: \"78ae02f0-5d31-4fda-a63a-534f60df5d1f\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 20 08:51:27.153407 master-0 kubenswrapper[27820]: I0320 08:51:27.153239 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/78ae02f0-5d31-4fda-a63a-534f60df5d1f-var-lock\") pod \"installer-5-master-0\" (UID: \"78ae02f0-5d31-4fda-a63a-534f60df5d1f\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 20 08:51:27.153507 master-0 kubenswrapper[27820]: I0320 08:51:27.153422 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/78ae02f0-5d31-4fda-a63a-534f60df5d1f-var-lock\") pod \"installer-5-master-0\" (UID: \"78ae02f0-5d31-4fda-a63a-534f60df5d1f\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 20 08:51:27.174887 master-0 kubenswrapper[27820]: I0320 08:51:27.174815 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/78ae02f0-5d31-4fda-a63a-534f60df5d1f-kube-api-access\") pod \"installer-5-master-0\" (UID: \"78ae02f0-5d31-4fda-a63a-534f60df5d1f\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 20 08:51:27.266414 master-0 kubenswrapper[27820]: I0320 08:51:27.266148 27820 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0" Mar 20 08:51:27.738771 master-0 kubenswrapper[27820]: I0320 08:51:27.738689 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Mar 20 08:51:27.749234 master-0 kubenswrapper[27820]: W0320 08:51:27.749151 27820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod78ae02f0_5d31_4fda_a63a_534f60df5d1f.slice/crio-4a237f516af96060c766d77c95a19c46f9ec0e542ddf09d3d4a7b89e62d71fa6 WatchSource:0}: Error finding container 4a237f516af96060c766d77c95a19c46f9ec0e542ddf09d3d4a7b89e62d71fa6: Status 404 returned error can't find the container with id 4a237f516af96060c766d77c95a19c46f9ec0e542ddf09d3d4a7b89e62d71fa6 Mar 20 08:51:28.126133 master-0 kubenswrapper[27820]: I0320 08:51:28.126080 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"78ae02f0-5d31-4fda-a63a-534f60df5d1f","Type":"ContainerStarted","Data":"870faed07a22d4978e0ed50111cb06dde9ad2c0dca51e752f685fc10dd88324d"} Mar 20 08:51:28.126356 master-0 kubenswrapper[27820]: I0320 08:51:28.126142 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"78ae02f0-5d31-4fda-a63a-534f60df5d1f","Type":"ContainerStarted","Data":"4a237f516af96060c766d77c95a19c46f9ec0e542ddf09d3d4a7b89e62d71fa6"} Mar 20 08:51:28.150748 master-0 kubenswrapper[27820]: I0320 08:51:28.150664 27820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-5-master-0" podStartSLOduration=2.1506446 podStartE2EDuration="2.1506446s" podCreationTimestamp="2026-03-20 08:51:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:51:28.150494045 +0000 UTC m=+98.245703199" watchObservedRunningTime="2026-03-20 
08:51:28.1506446 +0000 UTC m=+98.245853744" Mar 20 08:51:29.694115 master-0 kubenswrapper[27820]: I0320 08:51:29.694029 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/75cef5aa-93e6-4b8b-9ab1-06809e85883a-kube-api-access\") pod \"installer-3-retry-1-master-0\" (UID: \"75cef5aa-93e6-4b8b-9ab1-06809e85883a\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Mar 20 08:51:29.695344 master-0 kubenswrapper[27820]: E0320 08:51:29.694206 27820 projected.go:288] Couldn't get configMap openshift-kube-controller-manager/kube-root-ca.crt: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Mar 20 08:51:29.695344 master-0 kubenswrapper[27820]: E0320 08:51:29.694250 27820 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager/installer-3-retry-1-master-0: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Mar 20 08:51:29.695344 master-0 kubenswrapper[27820]: E0320 08:51:29.694334 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/75cef5aa-93e6-4b8b-9ab1-06809e85883a-kube-api-access podName:75cef5aa-93e6-4b8b-9ab1-06809e85883a nodeName:}" failed. No retries permitted until 2026-03-20 08:52:33.694316885 +0000 UTC m=+163.789526029 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/75cef5aa-93e6-4b8b-9ab1-06809e85883a-kube-api-access") pod "installer-3-retry-1-master-0" (UID: "75cef5aa-93e6-4b8b-9ab1-06809e85883a") : object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Mar 20 08:51:29.795507 master-0 kubenswrapper[27820]: I0320 08:51:29.795329 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9775cc27-53b9-4d21-a98b-84b39ada32ee-kube-api-access\") pod \"installer-3-master-0\" (UID: \"9775cc27-53b9-4d21-a98b-84b39ada32ee\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 20 08:51:29.802672 master-0 kubenswrapper[27820]: I0320 08:51:29.802301 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9775cc27-53b9-4d21-a98b-84b39ada32ee-kube-api-access\") pod \"installer-3-master-0\" (UID: \"9775cc27-53b9-4d21-a98b-84b39ada32ee\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 20 08:51:29.896822 master-0 kubenswrapper[27820]: I0320 08:51:29.896715 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9775cc27-53b9-4d21-a98b-84b39ada32ee-kube-api-access\") pod \"9775cc27-53b9-4d21-a98b-84b39ada32ee\" (UID: \"9775cc27-53b9-4d21-a98b-84b39ada32ee\") " Mar 20 08:51:29.900507 master-0 kubenswrapper[27820]: I0320 08:51:29.900419 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9775cc27-53b9-4d21-a98b-84b39ada32ee-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "9775cc27-53b9-4d21-a98b-84b39ada32ee" (UID: "9775cc27-53b9-4d21-a98b-84b39ada32ee"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:51:29.997911 master-0 kubenswrapper[27820]: I0320 08:51:29.997729 27820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9775cc27-53b9-4d21-a98b-84b39ada32ee-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 20 08:51:48.305637 master-0 kubenswrapper[27820]: I0320 08:51:48.305571 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-4-master-0_9e018a2b-849e-44fc-a457-169804289475/installer/0.log"
Mar 20 08:51:48.306575 master-0 kubenswrapper[27820]: I0320 08:51:48.305646 27820 generic.go:334] "Generic (PLEG): container finished" podID="9e018a2b-849e-44fc-a457-169804289475" containerID="39dceb61eacdd602aa8fdff99152b3f84b543720874d7be4e51bcd0dcea55336" exitCode=1
Mar 20 08:51:48.306575 master-0 kubenswrapper[27820]: I0320 08:51:48.305691 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"9e018a2b-849e-44fc-a457-169804289475","Type":"ContainerDied","Data":"39dceb61eacdd602aa8fdff99152b3f84b543720874d7be4e51bcd0dcea55336"}
Mar 20 08:51:48.713634 master-0 kubenswrapper[27820]: I0320 08:51:48.713543 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-4-master-0_9e018a2b-849e-44fc-a457-169804289475/installer/0.log"
Mar 20 08:51:48.713634 master-0 kubenswrapper[27820]: I0320 08:51:48.713622 27820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0"
Mar 20 08:51:48.714555 master-0 kubenswrapper[27820]: I0320 08:51:48.714514 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9e018a2b-849e-44fc-a457-169804289475-var-lock\") pod \"9e018a2b-849e-44fc-a457-169804289475\" (UID: \"9e018a2b-849e-44fc-a457-169804289475\") "
Mar 20 08:51:48.714708 master-0 kubenswrapper[27820]: I0320 08:51:48.714572 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9e018a2b-849e-44fc-a457-169804289475-kubelet-dir\") pod \"9e018a2b-849e-44fc-a457-169804289475\" (UID: \"9e018a2b-849e-44fc-a457-169804289475\") "
Mar 20 08:51:48.714708 master-0 kubenswrapper[27820]: I0320 08:51:48.714592 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e018a2b-849e-44fc-a457-169804289475-var-lock" (OuterVolumeSpecName: "var-lock") pod "9e018a2b-849e-44fc-a457-169804289475" (UID: "9e018a2b-849e-44fc-a457-169804289475"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 20 08:51:48.714708 master-0 kubenswrapper[27820]: I0320 08:51:48.714618 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9e018a2b-849e-44fc-a457-169804289475-kube-api-access\") pod \"9e018a2b-849e-44fc-a457-169804289475\" (UID: \"9e018a2b-849e-44fc-a457-169804289475\") "
Mar 20 08:51:48.714708 master-0 kubenswrapper[27820]: I0320 08:51:48.714689 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e018a2b-849e-44fc-a457-169804289475-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "9e018a2b-849e-44fc-a457-169804289475" (UID: "9e018a2b-849e-44fc-a457-169804289475"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 20 08:51:48.715001 master-0 kubenswrapper[27820]: I0320 08:51:48.714875 27820 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9e018a2b-849e-44fc-a457-169804289475-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 20 08:51:48.715001 master-0 kubenswrapper[27820]: I0320 08:51:48.714890 27820 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9e018a2b-849e-44fc-a457-169804289475-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 20 08:51:48.719765 master-0 kubenswrapper[27820]: I0320 08:51:48.719696 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e018a2b-849e-44fc-a457-169804289475-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "9e018a2b-849e-44fc-a457-169804289475" (UID: "9e018a2b-849e-44fc-a457-169804289475"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:51:48.815758 master-0 kubenswrapper[27820]: I0320 08:51:48.815685 27820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9e018a2b-849e-44fc-a457-169804289475-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 20 08:51:49.315364 master-0 kubenswrapper[27820]: I0320 08:51:49.315292 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-4-master-0_9e018a2b-849e-44fc-a457-169804289475/installer/0.log"
Mar 20 08:51:49.316036 master-0 kubenswrapper[27820]: I0320 08:51:49.315371 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"9e018a2b-849e-44fc-a457-169804289475","Type":"ContainerDied","Data":"c472b2ca43f1efde41d53da747fe04697ea1a8cf9cb597301f16b18bd3db8622"}
Mar 20 08:51:49.316036 master-0 kubenswrapper[27820]: I0320 08:51:49.315428 27820 scope.go:117] "RemoveContainer" containerID="39dceb61eacdd602aa8fdff99152b3f84b543720874d7be4e51bcd0dcea55336"
Mar 20 08:51:49.316036 master-0 kubenswrapper[27820]: I0320 08:51:49.315460 27820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0"
Mar 20 08:51:49.356640 master-0 kubenswrapper[27820]: I0320 08:51:49.356533 27820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"]
Mar 20 08:51:49.372486 master-0 kubenswrapper[27820]: I0320 08:51:49.372422 27820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"]
Mar 20 08:51:50.090500 master-0 kubenswrapper[27820]: I0320 08:51:50.090403 27820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e018a2b-849e-44fc-a457-169804289475" path="/var/lib/kubelet/pods/9e018a2b-849e-44fc-a457-169804289475/volumes"
Mar 20 08:52:26.030370 master-0 kubenswrapper[27820]: I0320 08:52:26.030309 27820 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Mar 20 08:52:26.030992 master-0 kubenswrapper[27820]: E0320 08:52:26.030602 27820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e018a2b-849e-44fc-a457-169804289475" containerName="installer"
Mar 20 08:52:26.030992 master-0 kubenswrapper[27820]: I0320 08:52:26.030616 27820 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e018a2b-849e-44fc-a457-169804289475" containerName="installer"
Mar 20 08:52:26.030992 master-0 kubenswrapper[27820]: I0320 08:52:26.030737 27820 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e018a2b-849e-44fc-a457-169804289475" containerName="installer"
Mar 20 08:52:26.031192 master-0 kubenswrapper[27820]: I0320 08:52:26.031165 27820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 20 08:52:26.035850 master-0 kubenswrapper[27820]: I0320 08:52:26.035787 27820 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Mar 20 08:52:26.035927 master-0 kubenswrapper[27820]: I0320 08:52:26.035897 27820 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Mar 20 08:52:26.036299 master-0 kubenswrapper[27820]: I0320 08:52:26.036228 27820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver" containerID="cri-o://9dadbe10b12b1a20a0d3ab6712abcd8f210f2db7035cefe893d40772ad30265f" gracePeriod=15
Mar 20 08:52:26.036351 master-0 kubenswrapper[27820]: I0320 08:52:26.036318 27820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://4d2ab3e2e2aad56bc3b150570678f8cd6a92538b5995dd1769b45ca1ca4842ed" gracePeriod=15
Mar 20 08:52:26.036401 master-0 kubenswrapper[27820]: E0320 08:52:26.036362 27820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-cert-regeneration-controller"
Mar 20 08:52:26.036401 master-0 kubenswrapper[27820]: I0320 08:52:26.036382 27820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-cert-regeneration-controller"
Mar 20 08:52:26.036401 master-0 kubenswrapper[27820]: E0320 08:52:26.036396 27820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="setup"
Mar 20 08:52:26.036487 master-0 kubenswrapper[27820]: I0320 08:52:26.036405 27820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="setup"
Mar 20 08:52:26.036487 master-0 kubenswrapper[27820]: E0320 08:52:26.036417 27820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-check-endpoints"
Mar 20 08:52:26.036487 master-0 kubenswrapper[27820]: I0320 08:52:26.036422 27820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-check-endpoints"
Mar 20 08:52:26.036487 master-0 kubenswrapper[27820]: E0320 08:52:26.036447 27820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver"
Mar 20 08:52:26.036487 master-0 kubenswrapper[27820]: I0320 08:52:26.036453 27820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver"
Mar 20 08:52:26.036487 master-0 kubenswrapper[27820]: E0320 08:52:26.036467 27820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-insecure-readyz"
Mar 20 08:52:26.036487 master-0 kubenswrapper[27820]: I0320 08:52:26.036473 27820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-insecure-readyz"
Mar 20 08:52:26.036487 master-0 kubenswrapper[27820]: E0320 08:52:26.036483 27820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-check-endpoints"
Mar 20 08:52:26.036487 master-0 kubenswrapper[27820]: I0320 08:52:26.036489 27820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-check-endpoints"
Mar 20 08:52:26.036757 master-0 kubenswrapper[27820]: E0320 08:52:26.036498 27820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-cert-syncer"
Mar 20 08:52:26.036757 master-0 kubenswrapper[27820]: I0320 08:52:26.036504 27820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-cert-syncer"
Mar 20 08:52:26.036757 master-0 kubenswrapper[27820]: I0320 08:52:26.036614 27820 memory_manager.go:354] "RemoveStaleState removing state" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-check-endpoints"
Mar 20 08:52:26.036757 master-0 kubenswrapper[27820]: I0320 08:52:26.036623 27820 memory_manager.go:354] "RemoveStaleState removing state" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-check-endpoints"
Mar 20 08:52:26.036757 master-0 kubenswrapper[27820]: I0320 08:52:26.036635 27820 memory_manager.go:354] "RemoveStaleState removing state" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver"
Mar 20 08:52:26.036757 master-0 kubenswrapper[27820]: I0320 08:52:26.036642 27820 memory_manager.go:354] "RemoveStaleState removing state" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-insecure-readyz"
Mar 20 08:52:26.036757 master-0 kubenswrapper[27820]: I0320 08:52:26.036657 27820 memory_manager.go:354] "RemoveStaleState removing state" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-cert-syncer"
Mar 20 08:52:26.036757 master-0 kubenswrapper[27820]: I0320 08:52:26.036668 27820 memory_manager.go:354] "RemoveStaleState removing state" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-cert-regeneration-controller"
Mar 20 08:52:26.036757 master-0 kubenswrapper[27820]: I0320 08:52:26.036663 27820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-cert-syncer" containerID="cri-o://51f1e4dcfe0151d1f4fbba6c863fbca95761008bf8818dbf6e90c138860dd17c" gracePeriod=15
Mar 20 08:52:26.036757 master-0 kubenswrapper[27820]: I0320 08:52:26.036240 27820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-check-endpoints" containerID="cri-o://8aa7f076c78c158fb353870ba03c7a822914a599c1508dedb9462be2e93e60ca" gracePeriod=15
Mar 20 08:52:26.037914 master-0 kubenswrapper[27820]: I0320 08:52:26.037452 27820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://1465b5735787fe0ebf93f35e9b67fbaf114c0045019bf4f5aac3e0ae5fa40cfd" gracePeriod=15
Mar 20 08:52:26.123918 master-0 kubenswrapper[27820]: I0320 08:52:26.123681 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d5f502b117c7c8479f7f20848a50fec0-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"d5f502b117c7c8479f7f20848a50fec0\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 20 08:52:26.123918 master-0 kubenswrapper[27820]: I0320 08:52:26.123768 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"85632c1cec8974aa874834e4cfff4c77\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 20 08:52:26.123918 master-0 kubenswrapper[27820]: I0320 08:52:26.123888 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/d5f502b117c7c8479f7f20848a50fec0-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"d5f502b117c7c8479f7f20848a50fec0\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 20 08:52:26.124086 master-0 kubenswrapper[27820]: I0320 08:52:26.123945 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/d5f502b117c7c8479f7f20848a50fec0-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"d5f502b117c7c8479f7f20848a50fec0\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 20 08:52:26.124086 master-0 kubenswrapper[27820]: I0320 08:52:26.123976 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"85632c1cec8974aa874834e4cfff4c77\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 20 08:52:26.124086 master-0 kubenswrapper[27820]: I0320 08:52:26.124032 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"85632c1cec8974aa874834e4cfff4c77\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 20 08:52:26.124086 master-0 kubenswrapper[27820]: I0320 08:52:26.124078 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"85632c1cec8974aa874834e4cfff4c77\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 20 08:52:26.124208 master-0 kubenswrapper[27820]: I0320 08:52:26.124101 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"85632c1cec8974aa874834e4cfff4c77\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 20 08:52:26.224574 master-0 kubenswrapper[27820]: I0320 08:52:26.224504 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d5f502b117c7c8479f7f20848a50fec0-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"d5f502b117c7c8479f7f20848a50fec0\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 20 08:52:26.225552 master-0 kubenswrapper[27820]: I0320 08:52:26.224614 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d5f502b117c7c8479f7f20848a50fec0-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"d5f502b117c7c8479f7f20848a50fec0\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 20 08:52:26.225634 master-0 kubenswrapper[27820]: I0320 08:52:26.224798 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"85632c1cec8974aa874834e4cfff4c77\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 20 08:52:26.225690 master-0 kubenswrapper[27820]: I0320 08:52:26.224568 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"85632c1cec8974aa874834e4cfff4c77\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 20 08:52:26.225823 master-0 kubenswrapper[27820]: I0320 08:52:26.225789 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/d5f502b117c7c8479f7f20848a50fec0-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"d5f502b117c7c8479f7f20848a50fec0\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 20 08:52:26.225879 master-0 kubenswrapper[27820]: I0320 08:52:26.225855 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/d5f502b117c7c8479f7f20848a50fec0-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"d5f502b117c7c8479f7f20848a50fec0\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 20 08:52:26.225925 master-0 kubenswrapper[27820]: I0320 08:52:26.225907 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"85632c1cec8974aa874834e4cfff4c77\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 20 08:52:26.225971 master-0 kubenswrapper[27820]: I0320 08:52:26.225944 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"85632c1cec8974aa874834e4cfff4c77\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 20 08:52:26.226044 master-0 kubenswrapper[27820]: I0320 08:52:26.226024 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"85632c1cec8974aa874834e4cfff4c77\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 20 08:52:26.226093 master-0 kubenswrapper[27820]: I0320 08:52:26.226065 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"85632c1cec8974aa874834e4cfff4c77\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 20 08:52:26.226367 master-0 kubenswrapper[27820]: I0320 08:52:26.226331 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"85632c1cec8974aa874834e4cfff4c77\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 20 08:52:26.226435 master-0 kubenswrapper[27820]: I0320 08:52:26.226393 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/d5f502b117c7c8479f7f20848a50fec0-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"d5f502b117c7c8479f7f20848a50fec0\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 20 08:52:26.226481 master-0 kubenswrapper[27820]: I0320 08:52:26.226433 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/d5f502b117c7c8479f7f20848a50fec0-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"d5f502b117c7c8479f7f20848a50fec0\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 20 08:52:26.226532 master-0 kubenswrapper[27820]: I0320 08:52:26.226476 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"85632c1cec8974aa874834e4cfff4c77\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 20 08:52:26.226532 master-0 kubenswrapper[27820]: I0320 08:52:26.226516 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"85632c1cec8974aa874834e4cfff4c77\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 20 08:52:26.226619 master-0 kubenswrapper[27820]: I0320 08:52:26.226562 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"85632c1cec8974aa874834e4cfff4c77\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 20 08:52:26.683094 master-0 kubenswrapper[27820]: I0320 08:52:26.683024 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_b45ea2ef1cf2bc9d1d994d6538ae0a64/kube-apiserver-check-endpoints/0.log"
Mar 20 08:52:26.684700 master-0 kubenswrapper[27820]: I0320 08:52:26.684649 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_b45ea2ef1cf2bc9d1d994d6538ae0a64/kube-apiserver-cert-syncer/0.log"
Mar 20 08:52:26.685441 master-0 kubenswrapper[27820]: I0320 08:52:26.685386 27820 generic.go:334] "Generic (PLEG): container finished" podID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerID="8aa7f076c78c158fb353870ba03c7a822914a599c1508dedb9462be2e93e60ca" exitCode=0
Mar 20 08:52:26.685441 master-0 kubenswrapper[27820]: I0320 08:52:26.685422 27820 generic.go:334] "Generic (PLEG): container finished" podID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerID="4d2ab3e2e2aad56bc3b150570678f8cd6a92538b5995dd1769b45ca1ca4842ed" exitCode=0
Mar 20 08:52:26.685441 master-0 kubenswrapper[27820]: I0320 08:52:26.685432 27820 generic.go:334] "Generic (PLEG): container finished" podID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerID="1465b5735787fe0ebf93f35e9b67fbaf114c0045019bf4f5aac3e0ae5fa40cfd" exitCode=0
Mar 20 08:52:26.685441 master-0 kubenswrapper[27820]: I0320 08:52:26.685441 27820 generic.go:334] "Generic (PLEG): container finished" podID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerID="51f1e4dcfe0151d1f4fbba6c863fbca95761008bf8818dbf6e90c138860dd17c" exitCode=2
Mar 20 08:52:26.685888 master-0 kubenswrapper[27820]: I0320 08:52:26.685494 27820 scope.go:117] "RemoveContainer" containerID="4af0d14e2080acfab9b4be1c21f5c397bb2b57510a3ab1d14b3ae883125de902"
Mar 20 08:52:27.699126 master-0 kubenswrapper[27820]: I0320 08:52:27.699084 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_b45ea2ef1cf2bc9d1d994d6538ae0a64/kube-apiserver-cert-syncer/0.log"
Mar 20 08:52:31.128234 master-0 kubenswrapper[27820]: E0320 08:52:31.128146 27820 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 20 08:52:31.129145 master-0 kubenswrapper[27820]: I0320 08:52:31.128763 27820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 20 08:52:31.164935 master-0 kubenswrapper[27820]: W0320 08:52:31.164858 27820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod85632c1cec8974aa874834e4cfff4c77.slice/crio-e9980d7a3ee0d2645d9f92d8e78e630a6a38e60626d526b70b0f5a6d898ca606 WatchSource:0}: Error finding container e9980d7a3ee0d2645d9f92d8e78e630a6a38e60626d526b70b0f5a6d898ca606: Status 404 returned error can't find the container with id e9980d7a3ee0d2645d9f92d8e78e630a6a38e60626d526b70b0f5a6d898ca606
Mar 20 08:52:31.169012 master-0 kubenswrapper[27820]: E0320 08:52:31.168794 27820 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-master-0.189e80a56a0bc3aa openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-master-0,UID:85632c1cec8974aa874834e4cfff4c77,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 08:52:31.167669162 +0000 UTC m=+161.262878336,LastTimestamp:2026-03-20 08:52:31.167669162 +0000 UTC m=+161.262878336,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 20 08:52:31.734114 master-0 kubenswrapper[27820]: I0320 08:52:31.733908 27820 generic.go:334] "Generic (PLEG): container finished" podID="78ae02f0-5d31-4fda-a63a-534f60df5d1f" containerID="870faed07a22d4978e0ed50111cb06dde9ad2c0dca51e752f685fc10dd88324d" exitCode=0
Mar 20 08:52:31.734114 master-0 kubenswrapper[27820]: I0320 08:52:31.734008 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"78ae02f0-5d31-4fda-a63a-534f60df5d1f","Type":"ContainerDied","Data":"870faed07a22d4978e0ed50111cb06dde9ad2c0dca51e752f685fc10dd88324d"}
Mar 20 08:52:31.735758 master-0 kubenswrapper[27820]: I0320 08:52:31.735668 27820 status_manager.go:851] "Failed to get status for pod" podUID="78ae02f0-5d31-4fda-a63a-534f60df5d1f" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 20 08:52:31.737061 master-0 kubenswrapper[27820]: I0320 08:52:31.736955 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"85632c1cec8974aa874834e4cfff4c77","Type":"ContainerStarted","Data":"35795f43edc3459fb6a4d087c0ac4ae7696da7eaed52afbaa56c06e3eb073e8d"}
Mar 20 08:52:31.737061 master-0 kubenswrapper[27820]: I0320 08:52:31.737027 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"85632c1cec8974aa874834e4cfff4c77","Type":"ContainerStarted","Data":"e9980d7a3ee0d2645d9f92d8e78e630a6a38e60626d526b70b0f5a6d898ca606"}
Mar 20 08:52:31.738634 master-0 kubenswrapper[27820]: E0320 08:52:31.738450 27820 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 20 08:52:31.738634 master-0 kubenswrapper[27820]: I0320 08:52:31.738566 27820 status_manager.go:851] "Failed to get status for pod" podUID="78ae02f0-5d31-4fda-a63a-534f60df5d1f" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 20 08:52:33.164386 master-0 kubenswrapper[27820]: I0320 08:52:33.164332 27820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0"
Mar 20 08:52:33.166794 master-0 kubenswrapper[27820]: I0320 08:52:33.166738 27820 status_manager.go:851] "Failed to get status for pod" podUID="78ae02f0-5d31-4fda-a63a-534f60df5d1f" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 20 08:52:33.268347 master-0 kubenswrapper[27820]: I0320 08:52:33.266118 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/78ae02f0-5d31-4fda-a63a-534f60df5d1f-kubelet-dir\") pod \"78ae02f0-5d31-4fda-a63a-534f60df5d1f\" (UID: \"78ae02f0-5d31-4fda-a63a-534f60df5d1f\") "
Mar 20 08:52:33.268347 master-0 kubenswrapper[27820]: I0320 08:52:33.266214 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/78ae02f0-5d31-4fda-a63a-534f60df5d1f-kube-api-access\") pod \"78ae02f0-5d31-4fda-a63a-534f60df5d1f\" (UID: \"78ae02f0-5d31-4fda-a63a-534f60df5d1f\") "
Mar 20 08:52:33.268347 master-0 kubenswrapper[27820]: I0320 08:52:33.266241 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/78ae02f0-5d31-4fda-a63a-534f60df5d1f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "78ae02f0-5d31-4fda-a63a-534f60df5d1f" (UID: "78ae02f0-5d31-4fda-a63a-534f60df5d1f"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 20 08:52:33.268347 master-0 kubenswrapper[27820]: I0320 08:52:33.266289 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/78ae02f0-5d31-4fda-a63a-534f60df5d1f-var-lock\") pod \"78ae02f0-5d31-4fda-a63a-534f60df5d1f\" (UID: \"78ae02f0-5d31-4fda-a63a-534f60df5d1f\") "
Mar 20 08:52:33.268347 master-0 kubenswrapper[27820]: I0320 08:52:33.266331 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/78ae02f0-5d31-4fda-a63a-534f60df5d1f-var-lock" (OuterVolumeSpecName: "var-lock") pod "78ae02f0-5d31-4fda-a63a-534f60df5d1f" (UID: "78ae02f0-5d31-4fda-a63a-534f60df5d1f"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 20 08:52:33.268347 master-0 kubenswrapper[27820]: I0320 08:52:33.266837 27820 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/78ae02f0-5d31-4fda-a63a-534f60df5d1f-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 20 08:52:33.268347 master-0 kubenswrapper[27820]: I0320 08:52:33.266858 27820 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/78ae02f0-5d31-4fda-a63a-534f60df5d1f-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 20 08:52:33.269100 master-0 kubenswrapper[27820]: I0320 08:52:33.269031 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78ae02f0-5d31-4fda-a63a-534f60df5d1f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "78ae02f0-5d31-4fda-a63a-534f60df5d1f" (UID: "78ae02f0-5d31-4fda-a63a-534f60df5d1f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:52:33.368349 master-0 kubenswrapper[27820]: I0320 08:52:33.368134 27820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/78ae02f0-5d31-4fda-a63a-534f60df5d1f-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 20 08:52:33.408957 master-0 kubenswrapper[27820]: I0320 08:52:33.408904 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_b45ea2ef1cf2bc9d1d994d6538ae0a64/kube-apiserver-cert-syncer/0.log"
Mar 20 08:52:33.409721 master-0 kubenswrapper[27820]: I0320 08:52:33.409681 27820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 20 08:52:33.411878 master-0 kubenswrapper[27820]: I0320 08:52:33.411820 27820 status_manager.go:851] "Failed to get status for pod" podUID="78ae02f0-5d31-4fda-a63a-534f60df5d1f" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 20 08:52:33.412570 master-0 kubenswrapper[27820]: I0320 08:52:33.412517 27820 status_manager.go:851] "Failed to get status for pod" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 20 08:52:33.570683 master-0 kubenswrapper[27820]: I0320 08:52:33.570537 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-audit-dir\") pod \"b45ea2ef1cf2bc9d1d994d6538ae0a64\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") "
Mar 20 08:52:33.570683 master-0 kubenswrapper[27820]: I0320 08:52:33.570631 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-cert-dir\") pod \"b45ea2ef1cf2bc9d1d994d6538ae0a64\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") "
Mar 20 08:52:33.570683 master-0 kubenswrapper[27820]: I0320 08:52:33.570656 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-resource-dir\") pod \"b45ea2ef1cf2bc9d1d994d6538ae0a64\" (UID: \"b45ea2ef1cf2bc9d1d994d6538ae0a64\") "
Mar 20 08:52:33.571161 master-0 kubenswrapper[27820]: I0320 08:52:33.571093 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "b45ea2ef1cf2bc9d1d994d6538ae0a64" (UID: "b45ea2ef1cf2bc9d1d994d6538ae0a64"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 20 08:52:33.571364 master-0 kubenswrapper[27820]: I0320 08:52:33.571077 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "b45ea2ef1cf2bc9d1d994d6538ae0a64" (UID: "b45ea2ef1cf2bc9d1d994d6538ae0a64"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 20 08:52:33.571364 master-0 kubenswrapper[27820]: I0320 08:52:33.571155 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "b45ea2ef1cf2bc9d1d994d6538ae0a64" (UID: "b45ea2ef1cf2bc9d1d994d6538ae0a64"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 20 08:52:33.672740 master-0 kubenswrapper[27820]: I0320 08:52:33.672641 27820 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-audit-dir\") on node \"master-0\" DevicePath \"\""
Mar 20 08:52:33.672740 master-0 kubenswrapper[27820]: I0320 08:52:33.672707 27820 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-cert-dir\") on node \"master-0\" DevicePath \"\""
Mar 20 08:52:33.672740 master-0 kubenswrapper[27820]: I0320 08:52:33.672726 27820 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/b45ea2ef1cf2bc9d1d994d6538ae0a64-resource-dir\") on node \"master-0\" DevicePath \"\""
Mar 20 08:52:33.758024 master-0 kubenswrapper[27820]: I0320 08:52:33.757956 27820 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0" Mar 20 08:52:33.758221 master-0 kubenswrapper[27820]: I0320 08:52:33.757995 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"78ae02f0-5d31-4fda-a63a-534f60df5d1f","Type":"ContainerDied","Data":"4a237f516af96060c766d77c95a19c46f9ec0e542ddf09d3d4a7b89e62d71fa6"} Mar 20 08:52:33.758221 master-0 kubenswrapper[27820]: I0320 08:52:33.758095 27820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a237f516af96060c766d77c95a19c46f9ec0e542ddf09d3d4a7b89e62d71fa6" Mar 20 08:52:33.762023 master-0 kubenswrapper[27820]: I0320 08:52:33.761974 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_b45ea2ef1cf2bc9d1d994d6538ae0a64/kube-apiserver-cert-syncer/0.log" Mar 20 08:52:33.763019 master-0 kubenswrapper[27820]: I0320 08:52:33.762969 27820 generic.go:334] "Generic (PLEG): container finished" podID="b45ea2ef1cf2bc9d1d994d6538ae0a64" containerID="9dadbe10b12b1a20a0d3ab6712abcd8f210f2db7035cefe893d40772ad30265f" exitCode=0 Mar 20 08:52:33.763094 master-0 kubenswrapper[27820]: I0320 08:52:33.763058 27820 scope.go:117] "RemoveContainer" containerID="8aa7f076c78c158fb353870ba03c7a822914a599c1508dedb9462be2e93e60ca" Mar 20 08:52:33.763181 master-0 kubenswrapper[27820]: I0320 08:52:33.763130 27820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 20 08:52:33.775409 master-0 kubenswrapper[27820]: I0320 08:52:33.775350 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/75cef5aa-93e6-4b8b-9ab1-06809e85883a-kube-api-access\") pod \"installer-3-retry-1-master-0\" (UID: \"75cef5aa-93e6-4b8b-9ab1-06809e85883a\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Mar 20 08:52:33.775663 master-0 kubenswrapper[27820]: E0320 08:52:33.775630 27820 projected.go:288] Couldn't get configMap openshift-kube-controller-manager/kube-root-ca.crt: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Mar 20 08:52:33.775712 master-0 kubenswrapper[27820]: E0320 08:52:33.775673 27820 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager/installer-3-retry-1-master-0: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Mar 20 08:52:33.775778 master-0 kubenswrapper[27820]: E0320 08:52:33.775747 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/75cef5aa-93e6-4b8b-9ab1-06809e85883a-kube-api-access podName:75cef5aa-93e6-4b8b-9ab1-06809e85883a nodeName:}" failed. No retries permitted until 2026-03-20 08:54:35.775714266 +0000 UTC m=+285.870923420 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/75cef5aa-93e6-4b8b-9ab1-06809e85883a-kube-api-access") pod "installer-3-retry-1-master-0" (UID: "75cef5aa-93e6-4b8b-9ab1-06809e85883a") : object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Mar 20 08:52:33.783856 master-0 kubenswrapper[27820]: I0320 08:52:33.783782 27820 status_manager.go:851] "Failed to get status for pod" podUID="78ae02f0-5d31-4fda-a63a-534f60df5d1f" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 20 08:52:33.784613 master-0 kubenswrapper[27820]: I0320 08:52:33.784563 27820 status_manager.go:851] "Failed to get status for pod" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 20 08:52:33.795755 master-0 kubenswrapper[27820]: I0320 08:52:33.795698 27820 scope.go:117] "RemoveContainer" containerID="4d2ab3e2e2aad56bc3b150570678f8cd6a92538b5995dd1769b45ca1ca4842ed" Mar 20 08:52:33.802360 master-0 kubenswrapper[27820]: I0320 08:52:33.802299 27820 status_manager.go:851] "Failed to get status for pod" podUID="78ae02f0-5d31-4fda-a63a-534f60df5d1f" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 20 08:52:33.802958 master-0 kubenswrapper[27820]: I0320 08:52:33.802916 27820 status_manager.go:851] "Failed to get status for pod" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 20 08:52:33.819094 master-0 kubenswrapper[27820]: I0320 08:52:33.819067 27820 scope.go:117] "RemoveContainer" containerID="1465b5735787fe0ebf93f35e9b67fbaf114c0045019bf4f5aac3e0ae5fa40cfd" Mar 20 08:52:33.837927 master-0 kubenswrapper[27820]: I0320 08:52:33.837875 27820 scope.go:117] "RemoveContainer" containerID="51f1e4dcfe0151d1f4fbba6c863fbca95761008bf8818dbf6e90c138860dd17c" Mar 20 08:52:33.866821 master-0 kubenswrapper[27820]: I0320 08:52:33.866784 27820 scope.go:117] "RemoveContainer" containerID="9dadbe10b12b1a20a0d3ab6712abcd8f210f2db7035cefe893d40772ad30265f" Mar 20 08:52:33.891431 master-0 kubenswrapper[27820]: I0320 08:52:33.891375 27820 scope.go:117] "RemoveContainer" containerID="019791132b7e834bce74e8041e55c6a38294bf9b8b3e87df50daf0d5b306fb4e" Mar 20 08:52:33.919660 master-0 kubenswrapper[27820]: I0320 08:52:33.919609 27820 scope.go:117] "RemoveContainer" containerID="8aa7f076c78c158fb353870ba03c7a822914a599c1508dedb9462be2e93e60ca" Mar 20 08:52:33.920064 master-0 kubenswrapper[27820]: E0320 08:52:33.920029 27820 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T08:52:33Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T08:52:33Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T08:52:33Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T08:52:33Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 20 08:52:33.920189 master-0 kubenswrapper[27820]: E0320 08:52:33.920051 27820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8aa7f076c78c158fb353870ba03c7a822914a599c1508dedb9462be2e93e60ca\": container with ID starting with 8aa7f076c78c158fb353870ba03c7a822914a599c1508dedb9462be2e93e60ca not found: ID does not exist" containerID="8aa7f076c78c158fb353870ba03c7a822914a599c1508dedb9462be2e93e60ca" Mar 20 08:52:33.920326 master-0 kubenswrapper[27820]: I0320 08:52:33.920290 27820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8aa7f076c78c158fb353870ba03c7a822914a599c1508dedb9462be2e93e60ca"} err="failed to get container status \"8aa7f076c78c158fb353870ba03c7a822914a599c1508dedb9462be2e93e60ca\": rpc error: code = NotFound desc = could not find container 
\"8aa7f076c78c158fb353870ba03c7a822914a599c1508dedb9462be2e93e60ca\": container with ID starting with 8aa7f076c78c158fb353870ba03c7a822914a599c1508dedb9462be2e93e60ca not found: ID does not exist" Mar 20 08:52:33.920427 master-0 kubenswrapper[27820]: I0320 08:52:33.920411 27820 scope.go:117] "RemoveContainer" containerID="4d2ab3e2e2aad56bc3b150570678f8cd6a92538b5995dd1769b45ca1ca4842ed" Mar 20 08:52:33.920908 master-0 kubenswrapper[27820]: E0320 08:52:33.920859 27820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d2ab3e2e2aad56bc3b150570678f8cd6a92538b5995dd1769b45ca1ca4842ed\": container with ID starting with 4d2ab3e2e2aad56bc3b150570678f8cd6a92538b5995dd1769b45ca1ca4842ed not found: ID does not exist" containerID="4d2ab3e2e2aad56bc3b150570678f8cd6a92538b5995dd1769b45ca1ca4842ed" Mar 20 08:52:33.921098 master-0 kubenswrapper[27820]: I0320 08:52:33.921028 27820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d2ab3e2e2aad56bc3b150570678f8cd6a92538b5995dd1769b45ca1ca4842ed"} err="failed to get container status \"4d2ab3e2e2aad56bc3b150570678f8cd6a92538b5995dd1769b45ca1ca4842ed\": rpc error: code = NotFound desc = could not find container \"4d2ab3e2e2aad56bc3b150570678f8cd6a92538b5995dd1769b45ca1ca4842ed\": container with ID starting with 4d2ab3e2e2aad56bc3b150570678f8cd6a92538b5995dd1769b45ca1ca4842ed not found: ID does not exist" Mar 20 08:52:33.921164 master-0 kubenswrapper[27820]: I0320 08:52:33.921095 27820 scope.go:117] "RemoveContainer" containerID="1465b5735787fe0ebf93f35e9b67fbaf114c0045019bf4f5aac3e0ae5fa40cfd" Mar 20 08:52:33.921302 master-0 kubenswrapper[27820]: E0320 08:52:33.921240 27820 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" 
Mar 20 08:52:33.921457 master-0 kubenswrapper[27820]: E0320 08:52:33.921425 27820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1465b5735787fe0ebf93f35e9b67fbaf114c0045019bf4f5aac3e0ae5fa40cfd\": container with ID starting with 1465b5735787fe0ebf93f35e9b67fbaf114c0045019bf4f5aac3e0ae5fa40cfd not found: ID does not exist" containerID="1465b5735787fe0ebf93f35e9b67fbaf114c0045019bf4f5aac3e0ae5fa40cfd" Mar 20 08:52:33.921524 master-0 kubenswrapper[27820]: I0320 08:52:33.921460 27820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1465b5735787fe0ebf93f35e9b67fbaf114c0045019bf4f5aac3e0ae5fa40cfd"} err="failed to get container status \"1465b5735787fe0ebf93f35e9b67fbaf114c0045019bf4f5aac3e0ae5fa40cfd\": rpc error: code = NotFound desc = could not find container \"1465b5735787fe0ebf93f35e9b67fbaf114c0045019bf4f5aac3e0ae5fa40cfd\": container with ID starting with 1465b5735787fe0ebf93f35e9b67fbaf114c0045019bf4f5aac3e0ae5fa40cfd not found: ID does not exist" Mar 20 08:52:33.921524 master-0 kubenswrapper[27820]: I0320 08:52:33.921480 27820 scope.go:117] "RemoveContainer" containerID="51f1e4dcfe0151d1f4fbba6c863fbca95761008bf8818dbf6e90c138860dd17c" Mar 20 08:52:33.921920 master-0 kubenswrapper[27820]: E0320 08:52:33.921892 27820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51f1e4dcfe0151d1f4fbba6c863fbca95761008bf8818dbf6e90c138860dd17c\": container with ID starting with 51f1e4dcfe0151d1f4fbba6c863fbca95761008bf8818dbf6e90c138860dd17c not found: ID does not exist" containerID="51f1e4dcfe0151d1f4fbba6c863fbca95761008bf8818dbf6e90c138860dd17c" Mar 20 08:52:33.922032 master-0 kubenswrapper[27820]: I0320 08:52:33.922008 27820 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"51f1e4dcfe0151d1f4fbba6c863fbca95761008bf8818dbf6e90c138860dd17c"} err="failed to get container status \"51f1e4dcfe0151d1f4fbba6c863fbca95761008bf8818dbf6e90c138860dd17c\": rpc error: code = NotFound desc = could not find container \"51f1e4dcfe0151d1f4fbba6c863fbca95761008bf8818dbf6e90c138860dd17c\": container with ID starting with 51f1e4dcfe0151d1f4fbba6c863fbca95761008bf8818dbf6e90c138860dd17c not found: ID does not exist" Mar 20 08:52:33.922120 master-0 kubenswrapper[27820]: I0320 08:52:33.922105 27820 scope.go:117] "RemoveContainer" containerID="9dadbe10b12b1a20a0d3ab6712abcd8f210f2db7035cefe893d40772ad30265f" Mar 20 08:52:33.923615 master-0 kubenswrapper[27820]: E0320 08:52:33.923587 27820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9dadbe10b12b1a20a0d3ab6712abcd8f210f2db7035cefe893d40772ad30265f\": container with ID starting with 9dadbe10b12b1a20a0d3ab6712abcd8f210f2db7035cefe893d40772ad30265f not found: ID does not exist" containerID="9dadbe10b12b1a20a0d3ab6712abcd8f210f2db7035cefe893d40772ad30265f" Mar 20 08:52:33.923696 master-0 kubenswrapper[27820]: I0320 08:52:33.923614 27820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9dadbe10b12b1a20a0d3ab6712abcd8f210f2db7035cefe893d40772ad30265f"} err="failed to get container status \"9dadbe10b12b1a20a0d3ab6712abcd8f210f2db7035cefe893d40772ad30265f\": rpc error: code = NotFound desc = could not find container \"9dadbe10b12b1a20a0d3ab6712abcd8f210f2db7035cefe893d40772ad30265f\": container with ID starting with 9dadbe10b12b1a20a0d3ab6712abcd8f210f2db7035cefe893d40772ad30265f not found: ID does not exist" Mar 20 08:52:33.923696 master-0 kubenswrapper[27820]: I0320 08:52:33.923633 27820 scope.go:117] "RemoveContainer" containerID="019791132b7e834bce74e8041e55c6a38294bf9b8b3e87df50daf0d5b306fb4e" Mar 20 08:52:33.923882 master-0 kubenswrapper[27820]: E0320 
08:52:33.923841 27820 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 20 08:52:33.923984 master-0 kubenswrapper[27820]: E0320 08:52:33.923944 27820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"019791132b7e834bce74e8041e55c6a38294bf9b8b3e87df50daf0d5b306fb4e\": container with ID starting with 019791132b7e834bce74e8041e55c6a38294bf9b8b3e87df50daf0d5b306fb4e not found: ID does not exist" containerID="019791132b7e834bce74e8041e55c6a38294bf9b8b3e87df50daf0d5b306fb4e" Mar 20 08:52:33.924044 master-0 kubenswrapper[27820]: I0320 08:52:33.923978 27820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"019791132b7e834bce74e8041e55c6a38294bf9b8b3e87df50daf0d5b306fb4e"} err="failed to get container status \"019791132b7e834bce74e8041e55c6a38294bf9b8b3e87df50daf0d5b306fb4e\": rpc error: code = NotFound desc = could not find container \"019791132b7e834bce74e8041e55c6a38294bf9b8b3e87df50daf0d5b306fb4e\": container with ID starting with 019791132b7e834bce74e8041e55c6a38294bf9b8b3e87df50daf0d5b306fb4e not found: ID does not exist" Mar 20 08:52:33.924607 master-0 kubenswrapper[27820]: E0320 08:52:33.924575 27820 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 20 08:52:33.926138 master-0 kubenswrapper[27820]: E0320 08:52:33.926021 27820 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: 
connection refused" Mar 20 08:52:33.926138 master-0 kubenswrapper[27820]: E0320 08:52:33.926064 27820 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 20 08:52:34.084949 master-0 kubenswrapper[27820]: I0320 08:52:34.084893 27820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b45ea2ef1cf2bc9d1d994d6538ae0a64" path="/var/lib/kubelet/pods/b45ea2ef1cf2bc9d1d994d6538ae0a64/volumes" Mar 20 08:52:34.273493 master-0 kubenswrapper[27820]: E0320 08:52:34.272938 27820 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-master-0.189e80a56a0bc3aa openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-master-0,UID:85632c1cec8974aa874834e4cfff4c77,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c5ce3d1134d6500e2b8528516c1889d7bbc6259aba4981c6983395b0e9eeff65\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-20 08:52:31.167669162 +0000 UTC m=+161.262878336,LastTimestamp:2026-03-20 08:52:31.167669162 +0000 UTC m=+161.262878336,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 20 08:52:35.090447 master-0 kubenswrapper[27820]: E0320 08:52:35.090373 27820 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: 
connection refused" Mar 20 08:52:35.091209 master-0 kubenswrapper[27820]: E0320 08:52:35.091151 27820 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 20 08:52:35.092035 master-0 kubenswrapper[27820]: E0320 08:52:35.091999 27820 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 20 08:52:35.093220 master-0 kubenswrapper[27820]: E0320 08:52:35.093135 27820 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 20 08:52:35.094178 master-0 kubenswrapper[27820]: E0320 08:52:35.094108 27820 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 20 08:52:35.094390 master-0 kubenswrapper[27820]: I0320 08:52:35.094297 27820 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Mar 20 08:52:35.095051 master-0 kubenswrapper[27820]: E0320 08:52:35.095004 27820 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="200ms" Mar 20 08:52:35.296421 master-0 kubenswrapper[27820]: E0320 08:52:35.296330 27820 controller.go:145] "Failed to ensure lease exists, 
will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms" Mar 20 08:52:35.698970 master-0 kubenswrapper[27820]: E0320 08:52:35.698857 27820 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms" Mar 20 08:52:36.501111 master-0 kubenswrapper[27820]: E0320 08:52:36.501045 27820 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="1.6s" Mar 20 08:52:38.106486 master-0 kubenswrapper[27820]: E0320 08:52:38.104301 27820 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="3.2s" Mar 20 08:52:39.074956 master-0 kubenswrapper[27820]: I0320 08:52:39.074778 27820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 20 08:52:39.077903 master-0 kubenswrapper[27820]: I0320 08:52:39.077818 27820 status_manager.go:851] "Failed to get status for pod" podUID="78ae02f0-5d31-4fda-a63a-534f60df5d1f" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 20 08:52:39.110774 master-0 kubenswrapper[27820]: I0320 08:52:39.110693 27820 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="f51cc77e-b3e3-403b-8704-1e72c94d715e" Mar 20 08:52:39.110774 master-0 kubenswrapper[27820]: I0320 08:52:39.110756 27820 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="f51cc77e-b3e3-403b-8704-1e72c94d715e" Mar 20 08:52:39.111932 master-0 kubenswrapper[27820]: E0320 08:52:39.111855 27820 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 20 08:52:39.112766 master-0 kubenswrapper[27820]: I0320 08:52:39.112698 27820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 20 08:52:39.148857 master-0 kubenswrapper[27820]: W0320 08:52:39.147773 27820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd5f502b117c7c8479f7f20848a50fec0.slice/crio-7110d4c8b6268743be72648931129ec42b4f2f5fe4acc316543c4ad50cd2d8ac WatchSource:0}: Error finding container 7110d4c8b6268743be72648931129ec42b4f2f5fe4acc316543c4ad50cd2d8ac: Status 404 returned error can't find the container with id 7110d4c8b6268743be72648931129ec42b4f2f5fe4acc316543c4ad50cd2d8ac Mar 20 08:52:39.816900 master-0 kubenswrapper[27820]: I0320 08:52:39.816826 27820 generic.go:334] "Generic (PLEG): container finished" podID="d5f502b117c7c8479f7f20848a50fec0" containerID="8a99c1f1ef23cb35a45dfce1f4c6a7e97fa8167be78101cb828ed664df9ebf75" exitCode=0 Mar 20 08:52:39.816900 master-0 kubenswrapper[27820]: I0320 08:52:39.816892 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"d5f502b117c7c8479f7f20848a50fec0","Type":"ContainerDied","Data":"8a99c1f1ef23cb35a45dfce1f4c6a7e97fa8167be78101cb828ed664df9ebf75"} Mar 20 08:52:39.817316 master-0 kubenswrapper[27820]: I0320 08:52:39.816941 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"d5f502b117c7c8479f7f20848a50fec0","Type":"ContainerStarted","Data":"7110d4c8b6268743be72648931129ec42b4f2f5fe4acc316543c4ad50cd2d8ac"} Mar 20 08:52:39.817402 master-0 kubenswrapper[27820]: I0320 08:52:39.817380 27820 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="f51cc77e-b3e3-403b-8704-1e72c94d715e" Mar 20 08:52:39.817468 master-0 kubenswrapper[27820]: I0320 08:52:39.817403 27820 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" 
podUID="f51cc77e-b3e3-403b-8704-1e72c94d715e" Mar 20 08:52:39.818797 master-0 kubenswrapper[27820]: I0320 08:52:39.818365 27820 status_manager.go:851] "Failed to get status for pod" podUID="78ae02f0-5d31-4fda-a63a-534f60df5d1f" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 20 08:52:39.818797 master-0 kubenswrapper[27820]: E0320 08:52:39.818384 27820 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 20 08:52:40.083954 master-0 kubenswrapper[27820]: I0320 08:52:40.083790 27820 status_manager.go:851] "Failed to get status for pod" podUID="78ae02f0-5d31-4fda-a63a-534f60df5d1f" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 20 08:52:40.084788 master-0 kubenswrapper[27820]: I0320 08:52:40.084725 27820 status_manager.go:851] "Failed to get status for pod" podUID="d5f502b117c7c8479f7f20848a50fec0" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 20 08:52:40.841670 master-0 kubenswrapper[27820]: I0320 08:52:40.841612 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" 
event={"ID":"d5f502b117c7c8479f7f20848a50fec0","Type":"ContainerStarted","Data":"a9f4ca70a7f269f28987ee3e2d5ad7ccc02b260f88079badd702aaa45bd119cc"} Mar 20 08:52:40.841670 master-0 kubenswrapper[27820]: I0320 08:52:40.841668 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"d5f502b117c7c8479f7f20848a50fec0","Type":"ContainerStarted","Data":"8b470b721ec004d05b4f44459af2b2a1208d38f6fc12a4f620e2f090722a7e48"} Mar 20 08:52:40.842303 master-0 kubenswrapper[27820]: I0320 08:52:40.841687 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"d5f502b117c7c8479f7f20848a50fec0","Type":"ContainerStarted","Data":"42c3089cba876ce49b76726f7e55e4003c2bd517eb7f7d04fd474e3a8ac5121d"} Mar 20 08:52:41.851063 master-0 kubenswrapper[27820]: I0320 08:52:41.850990 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"d5f502b117c7c8479f7f20848a50fec0","Type":"ContainerStarted","Data":"77639f8786732a93f271119fe29121af2110b1a183b7793395deed1af6763c4a"} Mar 20 08:52:41.851644 master-0 kubenswrapper[27820]: I0320 08:52:41.851068 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"d5f502b117c7c8479f7f20848a50fec0","Type":"ContainerStarted","Data":"ae41e24aa2136e3c818dc66e1abcbaa0084fb057f96688c6bfce4600d8d8e98c"} Mar 20 08:52:41.851644 master-0 kubenswrapper[27820]: I0320 08:52:41.851282 27820 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="f51cc77e-b3e3-403b-8704-1e72c94d715e" Mar 20 08:52:41.851644 master-0 kubenswrapper[27820]: I0320 08:52:41.851298 27820 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="f51cc77e-b3e3-403b-8704-1e72c94d715e" Mar 20 08:52:41.851644 master-0 kubenswrapper[27820]: I0320 
08:52:41.851307 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 20 08:52:41.853985 master-0 kubenswrapper[27820]: I0320 08:52:41.853947 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_36f4a012744c6465102d09cc67ac63e6/kube-controller-manager/0.log" Mar 20 08:52:41.854057 master-0 kubenswrapper[27820]: I0320 08:52:41.853994 27820 generic.go:334] "Generic (PLEG): container finished" podID="36f4a012744c6465102d09cc67ac63e6" containerID="6290d119083ea809be21ab579813fa286464b690250eaa07fa0794bcdde38d59" exitCode=1 Mar 20 08:52:41.854057 master-0 kubenswrapper[27820]: I0320 08:52:41.854022 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"36f4a012744c6465102d09cc67ac63e6","Type":"ContainerDied","Data":"6290d119083ea809be21ab579813fa286464b690250eaa07fa0794bcdde38d59"} Mar 20 08:52:41.854503 master-0 kubenswrapper[27820]: I0320 08:52:41.854478 27820 scope.go:117] "RemoveContainer" containerID="6290d119083ea809be21ab579813fa286464b690250eaa07fa0794bcdde38d59" Mar 20 08:52:42.862759 master-0 kubenswrapper[27820]: I0320 08:52:42.862698 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_36f4a012744c6465102d09cc67ac63e6/kube-controller-manager/0.log" Mar 20 08:52:42.863347 master-0 kubenswrapper[27820]: I0320 08:52:42.862800 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"36f4a012744c6465102d09cc67ac63e6","Type":"ContainerStarted","Data":"bf07c7b6cfd6872c0d4b0145eea2b021b2c13119eb8e210c245ed1f17613dab1"} Mar 20 08:52:44.114495 master-0 kubenswrapper[27820]: I0320 08:52:44.114436 27820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 20 08:52:44.115192 master-0 kubenswrapper[27820]: I0320 08:52:44.114618 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 20 08:52:44.123935 master-0 kubenswrapper[27820]: I0320 08:52:44.123841 27820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 20 08:52:44.473728 master-0 kubenswrapper[27820]: I0320 08:52:44.473546 27820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 20 08:52:44.473728 master-0 kubenswrapper[27820]: I0320 08:52:44.473646 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 20 08:52:44.473983 master-0 kubenswrapper[27820]: I0320 08:52:44.473928 27820 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body= Mar 20 08:52:44.474037 master-0 kubenswrapper[27820]: I0320 08:52:44.473983 27820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="36f4a012744c6465102d09cc67ac63e6" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Mar 20 08:52:47.111122 master-0 kubenswrapper[27820]: I0320 08:52:47.111065 27820 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 20 08:52:47.899773 master-0 kubenswrapper[27820]: I0320 08:52:47.899677 27820 
kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="f51cc77e-b3e3-403b-8704-1e72c94d715e" Mar 20 08:52:47.899773 master-0 kubenswrapper[27820]: I0320 08:52:47.899731 27820 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="f51cc77e-b3e3-403b-8704-1e72c94d715e" Mar 20 08:52:47.906707 master-0 kubenswrapper[27820]: I0320 08:52:47.906645 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 20 08:52:48.051850 master-0 kubenswrapper[27820]: I0320 08:52:48.051733 27820 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="d5f502b117c7c8479f7f20848a50fec0" podUID="b833f228-4bd2-406e-8388-cd4133ea75a9" Mar 20 08:52:48.906761 master-0 kubenswrapper[27820]: I0320 08:52:48.906706 27820 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="f51cc77e-b3e3-403b-8704-1e72c94d715e" Mar 20 08:52:48.906761 master-0 kubenswrapper[27820]: I0320 08:52:48.906740 27820 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="f51cc77e-b3e3-403b-8704-1e72c94d715e" Mar 20 08:52:48.911643 master-0 kubenswrapper[27820]: I0320 08:52:48.911567 27820 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="d5f502b117c7c8479f7f20848a50fec0" podUID="b833f228-4bd2-406e-8388-cd4133ea75a9" Mar 20 08:52:50.236472 master-0 kubenswrapper[27820]: I0320 08:52:50.236382 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Mar 20 08:52:50.529033 master-0 kubenswrapper[27820]: I0320 08:52:50.528795 27820 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Mar 20 08:52:50.655122 master-0 kubenswrapper[27820]: I0320 08:52:50.655057 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Mar 20 08:52:50.790687 master-0 kubenswrapper[27820]: I0320 08:52:50.790517 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 20 08:52:51.158067 master-0 kubenswrapper[27820]: I0320 08:52:51.158007 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-svsqb" Mar 20 08:52:51.205770 master-0 kubenswrapper[27820]: I0320 08:52:51.205672 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 20 08:52:51.283471 master-0 kubenswrapper[27820]: I0320 08:52:51.283400 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Mar 20 08:52:51.363404 master-0 kubenswrapper[27820]: I0320 08:52:51.363348 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 20 08:52:51.387072 master-0 kubenswrapper[27820]: I0320 08:52:51.387011 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Mar 20 08:52:51.956600 master-0 kubenswrapper[27820]: I0320 08:52:51.956535 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Mar 20 08:52:52.028337 master-0 kubenswrapper[27820]: I0320 08:52:52.028295 27820 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 20 08:52:52.232220 master-0 kubenswrapper[27820]: I0320 08:52:52.232103 27820 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 20 08:52:52.320243 master-0 kubenswrapper[27820]: I0320 08:52:52.320191 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-ghpsv" Mar 20 08:52:52.397139 master-0 kubenswrapper[27820]: I0320 08:52:52.397060 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Mar 20 08:52:52.555411 master-0 kubenswrapper[27820]: I0320 08:52:52.555221 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-8kh8p" Mar 20 08:52:52.663246 master-0 kubenswrapper[27820]: I0320 08:52:52.663160 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 20 08:52:53.046172 master-0 kubenswrapper[27820]: I0320 08:52:53.046116 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 20 08:52:53.047970 master-0 kubenswrapper[27820]: I0320 08:52:53.047945 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert" Mar 20 08:52:53.098790 master-0 kubenswrapper[27820]: I0320 08:52:53.098742 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Mar 20 08:52:53.155113 master-0 kubenswrapper[27820]: I0320 08:52:53.155052 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 20 08:52:53.169646 master-0 kubenswrapper[27820]: I0320 08:52:53.169563 27820 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 20 08:52:53.260684 master-0 kubenswrapper[27820]: I0320 08:52:53.260627 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 20 08:52:53.275415 master-0 kubenswrapper[27820]: I0320 08:52:53.275383 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Mar 20 08:52:53.286831 master-0 kubenswrapper[27820]: I0320 08:52:53.286780 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Mar 20 08:52:53.307036 master-0 kubenswrapper[27820]: I0320 08:52:53.306942 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Mar 20 08:52:53.332404 master-0 kubenswrapper[27820]: I0320 08:52:53.332342 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 20 08:52:53.387602 master-0 kubenswrapper[27820]: I0320 08:52:53.387541 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 20 08:52:53.396481 master-0 kubenswrapper[27820]: I0320 08:52:53.396448 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Mar 20 08:52:53.421676 master-0 kubenswrapper[27820]: I0320 08:52:53.421629 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 20 08:52:53.461510 master-0 kubenswrapper[27820]: I0320 08:52:53.461433 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 20 08:52:53.531002 master-0 kubenswrapper[27820]: I0320 08:52:53.530964 27820 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 20 08:52:53.553530 master-0 kubenswrapper[27820]: I0320 08:52:53.553482 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 20 08:52:53.590252 master-0 kubenswrapper[27820]: I0320 08:52:53.590157 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Mar 20 08:52:53.628044 master-0 kubenswrapper[27820]: I0320 08:52:53.627976 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-qgtl7" Mar 20 08:52:53.672823 master-0 kubenswrapper[27820]: I0320 08:52:53.672749 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-2vc5h" Mar 20 08:52:53.678549 master-0 kubenswrapper[27820]: I0320 08:52:53.678478 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Mar 20 08:52:53.734072 master-0 kubenswrapper[27820]: I0320 08:52:53.734007 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Mar 20 08:52:53.741347 master-0 kubenswrapper[27820]: I0320 08:52:53.741215 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Mar 20 08:52:53.819445 master-0 kubenswrapper[27820]: I0320 08:52:53.819373 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 20 08:52:53.828490 master-0 kubenswrapper[27820]: I0320 08:52:53.828410 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-wsbtn" Mar 20 08:52:53.893037 master-0 kubenswrapper[27820]: I0320 08:52:53.892947 27820 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt" Mar 20 08:52:53.915614 master-0 kubenswrapper[27820]: I0320 08:52:53.915538 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 20 08:52:53.937558 master-0 kubenswrapper[27820]: I0320 08:52:53.937463 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" Mar 20 08:52:53.959038 master-0 kubenswrapper[27820]: I0320 08:52:53.958940 27820 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 20 08:52:53.960401 master-0 kubenswrapper[27820]: I0320 08:52:53.960344 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 20 08:52:53.961402 master-0 kubenswrapper[27820]: I0320 08:52:53.961364 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-flatfile-config" Mar 20 08:52:54.013972 master-0 kubenswrapper[27820]: I0320 08:52:54.013876 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Mar 20 08:52:54.044661 master-0 kubenswrapper[27820]: I0320 08:52:54.044597 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 20 08:52:54.050860 master-0 kubenswrapper[27820]: I0320 08:52:54.050810 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 20 08:52:54.114387 master-0 kubenswrapper[27820]: I0320 08:52:54.114240 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Mar 20 08:52:54.141984 master-0 kubenswrapper[27820]: I0320 08:52:54.141898 27820 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 20 08:52:54.153298 master-0 kubenswrapper[27820]: I0320 08:52:54.153147 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt" Mar 20 08:52:54.194380 master-0 kubenswrapper[27820]: I0320 08:52:54.194296 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-cxlgs" Mar 20 08:52:54.218346 master-0 kubenswrapper[27820]: I0320 08:52:54.218248 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-vkksc" Mar 20 08:52:54.261071 master-0 kubenswrapper[27820]: I0320 08:52:54.260962 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt" Mar 20 08:52:54.269210 master-0 kubenswrapper[27820]: I0320 08:52:54.269158 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 20 08:52:54.288295 master-0 kubenswrapper[27820]: I0320 08:52:54.288231 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 20 08:52:54.305971 master-0 kubenswrapper[27820]: I0320 08:52:54.305892 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 20 08:52:54.313533 master-0 kubenswrapper[27820]: I0320 08:52:54.313480 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images" Mar 20 08:52:54.334997 master-0 kubenswrapper[27820]: I0320 08:52:54.334936 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Mar 20 08:52:54.439013 master-0 kubenswrapper[27820]: I0320 08:52:54.438884 27820 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-config-operator"/"kube-rbac-proxy" Mar 20 08:52:54.451309 master-0 kubenswrapper[27820]: I0320 08:52:54.451153 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 20 08:52:54.468735 master-0 kubenswrapper[27820]: I0320 08:52:54.468661 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-mqln7" Mar 20 08:52:54.473681 master-0 kubenswrapper[27820]: I0320 08:52:54.473601 27820 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body= Mar 20 08:52:54.473846 master-0 kubenswrapper[27820]: I0320 08:52:54.473710 27820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="36f4a012744c6465102d09cc67ac63e6" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Mar 20 08:52:54.477131 master-0 kubenswrapper[27820]: I0320 08:52:54.477072 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Mar 20 08:52:54.499519 master-0 kubenswrapper[27820]: I0320 08:52:54.499463 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 20 08:52:54.530886 master-0 kubenswrapper[27820]: I0320 08:52:54.530835 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-7i2lh8fo12r60" Mar 20 08:52:54.592627 master-0 kubenswrapper[27820]: I0320 08:52:54.592561 27820 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-w58m2" Mar 20 08:52:54.597521 master-0 kubenswrapper[27820]: I0320 08:52:54.597493 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Mar 20 08:52:54.597599 master-0 kubenswrapper[27820]: I0320 08:52:54.597577 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 20 08:52:54.598421 master-0 kubenswrapper[27820]: I0320 08:52:54.598380 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 20 08:52:54.623608 master-0 kubenswrapper[27820]: I0320 08:52:54.623532 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 20 08:52:54.654830 master-0 kubenswrapper[27820]: I0320 08:52:54.654775 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt" Mar 20 08:52:54.664421 master-0 kubenswrapper[27820]: I0320 08:52:54.664386 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 20 08:52:54.688436 master-0 kubenswrapper[27820]: I0320 08:52:54.688381 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 20 08:52:54.711705 master-0 kubenswrapper[27820]: I0320 08:52:54.711591 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 20 08:52:54.727138 master-0 kubenswrapper[27820]: I0320 08:52:54.727088 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Mar 20 08:52:54.741824 master-0 
kubenswrapper[27820]: I0320 08:52:54.741782 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Mar 20 08:52:54.744469 master-0 kubenswrapper[27820]: I0320 08:52:54.744451 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 20 08:52:54.813480 master-0 kubenswrapper[27820]: I0320 08:52:54.813410 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Mar 20 08:52:54.843456 master-0 kubenswrapper[27820]: I0320 08:52:54.843372 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 20 08:52:54.875781 master-0 kubenswrapper[27820]: I0320 08:52:54.875717 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 20 08:52:54.966504 master-0 kubenswrapper[27820]: I0320 08:52:54.966386 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert" Mar 20 08:52:55.157366 master-0 kubenswrapper[27820]: I0320 08:52:55.157246 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Mar 20 08:52:55.202664 master-0 kubenswrapper[27820]: I0320 08:52:55.202603 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 20 08:52:55.214688 master-0 kubenswrapper[27820]: I0320 08:52:55.214627 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Mar 20 08:52:55.247043 master-0 kubenswrapper[27820]: I0320 08:52:55.246876 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 20 08:52:55.273922 master-0 kubenswrapper[27820]: I0320 
08:52:55.273861 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Mar 20 08:52:55.289773 master-0 kubenswrapper[27820]: I0320 08:52:55.289690 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-fxvgv" Mar 20 08:52:55.332952 master-0 kubenswrapper[27820]: I0320 08:52:55.332896 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle" Mar 20 08:52:55.343066 master-0 kubenswrapper[27820]: I0320 08:52:55.343006 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Mar 20 08:52:55.392032 master-0 kubenswrapper[27820]: I0320 08:52:55.391973 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Mar 20 08:52:55.413085 master-0 kubenswrapper[27820]: I0320 08:52:55.412995 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 20 08:52:55.499907 master-0 kubenswrapper[27820]: I0320 08:52:55.499753 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-kwmwv" Mar 20 08:52:55.518798 master-0 kubenswrapper[27820]: I0320 08:52:55.518675 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Mar 20 08:52:55.549837 master-0 kubenswrapper[27820]: I0320 08:52:55.549782 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 20 08:52:55.587965 master-0 kubenswrapper[27820]: I0320 08:52:55.587875 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Mar 20 
08:52:55.606178 master-0 kubenswrapper[27820]: I0320 08:52:55.606099 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Mar 20 08:52:55.621249 master-0 kubenswrapper[27820]: I0320 08:52:55.621105 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 20 08:52:55.671245 master-0 kubenswrapper[27820]: I0320 08:52:55.671170 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Mar 20 08:52:55.690292 master-0 kubenswrapper[27820]: I0320 08:52:55.690221 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 20 08:52:55.707126 master-0 kubenswrapper[27820]: I0320 08:52:55.707073 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 20 08:52:55.710100 master-0 kubenswrapper[27820]: I0320 08:52:55.710072 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt" Mar 20 08:52:55.711843 master-0 kubenswrapper[27820]: I0320 08:52:55.711777 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Mar 20 08:52:55.712744 master-0 kubenswrapper[27820]: I0320 08:52:55.712692 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 20 08:52:55.757859 master-0 kubenswrapper[27820]: I0320 08:52:55.757436 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Mar 20 08:52:55.769960 master-0 kubenswrapper[27820]: I0320 08:52:55.769897 27820 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 20 08:52:55.791049 master-0 kubenswrapper[27820]: I0320 08:52:55.790963 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Mar 20 08:52:55.825948 master-0 kubenswrapper[27820]: I0320 08:52:55.825856 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" Mar 20 08:52:55.827221 master-0 kubenswrapper[27820]: I0320 08:52:55.827174 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-6825q" Mar 20 08:52:55.833904 master-0 kubenswrapper[27820]: I0320 08:52:55.833865 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 20 08:52:55.855463 master-0 kubenswrapper[27820]: I0320 08:52:55.855413 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Mar 20 08:52:55.863649 master-0 kubenswrapper[27820]: I0320 08:52:55.863585 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Mar 20 08:52:55.868128 master-0 kubenswrapper[27820]: I0320 08:52:55.868088 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 20 08:52:55.873646 master-0 kubenswrapper[27820]: I0320 08:52:55.873608 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Mar 20 08:52:55.906606 master-0 kubenswrapper[27820]: I0320 08:52:55.906563 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 20 08:52:55.948306 master-0 kubenswrapper[27820]: 
I0320 08:52:55.943324 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-zs2v5" Mar 20 08:52:55.948306 master-0 kubenswrapper[27820]: I0320 08:52:55.944561 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Mar 20 08:52:55.976351 master-0 kubenswrapper[27820]: I0320 08:52:55.976298 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-rqpg6" Mar 20 08:52:56.003247 master-0 kubenswrapper[27820]: I0320 08:52:56.003159 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 20 08:52:56.061290 master-0 kubenswrapper[27820]: I0320 08:52:56.061073 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Mar 20 08:52:56.122678 master-0 kubenswrapper[27820]: I0320 08:52:56.122631 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-tp6tv" Mar 20 08:52:56.137405 master-0 kubenswrapper[27820]: I0320 08:52:56.137374 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 20 08:52:56.149344 master-0 kubenswrapper[27820]: I0320 08:52:56.149298 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-dbtrl" Mar 20 08:52:56.198357 master-0 kubenswrapper[27820]: I0320 08:52:56.195449 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 20 08:52:56.206324 master-0 kubenswrapper[27820]: I0320 08:52:56.200755 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 20 08:52:56.261101 master-0 
kubenswrapper[27820]: I0320 08:52:56.261047 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Mar 20 08:52:56.270156 master-0 kubenswrapper[27820]: I0320 08:52:56.270099 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle"
Mar 20 08:52:56.274783 master-0 kubenswrapper[27820]: I0320 08:52:56.274735 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Mar 20 08:52:56.332390 master-0 kubenswrapper[27820]: I0320 08:52:56.332196 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Mar 20 08:52:56.458573 master-0 kubenswrapper[27820]: I0320 08:52:56.458483 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Mar 20 08:52:56.496768 master-0 kubenswrapper[27820]: I0320 08:52:56.496681 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Mar 20 08:52:56.519230 master-0 kubenswrapper[27820]: I0320 08:52:56.519184 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Mar 20 08:52:56.536195 master-0 kubenswrapper[27820]: I0320 08:52:56.536115 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Mar 20 08:52:56.548008 master-0 kubenswrapper[27820]: I0320 08:52:56.547955 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls"
Mar 20 08:52:56.572778 master-0 kubenswrapper[27820]: I0320 08:52:56.572736 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-2zrgl"
Mar 20 08:52:56.592828 master-0 kubenswrapper[27820]: I0320 08:52:56.592782 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Mar 20 08:52:56.593130 master-0 kubenswrapper[27820]: I0320 08:52:56.592854 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Mar 20 08:52:56.630254 master-0 kubenswrapper[27820]: I0320 08:52:56.630186 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls"
Mar 20 08:52:56.662643 master-0 kubenswrapper[27820]: I0320 08:52:56.662612 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Mar 20 08:52:56.711678 master-0 kubenswrapper[27820]: I0320 08:52:56.711579 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Mar 20 08:52:56.759988 master-0 kubenswrapper[27820]: I0320 08:52:56.759885 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls"
Mar 20 08:52:56.806658 master-0 kubenswrapper[27820]: I0320 08:52:56.806550 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Mar 20 08:52:56.815141 master-0 kubenswrapper[27820]: I0320 08:52:56.815069 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Mar 20 08:52:56.937851 master-0 kubenswrapper[27820]: I0320 08:52:56.937594 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Mar 20 08:52:56.956594 master-0 kubenswrapper[27820]: I0320 08:52:56.956478 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Mar 20 08:52:56.973540 master-0 kubenswrapper[27820]: I0320 08:52:56.973462 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Mar 20 08:52:56.984677 master-0 kubenswrapper[27820]: I0320 08:52:56.984620 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Mar 20 08:52:56.997119 master-0 kubenswrapper[27820]: I0320 08:52:56.997082 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Mar 20 08:52:57.025121 master-0 kubenswrapper[27820]: I0320 08:52:57.025066 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt"
Mar 20 08:52:57.073593 master-0 kubenswrapper[27820]: I0320 08:52:57.073500 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Mar 20 08:52:57.092406 master-0 kubenswrapper[27820]: I0320 08:52:57.092331 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Mar 20 08:52:57.165547 master-0 kubenswrapper[27820]: I0320 08:52:57.165426 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Mar 20 08:52:57.249666 master-0 kubenswrapper[27820]: I0320 08:52:57.249445 27820 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Mar 20 08:52:57.268308 master-0 kubenswrapper[27820]: I0320 08:52:57.268171 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Mar 20 08:52:57.283869 master-0 kubenswrapper[27820]: I0320 08:52:57.283796 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Mar 20 08:52:57.310585 master-0 kubenswrapper[27820]: I0320 08:52:57.310463 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Mar 20 08:52:57.312314 master-0 kubenswrapper[27820]: I0320 08:52:57.312213 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Mar 20 08:52:57.376855 master-0 kubenswrapper[27820]: I0320 08:52:57.376756 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config"
Mar 20 08:52:57.434238 master-0 kubenswrapper[27820]: I0320 08:52:57.434171 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-9mkkw"
Mar 20 08:52:57.457152 master-0 kubenswrapper[27820]: I0320 08:52:57.457064 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Mar 20 08:52:57.498727 master-0 kubenswrapper[27820]: I0320 08:52:57.498649 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Mar 20 08:52:57.512338 master-0 kubenswrapper[27820]: I0320 08:52:57.512178 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Mar 20 08:52:57.524691 master-0 kubenswrapper[27820]: I0320 08:52:57.524440 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Mar 20 08:52:57.524691 master-0 kubenswrapper[27820]: I0320 08:52:57.524655 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Mar 20 08:52:57.557419 master-0 kubenswrapper[27820]: I0320 08:52:57.557352 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Mar 20 08:52:57.808604 master-0 kubenswrapper[27820]: I0320 08:52:57.808306 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Mar 20 08:52:57.851521 master-0 kubenswrapper[27820]: I0320 08:52:57.851480 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt"
Mar 20 08:52:57.879230 master-0 kubenswrapper[27820]: I0320 08:52:57.879151 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Mar 20 08:52:57.932790 master-0 kubenswrapper[27820]: I0320 08:52:57.932732 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-vlmsv"
Mar 20 08:52:57.958668 master-0 kubenswrapper[27820]: I0320 08:52:57.958614 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Mar 20 08:52:57.959858 master-0 kubenswrapper[27820]: I0320 08:52:57.959829 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Mar 20 08:52:57.986092 master-0 kubenswrapper[27820]: I0320 08:52:57.986041 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt"
Mar 20 08:52:57.988443 master-0 kubenswrapper[27820]: I0320 08:52:57.988285 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Mar 20 08:52:57.996377 master-0 kubenswrapper[27820]: I0320 08:52:57.996337 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Mar 20 08:52:57.997340 master-0 kubenswrapper[27820]: I0320 08:52:57.997206 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Mar 20 08:52:58.019365 master-0 kubenswrapper[27820]: I0320 08:52:58.019312 27820 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Mar 20 08:52:58.026859 master-0 kubenswrapper[27820]: I0320 08:52:58.026778 27820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Mar 20 08:52:58.027163 master-0 kubenswrapper[27820]: I0320 08:52:58.026868 27820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Mar 20 08:52:58.032520 master-0 kubenswrapper[27820]: I0320 08:52:58.032473 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 20 08:52:58.049914 master-0 kubenswrapper[27820]: I0320 08:52:58.049835 27820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-0" podStartSLOduration=11.049816861 podStartE2EDuration="11.049816861s" podCreationTimestamp="2026-03-20 08:52:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:52:58.0457271 +0000 UTC m=+188.140936254" watchObservedRunningTime="2026-03-20 08:52:58.049816861 +0000 UTC m=+188.145026015"
Mar 20 08:52:58.071205 master-0 kubenswrapper[27820]: I0320 08:52:58.071025 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Mar 20 08:52:58.080817 master-0 kubenswrapper[27820]: I0320 08:52:58.080752 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Mar 20 08:52:58.146856 master-0 kubenswrapper[27820]: I0320 08:52:58.146778 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Mar 20 08:52:58.230938 master-0 kubenswrapper[27820]: I0320 08:52:58.230844 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Mar 20 08:52:58.249217 master-0 kubenswrapper[27820]: I0320 08:52:58.249119 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator"
Mar 20 08:52:58.291503 master-0 kubenswrapper[27820]: I0320 08:52:58.291258 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-dl9qh"
Mar 20 08:52:58.305077 master-0 kubenswrapper[27820]: I0320 08:52:58.304992 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Mar 20 08:52:58.343054 master-0 kubenswrapper[27820]: I0320 08:52:58.342995 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt"
Mar 20 08:52:58.415311 master-0 kubenswrapper[27820]: I0320 08:52:58.415175 27820 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Mar 20 08:52:58.436700 master-0 kubenswrapper[27820]: I0320 08:52:58.436623 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Mar 20 08:52:58.475062 master-0 kubenswrapper[27820]: I0320 08:52:58.474990 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Mar 20 08:52:58.501546 master-0 kubenswrapper[27820]: I0320 08:52:58.501443 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle"
Mar 20 08:52:58.542602 master-0 kubenswrapper[27820]: I0320 08:52:58.542519 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-p2lrx"
Mar 20 08:52:58.566087 master-0 kubenswrapper[27820]: I0320 08:52:58.566024 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Mar 20 08:52:58.616118 master-0 kubenswrapper[27820]: I0320 08:52:58.615980 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Mar 20 08:52:58.649386 master-0 kubenswrapper[27820]: I0320 08:52:58.649199 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy"
Mar 20 08:52:58.658529 master-0 kubenswrapper[27820]: I0320 08:52:58.658457 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert"
Mar 20 08:52:58.721568 master-0 kubenswrapper[27820]: I0320 08:52:58.721497 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Mar 20 08:52:58.748763 master-0 kubenswrapper[27820]: I0320 08:52:58.748623 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config"
Mar 20 08:52:58.749660 master-0 kubenswrapper[27820]: I0320 08:52:58.749604 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Mar 20 08:52:58.811945 master-0 kubenswrapper[27820]: I0320 08:52:58.811837 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Mar 20 08:52:58.815669 master-0 kubenswrapper[27820]: I0320 08:52:58.815610 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Mar 20 08:52:58.851875 master-0 kubenswrapper[27820]: I0320 08:52:58.851790 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert"
Mar 20 08:52:58.891210 master-0 kubenswrapper[27820]: I0320 08:52:58.891058 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Mar 20 08:52:59.021118 master-0 kubenswrapper[27820]: I0320 08:52:59.021035 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Mar 20 08:52:59.037665 master-0 kubenswrapper[27820]: I0320 08:52:59.037596 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Mar 20 08:52:59.067281 master-0 kubenswrapper[27820]: I0320 08:52:59.067221 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Mar 20 08:52:59.068312 master-0 kubenswrapper[27820]: I0320 08:52:59.068232 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-c9tw2"
Mar 20 08:52:59.098803 master-0 kubenswrapper[27820]: I0320 08:52:59.098731 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Mar 20 08:52:59.099544 master-0 kubenswrapper[27820]: I0320 08:52:59.099489 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Mar 20 08:52:59.108121 master-0 kubenswrapper[27820]: I0320 08:52:59.108068 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs"
Mar 20 08:52:59.144792 master-0 kubenswrapper[27820]: I0320 08:52:59.144687 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles"
Mar 20 08:52:59.169146 master-0 kubenswrapper[27820]: I0320 08:52:59.169091 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt"
Mar 20 08:52:59.196304 master-0 kubenswrapper[27820]: I0320 08:52:59.196221 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Mar 20 08:52:59.237677 master-0 kubenswrapper[27820]: I0320 08:52:59.237570 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Mar 20 08:52:59.237677 master-0 kubenswrapper[27820]: I0320 08:52:59.237641 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Mar 20 08:52:59.241122 master-0 kubenswrapper[27820]: I0320 08:52:59.241077 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Mar 20 08:52:59.256873 master-0 kubenswrapper[27820]: I0320 08:52:59.256811 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Mar 20 08:52:59.278219 master-0 kubenswrapper[27820]: I0320 08:52:59.277907 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Mar 20 08:52:59.345641 master-0 kubenswrapper[27820]: I0320 08:52:59.345230 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Mar 20 08:52:59.524851 master-0 kubenswrapper[27820]: I0320 08:52:59.524714 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Mar 20 08:52:59.772746 master-0 kubenswrapper[27820]: I0320 08:52:59.772682 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-6n6jl"
Mar 20 08:52:59.800470 master-0 kubenswrapper[27820]: I0320 08:52:59.800341 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls"
Mar 20 08:52:59.864815 master-0 kubenswrapper[27820]: I0320 08:52:59.864738 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Mar 20 08:52:59.908499 master-0 kubenswrapper[27820]: I0320 08:52:59.908438 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Mar 20 08:52:59.922958 master-0 kubenswrapper[27820]: I0320 08:52:59.922876 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config"
Mar 20 08:53:00.236770 master-0 kubenswrapper[27820]: I0320 08:53:00.236662 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-6tblf"
Mar 20 08:53:00.278947 master-0 kubenswrapper[27820]: I0320 08:53:00.278871 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Mar 20 08:53:00.445468 master-0 kubenswrapper[27820]: I0320 08:53:00.445351 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Mar 20 08:53:00.544545 master-0 kubenswrapper[27820]: I0320 08:53:00.544402 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-t5n84"
Mar 20 08:53:00.687066 master-0 kubenswrapper[27820]: I0320 08:53:00.686973 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca"
Mar 20 08:53:00.752148 master-0 kubenswrapper[27820]: I0320 08:53:00.752078 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Mar 20 08:53:00.877860 master-0 kubenswrapper[27820]: I0320 08:53:00.877769 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Mar 20 08:53:01.092554 master-0 kubenswrapper[27820]: I0320 08:53:01.092496 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Mar 20 08:53:01.129228 master-0 kubenswrapper[27820]: I0320 08:53:01.129025 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Mar 20 08:53:01.340761 master-0 kubenswrapper[27820]: I0320 08:53:01.340645 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Mar 20 08:53:02.271019 master-0 kubenswrapper[27820]: I0320 08:53:02.270930 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Mar 20 08:53:03.819813 master-0 kubenswrapper[27820]: I0320 08:53:03.819731 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Mar 20 08:53:04.167142 master-0 kubenswrapper[27820]: I0320 08:53:04.167083 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Mar 20 08:53:04.474920 master-0 kubenswrapper[27820]: I0320 08:53:04.474662 27820 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body=
Mar 20 08:53:04.475401 master-0 kubenswrapper[27820]: I0320 08:53:04.474813 27820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="36f4a012744c6465102d09cc67ac63e6" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused"
Mar 20 08:53:04.475572 master-0 kubenswrapper[27820]: I0320 08:53:04.475442 27820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 20 08:53:04.476626 master-0 kubenswrapper[27820]: I0320 08:53:04.476544 27820 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"bf07c7b6cfd6872c0d4b0145eea2b021b2c13119eb8e210c245ed1f17613dab1"} pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerMessage="Container kube-controller-manager failed startup probe, will be restarted"
Mar 20 08:53:04.476964 master-0 kubenswrapper[27820]: I0320 08:53:04.476833 27820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="36f4a012744c6465102d09cc67ac63e6" containerName="kube-controller-manager" containerID="cri-o://bf07c7b6cfd6872c0d4b0145eea2b021b2c13119eb8e210c245ed1f17613dab1" gracePeriod=30
Mar 20 08:53:05.126149 master-0 kubenswrapper[27820]: I0320 08:53:05.126088 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt"
Mar 20 08:53:05.234813 master-0 kubenswrapper[27820]: I0320 08:53:05.234740 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Mar 20 08:53:05.730919 master-0 kubenswrapper[27820]: I0320 08:53:05.730724 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Mar 20 08:53:06.247970 master-0 kubenswrapper[27820]: I0320 08:53:06.247899 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls"
Mar 20 08:53:06.356891 master-0 kubenswrapper[27820]: I0320 08:53:06.356843 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-54fh7"
Mar 20 08:53:06.564827 master-0 kubenswrapper[27820]: I0320 08:53:06.564650 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Mar 20 08:53:07.015023 master-0 kubenswrapper[27820]: I0320 08:53:07.014951 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Mar 20 08:53:07.376906 master-0 kubenswrapper[27820]: I0320 08:53:07.376863 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Mar 20 08:53:07.905852 master-0 kubenswrapper[27820]: I0320 08:53:07.905783 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle"
Mar 20 08:53:07.930416 master-0 kubenswrapper[27820]: I0320 08:53:07.930385 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Mar 20 08:53:08.198395 master-0 kubenswrapper[27820]: I0320 08:53:08.197174 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls"
Mar 20 08:53:09.342548 master-0 kubenswrapper[27820]: I0320 08:53:09.342504 27820 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Mar 20 08:53:09.343517 master-0 kubenswrapper[27820]: I0320 08:53:09.343488 27820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="85632c1cec8974aa874834e4cfff4c77" containerName="startup-monitor" containerID="cri-o://35795f43edc3459fb6a4d087c0ac4ae7696da7eaed52afbaa56c06e3eb073e8d" gracePeriod=5
Mar 20 08:53:09.423805 master-0 kubenswrapper[27820]: I0320 08:53:09.423718 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Mar 20 08:53:09.439528 master-0 kubenswrapper[27820]: I0320 08:53:09.439500 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Mar 20 08:53:09.725594 master-0 kubenswrapper[27820]: I0320 08:53:09.725458 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls"
Mar 20 08:53:09.907412 master-0 kubenswrapper[27820]: I0320 08:53:09.907355 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca"
Mar 20 08:53:10.658359 master-0 kubenswrapper[27820]: I0320 08:53:10.658304 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Mar 20 08:53:10.862208 master-0 kubenswrapper[27820]: I0320 08:53:10.862163 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Mar 20 08:53:11.053216 master-0 kubenswrapper[27820]: I0320 08:53:11.053106 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-hcrbd"
Mar 20 08:53:11.143596 master-0 kubenswrapper[27820]: I0320 08:53:11.143516 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt"
Mar 20 08:53:11.460690 master-0 kubenswrapper[27820]: I0320 08:53:11.460636 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Mar 20 08:53:11.483162 master-0 kubenswrapper[27820]: I0320 08:53:11.483106 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Mar 20 08:53:11.534412 master-0 kubenswrapper[27820]: I0320 08:53:11.534348 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Mar 20 08:53:12.108760 master-0 kubenswrapper[27820]: I0320 08:53:12.108716 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt"
Mar 20 08:53:12.822679 master-0 kubenswrapper[27820]: I0320 08:53:12.822617 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Mar 20 08:53:12.887619 master-0 kubenswrapper[27820]: I0320 08:53:12.887554 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt"
Mar 20 08:53:13.012118 master-0 kubenswrapper[27820]: I0320 08:53:13.012061 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Mar 20 08:53:13.242318 master-0 kubenswrapper[27820]: I0320 08:53:13.242249 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Mar 20 08:53:13.383566 master-0 kubenswrapper[27820]: I0320 08:53:13.383485 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt"
Mar 20 08:53:13.651937 master-0 kubenswrapper[27820]: I0320 08:53:13.651878 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Mar 20 08:53:13.850690 master-0 kubenswrapper[27820]: I0320 08:53:13.850621 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Mar 20 08:53:13.937475 master-0 kubenswrapper[27820]: I0320 08:53:13.937343 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Mar 20 08:53:14.904152 master-0 kubenswrapper[27820]: I0320 08:53:14.904090 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_85632c1cec8974aa874834e4cfff4c77/startup-monitor/0.log"
Mar 20 08:53:14.904769 master-0 kubenswrapper[27820]: I0320 08:53:14.904169 27820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 20 08:53:15.099595 master-0 kubenswrapper[27820]: I0320 08:53:15.099539 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-pod-resource-dir\") pod \"85632c1cec8974aa874834e4cfff4c77\" (UID: \"85632c1cec8974aa874834e4cfff4c77\") "
Mar 20 08:53:15.099595 master-0 kubenswrapper[27820]: I0320 08:53:15.099587 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-resource-dir\") pod \"85632c1cec8974aa874834e4cfff4c77\" (UID: \"85632c1cec8974aa874834e4cfff4c77\") "
Mar 20 08:53:15.099867 master-0 kubenswrapper[27820]: I0320 08:53:15.099638 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-var-lock\") pod \"85632c1cec8974aa874834e4cfff4c77\" (UID: \"85632c1cec8974aa874834e4cfff4c77\") "
Mar 20 08:53:15.099867 master-0 kubenswrapper[27820]: I0320 08:53:15.099681 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-var-log\") pod \"85632c1cec8974aa874834e4cfff4c77\" (UID: \"85632c1cec8974aa874834e4cfff4c77\") "
Mar 20 08:53:15.099867 master-0 kubenswrapper[27820]: I0320 08:53:15.099712 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-manifests\") pod \"85632c1cec8974aa874834e4cfff4c77\" (UID: \"85632c1cec8974aa874834e4cfff4c77\") "
Mar 20 08:53:15.099867 master-0 kubenswrapper[27820]: I0320 08:53:15.099769 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-var-lock" (OuterVolumeSpecName: "var-lock") pod "85632c1cec8974aa874834e4cfff4c77" (UID: "85632c1cec8974aa874834e4cfff4c77"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 20 08:53:15.099867 master-0 kubenswrapper[27820]: I0320 08:53:15.099787 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "85632c1cec8974aa874834e4cfff4c77" (UID: "85632c1cec8974aa874834e4cfff4c77"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 20 08:53:15.099867 master-0 kubenswrapper[27820]: I0320 08:53:15.099844 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-var-log" (OuterVolumeSpecName: "var-log") pod "85632c1cec8974aa874834e4cfff4c77" (UID: "85632c1cec8974aa874834e4cfff4c77"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 20 08:53:15.099867 master-0 kubenswrapper[27820]: I0320 08:53:15.099853 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-manifests" (OuterVolumeSpecName: "manifests") pod "85632c1cec8974aa874834e4cfff4c77" (UID: "85632c1cec8974aa874834e4cfff4c77"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 20 08:53:15.100158 master-0 kubenswrapper[27820]: I0320 08:53:15.100142 27820 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-var-log\") on node \"master-0\" DevicePath \"\""
Mar 20 08:53:15.100215 master-0 kubenswrapper[27820]: I0320 08:53:15.100162 27820 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-manifests\") on node \"master-0\" DevicePath \"\""
Mar 20 08:53:15.100215 master-0 kubenswrapper[27820]: I0320 08:53:15.100173 27820 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-resource-dir\") on node \"master-0\" DevicePath \"\""
Mar 20 08:53:15.100215 master-0 kubenswrapper[27820]: I0320 08:53:15.100188 27820 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 20 08:53:15.105247 master-0 kubenswrapper[27820]: I0320 08:53:15.105199 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "85632c1cec8974aa874834e4cfff4c77" (UID: "85632c1cec8974aa874834e4cfff4c77"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 20 08:53:15.108848 master-0 kubenswrapper[27820]: I0320 08:53:15.108831 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_85632c1cec8974aa874834e4cfff4c77/startup-monitor/0.log"
Mar 20 08:53:15.108925 master-0 kubenswrapper[27820]: I0320 08:53:15.108865 27820 generic.go:334] "Generic (PLEG): container finished" podID="85632c1cec8974aa874834e4cfff4c77" containerID="35795f43edc3459fb6a4d087c0ac4ae7696da7eaed52afbaa56c06e3eb073e8d" exitCode=137
Mar 20 08:53:15.108925 master-0 kubenswrapper[27820]: I0320 08:53:15.108905 27820 scope.go:117] "RemoveContainer" containerID="35795f43edc3459fb6a4d087c0ac4ae7696da7eaed52afbaa56c06e3eb073e8d"
Mar 20 08:53:15.109012 master-0 kubenswrapper[27820]: I0320 08:53:15.108991 27820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 20 08:53:15.167094 master-0 kubenswrapper[27820]: I0320 08:53:15.166429 27820 scope.go:117] "RemoveContainer" containerID="35795f43edc3459fb6a4d087c0ac4ae7696da7eaed52afbaa56c06e3eb073e8d"
Mar 20 08:53:15.167094 master-0 kubenswrapper[27820]: E0320 08:53:15.166867 27820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35795f43edc3459fb6a4d087c0ac4ae7696da7eaed52afbaa56c06e3eb073e8d\": container with ID starting with 35795f43edc3459fb6a4d087c0ac4ae7696da7eaed52afbaa56c06e3eb073e8d not found: ID does not exist" containerID="35795f43edc3459fb6a4d087c0ac4ae7696da7eaed52afbaa56c06e3eb073e8d"
Mar 20 08:53:15.167094 master-0 kubenswrapper[27820]: I0320 08:53:15.166939 27820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35795f43edc3459fb6a4d087c0ac4ae7696da7eaed52afbaa56c06e3eb073e8d"} err="failed to get container status \"35795f43edc3459fb6a4d087c0ac4ae7696da7eaed52afbaa56c06e3eb073e8d\": rpc error: code = NotFound desc = could not find container \"35795f43edc3459fb6a4d087c0ac4ae7696da7eaed52afbaa56c06e3eb073e8d\": container with ID starting with 35795f43edc3459fb6a4d087c0ac4ae7696da7eaed52afbaa56c06e3eb073e8d not found: ID does not exist"
Mar 20 08:53:15.201560 master-0 kubenswrapper[27820]: I0320 08:53:15.201490 27820 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/85632c1cec8974aa874834e4cfff4c77-pod-resource-dir\") on node \"master-0\" DevicePath \"\""
Mar 20 08:53:15.425313 master-0 kubenswrapper[27820]: I0320 08:53:15.425116 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Mar 20 08:53:15.587186 master-0 kubenswrapper[27820]: I0320 08:53:15.587057 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Mar 20 08:53:16.089481 master-0 kubenswrapper[27820]: I0320 08:53:16.089397 27820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85632c1cec8974aa874834e4cfff4c77" path="/var/lib/kubelet/pods/85632c1cec8974aa874834e4cfff4c77/volumes"
Mar 20 08:53:17.092657 master-0 kubenswrapper[27820]: I0320 08:53:17.092603 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-bcc6b"
Mar 20 08:53:35.261588 master-0 kubenswrapper[27820]: I0320 08:53:35.261195 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_36f4a012744c6465102d09cc67ac63e6/kube-controller-manager/1.log"
Mar 20 08:53:35.263783 master-0 kubenswrapper[27820]: I0320 08:53:35.263736 27820 log.go:25] "Finished parsing log file"
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_36f4a012744c6465102d09cc67ac63e6/kube-controller-manager/0.log" Mar 20 08:53:35.263872 master-0 kubenswrapper[27820]: I0320 08:53:35.263805 27820 generic.go:334] "Generic (PLEG): container finished" podID="36f4a012744c6465102d09cc67ac63e6" containerID="bf07c7b6cfd6872c0d4b0145eea2b021b2c13119eb8e210c245ed1f17613dab1" exitCode=137 Mar 20 08:53:35.263872 master-0 kubenswrapper[27820]: I0320 08:53:35.263849 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"36f4a012744c6465102d09cc67ac63e6","Type":"ContainerDied","Data":"bf07c7b6cfd6872c0d4b0145eea2b021b2c13119eb8e210c245ed1f17613dab1"} Mar 20 08:53:35.263992 master-0 kubenswrapper[27820]: I0320 08:53:35.263899 27820 scope.go:117] "RemoveContainer" containerID="6290d119083ea809be21ab579813fa286464b690250eaa07fa0794bcdde38d59" Mar 20 08:53:36.279823 master-0 kubenswrapper[27820]: I0320 08:53:36.279742 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_36f4a012744c6465102d09cc67ac63e6/kube-controller-manager/1.log" Mar 20 08:53:36.281172 master-0 kubenswrapper[27820]: I0320 08:53:36.281108 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"36f4a012744c6465102d09cc67ac63e6","Type":"ContainerStarted","Data":"2c7410ac571b5d8f687ca8237762ffb0d8f08a0cac5f528ef47648af5031378d"} Mar 20 08:53:44.473634 master-0 kubenswrapper[27820]: I0320 08:53:44.473466 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 20 08:53:44.474349 master-0 kubenswrapper[27820]: I0320 08:53:44.474239 27820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 20 08:53:44.479702 master-0 kubenswrapper[27820]: I0320 08:53:44.479624 27820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 20 08:53:45.351989 master-0 kubenswrapper[27820]: I0320 08:53:45.351926 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 20 08:53:57.561927 master-0 kubenswrapper[27820]: I0320 08:53:57.561838 27820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-7c6b76c555-dq4qs"] Mar 20 08:53:57.562796 master-0 kubenswrapper[27820]: E0320 08:53:57.562412 27820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85632c1cec8974aa874834e4cfff4c77" containerName="startup-monitor" Mar 20 08:53:57.562796 master-0 kubenswrapper[27820]: I0320 08:53:57.562431 27820 state_mem.go:107] "Deleted CPUSet assignment" podUID="85632c1cec8974aa874834e4cfff4c77" containerName="startup-monitor" Mar 20 08:53:57.562796 master-0 kubenswrapper[27820]: E0320 08:53:57.562468 27820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78ae02f0-5d31-4fda-a63a-534f60df5d1f" containerName="installer" Mar 20 08:53:57.562796 master-0 kubenswrapper[27820]: I0320 08:53:57.562479 27820 state_mem.go:107] "Deleted CPUSet assignment" podUID="78ae02f0-5d31-4fda-a63a-534f60df5d1f" containerName="installer" Mar 20 08:53:57.562944 master-0 kubenswrapper[27820]: I0320 08:53:57.562809 27820 memory_manager.go:354] "RemoveStaleState removing state" podUID="78ae02f0-5d31-4fda-a63a-534f60df5d1f" containerName="installer" Mar 20 08:53:57.562944 master-0 kubenswrapper[27820]: I0320 08:53:57.562880 27820 memory_manager.go:354] "RemoveStaleState removing state" podUID="85632c1cec8974aa874834e4cfff4c77" containerName="startup-monitor" Mar 20 
08:53:57.570496 master-0 kubenswrapper[27820]: I0320 08:53:57.570222 27820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-7c6b76c555-dq4qs" Mar 20 08:53:57.576353 master-0 kubenswrapper[27820]: I0320 08:53:57.575385 27820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-76b6568d85-rpz95"] Mar 20 08:53:57.576969 master-0 kubenswrapper[27820]: I0320 08:53:57.576926 27820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-76b6568d85-rpz95" Mar 20 08:53:57.577230 master-0 kubenswrapper[27820]: I0320 08:53:57.577173 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Mar 20 08:53:57.578178 master-0 kubenswrapper[27820]: I0320 08:53:57.578128 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"default-dockercfg-lg9p5" Mar 20 08:53:57.578309 master-0 kubenswrapper[27820]: I0320 08:53:57.578277 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Mar 20 08:53:57.585554 master-0 kubenswrapper[27820]: I0320 08:53:57.585482 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Mar 20 08:53:57.585827 master-0 kubenswrapper[27820]: I0320 08:53:57.585795 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Mar 20 08:53:57.585961 master-0 kubenswrapper[27820]: I0320 08:53:57.585914 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Mar 20 08:53:57.586255 master-0 kubenswrapper[27820]: I0320 08:53:57.586234 27820 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-console-operator"/"console-operator-dockercfg-b8gm2" Mar 20 08:53:57.587925 master-0 kubenswrapper[27820]: I0320 08:53:57.587628 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Mar 20 08:53:57.605180 master-0 kubenswrapper[27820]: I0320 08:53:57.605054 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-76b6568d85-rpz95"] Mar 20 08:53:57.637292 master-0 kubenswrapper[27820]: I0320 08:53:57.635515 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Mar 20 08:53:57.642381 master-0 kubenswrapper[27820]: I0320 08:53:57.640987 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-7c6b76c555-dq4qs"] Mar 20 08:53:57.675286 master-0 kubenswrapper[27820]: I0320 08:53:57.674915 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1e02d0c-443f-4923-b3dd-a4f3f88d9a05-config\") pod \"console-operator-76b6568d85-rpz95\" (UID: \"c1e02d0c-443f-4923-b3dd-a4f3f88d9a05\") " pod="openshift-console-operator/console-operator-76b6568d85-rpz95" Mar 20 08:53:57.675286 master-0 kubenswrapper[27820]: I0320 08:53:57.674974 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/4d6748da-9579-4d67-b91e-2cdd4b8fb296-nginx-conf\") pod \"networking-console-plugin-7c6b76c555-dq4qs\" (UID: \"4d6748da-9579-4d67-b91e-2cdd4b8fb296\") " pod="openshift-network-console/networking-console-plugin-7c6b76c555-dq4qs" Mar 20 08:53:57.675286 master-0 kubenswrapper[27820]: I0320 08:53:57.675047 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/c1e02d0c-443f-4923-b3dd-a4f3f88d9a05-trusted-ca\") pod \"console-operator-76b6568d85-rpz95\" (UID: \"c1e02d0c-443f-4923-b3dd-a4f3f88d9a05\") " pod="openshift-console-operator/console-operator-76b6568d85-rpz95" Mar 20 08:53:57.675286 master-0 kubenswrapper[27820]: I0320 08:53:57.675066 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1e02d0c-443f-4923-b3dd-a4f3f88d9a05-serving-cert\") pod \"console-operator-76b6568d85-rpz95\" (UID: \"c1e02d0c-443f-4923-b3dd-a4f3f88d9a05\") " pod="openshift-console-operator/console-operator-76b6568d85-rpz95" Mar 20 08:53:57.675286 master-0 kubenswrapper[27820]: I0320 08:53:57.675084 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9g9b\" (UniqueName: \"kubernetes.io/projected/c1e02d0c-443f-4923-b3dd-a4f3f88d9a05-kube-api-access-x9g9b\") pod \"console-operator-76b6568d85-rpz95\" (UID: \"c1e02d0c-443f-4923-b3dd-a4f3f88d9a05\") " pod="openshift-console-operator/console-operator-76b6568d85-rpz95" Mar 20 08:53:57.675286 master-0 kubenswrapper[27820]: I0320 08:53:57.675128 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/4d6748da-9579-4d67-b91e-2cdd4b8fb296-networking-console-plugin-cert\") pod \"networking-console-plugin-7c6b76c555-dq4qs\" (UID: \"4d6748da-9579-4d67-b91e-2cdd4b8fb296\") " pod="openshift-network-console/networking-console-plugin-7c6b76c555-dq4qs" Mar 20 08:53:57.776730 master-0 kubenswrapper[27820]: I0320 08:53:57.776643 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c1e02d0c-443f-4923-b3dd-a4f3f88d9a05-trusted-ca\") pod \"console-operator-76b6568d85-rpz95\" (UID: \"c1e02d0c-443f-4923-b3dd-a4f3f88d9a05\") " 
pod="openshift-console-operator/console-operator-76b6568d85-rpz95" Mar 20 08:53:57.776730 master-0 kubenswrapper[27820]: I0320 08:53:57.776721 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1e02d0c-443f-4923-b3dd-a4f3f88d9a05-serving-cert\") pod \"console-operator-76b6568d85-rpz95\" (UID: \"c1e02d0c-443f-4923-b3dd-a4f3f88d9a05\") " pod="openshift-console-operator/console-operator-76b6568d85-rpz95" Mar 20 08:53:57.776986 master-0 kubenswrapper[27820]: I0320 08:53:57.776901 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9g9b\" (UniqueName: \"kubernetes.io/projected/c1e02d0c-443f-4923-b3dd-a4f3f88d9a05-kube-api-access-x9g9b\") pod \"console-operator-76b6568d85-rpz95\" (UID: \"c1e02d0c-443f-4923-b3dd-a4f3f88d9a05\") " pod="openshift-console-operator/console-operator-76b6568d85-rpz95" Mar 20 08:53:57.776986 master-0 kubenswrapper[27820]: I0320 08:53:57.776978 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/4d6748da-9579-4d67-b91e-2cdd4b8fb296-networking-console-plugin-cert\") pod \"networking-console-plugin-7c6b76c555-dq4qs\" (UID: \"4d6748da-9579-4d67-b91e-2cdd4b8fb296\") " pod="openshift-network-console/networking-console-plugin-7c6b76c555-dq4qs" Mar 20 08:53:57.777050 master-0 kubenswrapper[27820]: I0320 08:53:57.777013 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1e02d0c-443f-4923-b3dd-a4f3f88d9a05-config\") pod \"console-operator-76b6568d85-rpz95\" (UID: \"c1e02d0c-443f-4923-b3dd-a4f3f88d9a05\") " pod="openshift-console-operator/console-operator-76b6568d85-rpz95" Mar 20 08:53:57.777050 master-0 kubenswrapper[27820]: I0320 08:53:57.777036 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" 
(UniqueName: \"kubernetes.io/configmap/4d6748da-9579-4d67-b91e-2cdd4b8fb296-nginx-conf\") pod \"networking-console-plugin-7c6b76c555-dq4qs\" (UID: \"4d6748da-9579-4d67-b91e-2cdd4b8fb296\") " pod="openshift-network-console/networking-console-plugin-7c6b76c555-dq4qs" Mar 20 08:53:57.778130 master-0 kubenswrapper[27820]: I0320 08:53:57.778092 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c1e02d0c-443f-4923-b3dd-a4f3f88d9a05-trusted-ca\") pod \"console-operator-76b6568d85-rpz95\" (UID: \"c1e02d0c-443f-4923-b3dd-a4f3f88d9a05\") " pod="openshift-console-operator/console-operator-76b6568d85-rpz95" Mar 20 08:53:57.778309 master-0 kubenswrapper[27820]: I0320 08:53:57.778253 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1e02d0c-443f-4923-b3dd-a4f3f88d9a05-config\") pod \"console-operator-76b6568d85-rpz95\" (UID: \"c1e02d0c-443f-4923-b3dd-a4f3f88d9a05\") " pod="openshift-console-operator/console-operator-76b6568d85-rpz95" Mar 20 08:53:57.778466 master-0 kubenswrapper[27820]: I0320 08:53:57.778441 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/4d6748da-9579-4d67-b91e-2cdd4b8fb296-nginx-conf\") pod \"networking-console-plugin-7c6b76c555-dq4qs\" (UID: \"4d6748da-9579-4d67-b91e-2cdd4b8fb296\") " pod="openshift-network-console/networking-console-plugin-7c6b76c555-dq4qs" Mar 20 08:53:57.781446 master-0 kubenswrapper[27820]: I0320 08:53:57.781391 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/4d6748da-9579-4d67-b91e-2cdd4b8fb296-networking-console-plugin-cert\") pod \"networking-console-plugin-7c6b76c555-dq4qs\" (UID: \"4d6748da-9579-4d67-b91e-2cdd4b8fb296\") " pod="openshift-network-console/networking-console-plugin-7c6b76c555-dq4qs" Mar 20 
08:53:57.783143 master-0 kubenswrapper[27820]: I0320 08:53:57.783108 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1e02d0c-443f-4923-b3dd-a4f3f88d9a05-serving-cert\") pod \"console-operator-76b6568d85-rpz95\" (UID: \"c1e02d0c-443f-4923-b3dd-a4f3f88d9a05\") " pod="openshift-console-operator/console-operator-76b6568d85-rpz95" Mar 20 08:53:57.813065 master-0 kubenswrapper[27820]: I0320 08:53:57.812941 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9g9b\" (UniqueName: \"kubernetes.io/projected/c1e02d0c-443f-4923-b3dd-a4f3f88d9a05-kube-api-access-x9g9b\") pod \"console-operator-76b6568d85-rpz95\" (UID: \"c1e02d0c-443f-4923-b3dd-a4f3f88d9a05\") " pod="openshift-console-operator/console-operator-76b6568d85-rpz95" Mar 20 08:53:57.938413 master-0 kubenswrapper[27820]: I0320 08:53:57.938287 27820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-7c6b76c555-dq4qs" Mar 20 08:53:57.963980 master-0 kubenswrapper[27820]: I0320 08:53:57.963918 27820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-76b6568d85-rpz95" Mar 20 08:53:58.382784 master-0 kubenswrapper[27820]: I0320 08:53:58.382739 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-7c6b76c555-dq4qs"] Mar 20 08:53:58.387170 master-0 kubenswrapper[27820]: W0320 08:53:58.387108 27820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d6748da_9579_4d67_b91e_2cdd4b8fb296.slice/crio-ac98ab8b22f67a6756c661a4e5fcade4961f3dfd3d475f3a1befc86ae536da60 WatchSource:0}: Error finding container ac98ab8b22f67a6756c661a4e5fcade4961f3dfd3d475f3a1befc86ae536da60: Status 404 returned error can't find the container with id ac98ab8b22f67a6756c661a4e5fcade4961f3dfd3d475f3a1befc86ae536da60 Mar 20 08:53:58.442415 master-0 kubenswrapper[27820]: I0320 08:53:58.442258 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-7c6b76c555-dq4qs" event={"ID":"4d6748da-9579-4d67-b91e-2cdd4b8fb296","Type":"ContainerStarted","Data":"ac98ab8b22f67a6756c661a4e5fcade4961f3dfd3d475f3a1befc86ae536da60"} Mar 20 08:53:58.454575 master-0 kubenswrapper[27820]: I0320 08:53:58.454516 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-76b6568d85-rpz95"] Mar 20 08:53:58.459050 master-0 kubenswrapper[27820]: W0320 08:53:58.458960 27820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc1e02d0c_443f_4923_b3dd_a4f3f88d9a05.slice/crio-c0e9d75e516a489df288a0a079567acac10ae2f60f4a824f3109d44e97a30b3f WatchSource:0}: Error finding container c0e9d75e516a489df288a0a079567acac10ae2f60f4a824f3109d44e97a30b3f: Status 404 returned error can't find the container with id c0e9d75e516a489df288a0a079567acac10ae2f60f4a824f3109d44e97a30b3f Mar 20 08:53:59.450312 master-0 
kubenswrapper[27820]: I0320 08:53:59.450227 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-76b6568d85-rpz95" event={"ID":"c1e02d0c-443f-4923-b3dd-a4f3f88d9a05","Type":"ContainerStarted","Data":"c0e9d75e516a489df288a0a079567acac10ae2f60f4a824f3109d44e97a30b3f"} Mar 20 08:54:01.471208 master-0 kubenswrapper[27820]: I0320 08:54:01.471123 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-7c6b76c555-dq4qs" event={"ID":"4d6748da-9579-4d67-b91e-2cdd4b8fb296","Type":"ContainerStarted","Data":"5540c583820188cdfbb073a246e51170377d259b57954ef806564c851396be0b"} Mar 20 08:54:01.506125 master-0 kubenswrapper[27820]: I0320 08:54:01.506015 27820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-console/networking-console-plugin-7c6b76c555-dq4qs" podStartSLOduration=2.486402467 podStartE2EDuration="4.505989853s" podCreationTimestamp="2026-03-20 08:53:57 +0000 UTC" firstStartedPulling="2026-03-20 08:53:58.390034592 +0000 UTC m=+248.485243736" lastFinishedPulling="2026-03-20 08:54:00.409621978 +0000 UTC m=+250.504831122" observedRunningTime="2026-03-20 08:54:01.499656851 +0000 UTC m=+251.594866045" watchObservedRunningTime="2026-03-20 08:54:01.505989853 +0000 UTC m=+251.601199037" Mar 20 08:54:02.223343 master-0 kubenswrapper[27820]: I0320 08:54:02.223261 27820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-66b8ffb895-8rz5f"] Mar 20 08:54:02.224376 master-0 kubenswrapper[27820]: I0320 08:54:02.224348 27820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-66b8ffb895-8rz5f" Mar 20 08:54:02.231447 master-0 kubenswrapper[27820]: I0320 08:54:02.231403 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Mar 20 08:54:02.231902 master-0 kubenswrapper[27820]: I0320 08:54:02.231869 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Mar 20 08:54:02.234351 master-0 kubenswrapper[27820]: I0320 08:54:02.234321 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-6qwtp" Mar 20 08:54:02.242415 master-0 kubenswrapper[27820]: I0320 08:54:02.242360 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-66b8ffb895-8rz5f"] Mar 20 08:54:02.365601 master-0 kubenswrapper[27820]: I0320 08:54:02.365537 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpqns\" (UniqueName: \"kubernetes.io/projected/66c7dc3c-174d-4f87-8c7b-c5c7b8649fb9-kube-api-access-kpqns\") pod \"downloads-66b8ffb895-8rz5f\" (UID: \"66c7dc3c-174d-4f87-8c7b-c5c7b8649fb9\") " pod="openshift-console/downloads-66b8ffb895-8rz5f" Mar 20 08:54:02.466811 master-0 kubenswrapper[27820]: I0320 08:54:02.466751 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpqns\" (UniqueName: \"kubernetes.io/projected/66c7dc3c-174d-4f87-8c7b-c5c7b8649fb9-kube-api-access-kpqns\") pod \"downloads-66b8ffb895-8rz5f\" (UID: \"66c7dc3c-174d-4f87-8c7b-c5c7b8649fb9\") " pod="openshift-console/downloads-66b8ffb895-8rz5f" Mar 20 08:54:02.480477 master-0 kubenswrapper[27820]: I0320 08:54:02.480327 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-76b6568d85-rpz95" 
event={"ID":"c1e02d0c-443f-4923-b3dd-a4f3f88d9a05","Type":"ContainerStarted","Data":"5ff51b94c6941e2963db9a01cee3128315a94d9ff9760beb76d67b913c472a36"} Mar 20 08:54:02.481149 master-0 kubenswrapper[27820]: I0320 08:54:02.480728 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-76b6568d85-rpz95" Mar 20 08:54:02.489063 master-0 kubenswrapper[27820]: I0320 08:54:02.489025 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpqns\" (UniqueName: \"kubernetes.io/projected/66c7dc3c-174d-4f87-8c7b-c5c7b8649fb9-kube-api-access-kpqns\") pod \"downloads-66b8ffb895-8rz5f\" (UID: \"66c7dc3c-174d-4f87-8c7b-c5c7b8649fb9\") " pod="openshift-console/downloads-66b8ffb895-8rz5f" Mar 20 08:54:02.491105 master-0 kubenswrapper[27820]: I0320 08:54:02.491072 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-76b6568d85-rpz95" Mar 20 08:54:02.517419 master-0 kubenswrapper[27820]: I0320 08:54:02.517222 27820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-76b6568d85-rpz95" podStartSLOduration=2.676107274 podStartE2EDuration="5.517196069s" podCreationTimestamp="2026-03-20 08:53:57 +0000 UTC" firstStartedPulling="2026-03-20 08:53:58.462716392 +0000 UTC m=+248.557925546" lastFinishedPulling="2026-03-20 08:54:01.303805197 +0000 UTC m=+251.399014341" observedRunningTime="2026-03-20 08:54:02.513365405 +0000 UTC m=+252.608574589" watchObservedRunningTime="2026-03-20 08:54:02.517196069 +0000 UTC m=+252.612405253" Mar 20 08:54:02.540835 master-0 kubenswrapper[27820]: I0320 08:54:02.540772 27820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-66b8ffb895-8rz5f" Mar 20 08:54:03.009565 master-0 kubenswrapper[27820]: I0320 08:54:03.009488 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-66b8ffb895-8rz5f"] Mar 20 08:54:03.017812 master-0 kubenswrapper[27820]: W0320 08:54:03.017742 27820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66c7dc3c_174d_4f87_8c7b_c5c7b8649fb9.slice/crio-39507e24fadbd324295378cc9b69652f801677e4984f64ecf9091adc88bf7622 WatchSource:0}: Error finding container 39507e24fadbd324295378cc9b69652f801677e4984f64ecf9091adc88bf7622: Status 404 returned error can't find the container with id 39507e24fadbd324295378cc9b69652f801677e4984f64ecf9091adc88bf7622 Mar 20 08:54:03.492138 master-0 kubenswrapper[27820]: I0320 08:54:03.492078 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-66b8ffb895-8rz5f" event={"ID":"66c7dc3c-174d-4f87-8c7b-c5c7b8649fb9","Type":"ContainerStarted","Data":"39507e24fadbd324295378cc9b69652f801677e4984f64ecf9091adc88bf7622"} Mar 20 08:54:05.309958 master-0 kubenswrapper[27820]: I0320 08:54:05.309899 27820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-75bd5bfbf6-b7f22"] Mar 20 08:54:05.310874 master-0 kubenswrapper[27820]: I0320 08:54:05.310832 27820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-75bd5bfbf6-b7f22" Mar 20 08:54:05.313709 master-0 kubenswrapper[27820]: I0320 08:54:05.313303 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Mar 20 08:54:05.313709 master-0 kubenswrapper[27820]: I0320 08:54:05.313519 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-gtbn9" Mar 20 08:54:05.318487 master-0 kubenswrapper[27820]: I0320 08:54:05.317644 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Mar 20 08:54:05.318487 master-0 kubenswrapper[27820]: I0320 08:54:05.317908 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Mar 20 08:54:05.318487 master-0 kubenswrapper[27820]: I0320 08:54:05.318032 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Mar 20 08:54:05.318487 master-0 kubenswrapper[27820]: I0320 08:54:05.318133 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Mar 20 08:54:05.339987 master-0 kubenswrapper[27820]: I0320 08:54:05.339924 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-75bd5bfbf6-b7f22"] Mar 20 08:54:05.421295 master-0 kubenswrapper[27820]: I0320 08:54:05.414429 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/aae65814-71b3-40b5-be46-7bf04aa6aa58-console-config\") pod \"console-75bd5bfbf6-b7f22\" (UID: \"aae65814-71b3-40b5-be46-7bf04aa6aa58\") " pod="openshift-console/console-75bd5bfbf6-b7f22" Mar 20 08:54:05.421295 master-0 kubenswrapper[27820]: I0320 08:54:05.414566 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9xc2\" 
(UniqueName: \"kubernetes.io/projected/aae65814-71b3-40b5-be46-7bf04aa6aa58-kube-api-access-z9xc2\") pod \"console-75bd5bfbf6-b7f22\" (UID: \"aae65814-71b3-40b5-be46-7bf04aa6aa58\") " pod="openshift-console/console-75bd5bfbf6-b7f22" Mar 20 08:54:05.421295 master-0 kubenswrapper[27820]: I0320 08:54:05.414616 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/aae65814-71b3-40b5-be46-7bf04aa6aa58-console-serving-cert\") pod \"console-75bd5bfbf6-b7f22\" (UID: \"aae65814-71b3-40b5-be46-7bf04aa6aa58\") " pod="openshift-console/console-75bd5bfbf6-b7f22" Mar 20 08:54:05.421295 master-0 kubenswrapper[27820]: I0320 08:54:05.414641 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/aae65814-71b3-40b5-be46-7bf04aa6aa58-service-ca\") pod \"console-75bd5bfbf6-b7f22\" (UID: \"aae65814-71b3-40b5-be46-7bf04aa6aa58\") " pod="openshift-console/console-75bd5bfbf6-b7f22" Mar 20 08:54:05.421295 master-0 kubenswrapper[27820]: I0320 08:54:05.414684 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/aae65814-71b3-40b5-be46-7bf04aa6aa58-console-oauth-config\") pod \"console-75bd5bfbf6-b7f22\" (UID: \"aae65814-71b3-40b5-be46-7bf04aa6aa58\") " pod="openshift-console/console-75bd5bfbf6-b7f22" Mar 20 08:54:05.421295 master-0 kubenswrapper[27820]: I0320 08:54:05.414716 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/aae65814-71b3-40b5-be46-7bf04aa6aa58-oauth-serving-cert\") pod \"console-75bd5bfbf6-b7f22\" (UID: \"aae65814-71b3-40b5-be46-7bf04aa6aa58\") " pod="openshift-console/console-75bd5bfbf6-b7f22" Mar 20 08:54:05.516971 master-0 kubenswrapper[27820]: I0320 
08:54:05.515984 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/aae65814-71b3-40b5-be46-7bf04aa6aa58-console-config\") pod \"console-75bd5bfbf6-b7f22\" (UID: \"aae65814-71b3-40b5-be46-7bf04aa6aa58\") " pod="openshift-console/console-75bd5bfbf6-b7f22" Mar 20 08:54:05.516971 master-0 kubenswrapper[27820]: I0320 08:54:05.516062 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9xc2\" (UniqueName: \"kubernetes.io/projected/aae65814-71b3-40b5-be46-7bf04aa6aa58-kube-api-access-z9xc2\") pod \"console-75bd5bfbf6-b7f22\" (UID: \"aae65814-71b3-40b5-be46-7bf04aa6aa58\") " pod="openshift-console/console-75bd5bfbf6-b7f22" Mar 20 08:54:05.516971 master-0 kubenswrapper[27820]: I0320 08:54:05.516099 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/aae65814-71b3-40b5-be46-7bf04aa6aa58-console-serving-cert\") pod \"console-75bd5bfbf6-b7f22\" (UID: \"aae65814-71b3-40b5-be46-7bf04aa6aa58\") " pod="openshift-console/console-75bd5bfbf6-b7f22" Mar 20 08:54:05.516971 master-0 kubenswrapper[27820]: I0320 08:54:05.516118 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/aae65814-71b3-40b5-be46-7bf04aa6aa58-service-ca\") pod \"console-75bd5bfbf6-b7f22\" (UID: \"aae65814-71b3-40b5-be46-7bf04aa6aa58\") " pod="openshift-console/console-75bd5bfbf6-b7f22" Mar 20 08:54:05.516971 master-0 kubenswrapper[27820]: I0320 08:54:05.516154 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/aae65814-71b3-40b5-be46-7bf04aa6aa58-console-oauth-config\") pod \"console-75bd5bfbf6-b7f22\" (UID: \"aae65814-71b3-40b5-be46-7bf04aa6aa58\") " pod="openshift-console/console-75bd5bfbf6-b7f22" Mar 20 
08:54:05.516971 master-0 kubenswrapper[27820]: I0320 08:54:05.516297 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/aae65814-71b3-40b5-be46-7bf04aa6aa58-oauth-serving-cert\") pod \"console-75bd5bfbf6-b7f22\" (UID: \"aae65814-71b3-40b5-be46-7bf04aa6aa58\") " pod="openshift-console/console-75bd5bfbf6-b7f22" Mar 20 08:54:05.516971 master-0 kubenswrapper[27820]: I0320 08:54:05.516931 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/aae65814-71b3-40b5-be46-7bf04aa6aa58-service-ca\") pod \"console-75bd5bfbf6-b7f22\" (UID: \"aae65814-71b3-40b5-be46-7bf04aa6aa58\") " pod="openshift-console/console-75bd5bfbf6-b7f22" Mar 20 08:54:05.517371 master-0 kubenswrapper[27820]: I0320 08:54:05.517040 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/aae65814-71b3-40b5-be46-7bf04aa6aa58-oauth-serving-cert\") pod \"console-75bd5bfbf6-b7f22\" (UID: \"aae65814-71b3-40b5-be46-7bf04aa6aa58\") " pod="openshift-console/console-75bd5bfbf6-b7f22" Mar 20 08:54:05.517517 master-0 kubenswrapper[27820]: I0320 08:54:05.517458 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/aae65814-71b3-40b5-be46-7bf04aa6aa58-console-config\") pod \"console-75bd5bfbf6-b7f22\" (UID: \"aae65814-71b3-40b5-be46-7bf04aa6aa58\") " pod="openshift-console/console-75bd5bfbf6-b7f22" Mar 20 08:54:05.527873 master-0 kubenswrapper[27820]: I0320 08:54:05.527831 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/aae65814-71b3-40b5-be46-7bf04aa6aa58-console-oauth-config\") pod \"console-75bd5bfbf6-b7f22\" (UID: \"aae65814-71b3-40b5-be46-7bf04aa6aa58\") " pod="openshift-console/console-75bd5bfbf6-b7f22" Mar 20 
08:54:05.528403 master-0 kubenswrapper[27820]: I0320 08:54:05.528374 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/aae65814-71b3-40b5-be46-7bf04aa6aa58-console-serving-cert\") pod \"console-75bd5bfbf6-b7f22\" (UID: \"aae65814-71b3-40b5-be46-7bf04aa6aa58\") " pod="openshift-console/console-75bd5bfbf6-b7f22" Mar 20 08:54:05.532058 master-0 kubenswrapper[27820]: I0320 08:54:05.532027 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9xc2\" (UniqueName: \"kubernetes.io/projected/aae65814-71b3-40b5-be46-7bf04aa6aa58-kube-api-access-z9xc2\") pod \"console-75bd5bfbf6-b7f22\" (UID: \"aae65814-71b3-40b5-be46-7bf04aa6aa58\") " pod="openshift-console/console-75bd5bfbf6-b7f22" Mar 20 08:54:05.645659 master-0 kubenswrapper[27820]: I0320 08:54:05.645603 27820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-75bd5bfbf6-b7f22" Mar 20 08:54:06.055543 master-0 kubenswrapper[27820]: I0320 08:54:06.055492 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-75bd5bfbf6-b7f22"] Mar 20 08:54:06.056007 master-0 kubenswrapper[27820]: W0320 08:54:06.055923 27820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaae65814_71b3_40b5_be46_7bf04aa6aa58.slice/crio-ea7de601f0bd3b1a5193522752a7f4584e9dac65217e6054f00656c26e9b79e2 WatchSource:0}: Error finding container ea7de601f0bd3b1a5193522752a7f4584e9dac65217e6054f00656c26e9b79e2: Status 404 returned error can't find the container with id ea7de601f0bd3b1a5193522752a7f4584e9dac65217e6054f00656c26e9b79e2 Mar 20 08:54:06.521883 master-0 kubenswrapper[27820]: I0320 08:54:06.521790 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-75bd5bfbf6-b7f22" 
event={"ID":"aae65814-71b3-40b5-be46-7bf04aa6aa58","Type":"ContainerStarted","Data":"ea7de601f0bd3b1a5193522752a7f4584e9dac65217e6054f00656c26e9b79e2"} Mar 20 08:54:08.494656 master-0 kubenswrapper[27820]: I0320 08:54:08.494590 27820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Mar 20 08:54:08.497021 master-0 kubenswrapper[27820]: I0320 08:54:08.496993 27820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Mar 20 08:54:08.503972 master-0 kubenswrapper[27820]: I0320 08:54:08.503701 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Mar 20 08:54:08.503972 master-0 kubenswrapper[27820]: I0320 08:54:08.503774 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Mar 20 08:54:08.503972 master-0 kubenswrapper[27820]: I0320 08:54:08.503924 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Mar 20 08:54:08.503972 master-0 kubenswrapper[27820]: I0320 08:54:08.503983 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Mar 20 08:54:08.504444 master-0 kubenswrapper[27820]: I0320 08:54:08.504137 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Mar 20 08:54:08.504444 master-0 kubenswrapper[27820]: I0320 08:54:08.504180 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Mar 20 08:54:08.508905 master-0 kubenswrapper[27820]: I0320 08:54:08.508829 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Mar 20 08:54:08.521042 master-0 kubenswrapper[27820]: I0320 08:54:08.516325 
27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Mar 20 08:54:08.521042 master-0 kubenswrapper[27820]: I0320 08:54:08.517233 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Mar 20 08:54:08.591506 master-0 kubenswrapper[27820]: I0320 08:54:08.591443 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a8e6b860-8317-4fbe-9f43-41b0b707bc1b-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"a8e6b860-8317-4fbe-9f43-41b0b707bc1b\") " pod="openshift-monitoring/alertmanager-main-0" Mar 20 08:54:08.591506 master-0 kubenswrapper[27820]: I0320 08:54:08.591511 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/a8e6b860-8317-4fbe-9f43-41b0b707bc1b-config-out\") pod \"alertmanager-main-0\" (UID: \"a8e6b860-8317-4fbe-9f43-41b0b707bc1b\") " pod="openshift-monitoring/alertmanager-main-0" Mar 20 08:54:08.591795 master-0 kubenswrapper[27820]: I0320 08:54:08.591602 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/a8e6b860-8317-4fbe-9f43-41b0b707bc1b-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"a8e6b860-8317-4fbe-9f43-41b0b707bc1b\") " pod="openshift-monitoring/alertmanager-main-0" Mar 20 08:54:08.591795 master-0 kubenswrapper[27820]: I0320 08:54:08.591629 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zcn7\" (UniqueName: \"kubernetes.io/projected/a8e6b860-8317-4fbe-9f43-41b0b707bc1b-kube-api-access-4zcn7\") pod \"alertmanager-main-0\" (UID: \"a8e6b860-8317-4fbe-9f43-41b0b707bc1b\") " 
pod="openshift-monitoring/alertmanager-main-0" Mar 20 08:54:08.591795 master-0 kubenswrapper[27820]: I0320 08:54:08.591656 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/a8e6b860-8317-4fbe-9f43-41b0b707bc1b-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"a8e6b860-8317-4fbe-9f43-41b0b707bc1b\") " pod="openshift-monitoring/alertmanager-main-0" Mar 20 08:54:08.591795 master-0 kubenswrapper[27820]: I0320 08:54:08.591675 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/a8e6b860-8317-4fbe-9f43-41b0b707bc1b-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"a8e6b860-8317-4fbe-9f43-41b0b707bc1b\") " pod="openshift-monitoring/alertmanager-main-0" Mar 20 08:54:08.591795 master-0 kubenswrapper[27820]: I0320 08:54:08.591735 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/a8e6b860-8317-4fbe-9f43-41b0b707bc1b-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"a8e6b860-8317-4fbe-9f43-41b0b707bc1b\") " pod="openshift-monitoring/alertmanager-main-0" Mar 20 08:54:08.591795 master-0 kubenswrapper[27820]: I0320 08:54:08.591760 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/a8e6b860-8317-4fbe-9f43-41b0b707bc1b-tls-assets\") pod \"alertmanager-main-0\" (UID: \"a8e6b860-8317-4fbe-9f43-41b0b707bc1b\") " pod="openshift-monitoring/alertmanager-main-0" Mar 20 08:54:08.591795 master-0 kubenswrapper[27820]: I0320 08:54:08.591781 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a8e6b860-8317-4fbe-9f43-41b0b707bc1b-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"a8e6b860-8317-4fbe-9f43-41b0b707bc1b\") " pod="openshift-monitoring/alertmanager-main-0" Mar 20 08:54:08.591795 master-0 kubenswrapper[27820]: I0320 08:54:08.591798 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/a8e6b860-8317-4fbe-9f43-41b0b707bc1b-web-config\") pod \"alertmanager-main-0\" (UID: \"a8e6b860-8317-4fbe-9f43-41b0b707bc1b\") " pod="openshift-monitoring/alertmanager-main-0" Mar 20 08:54:08.592187 master-0 kubenswrapper[27820]: I0320 08:54:08.591822 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/a8e6b860-8317-4fbe-9f43-41b0b707bc1b-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"a8e6b860-8317-4fbe-9f43-41b0b707bc1b\") " pod="openshift-monitoring/alertmanager-main-0" Mar 20 08:54:08.592187 master-0 kubenswrapper[27820]: I0320 08:54:08.591840 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/a8e6b860-8317-4fbe-9f43-41b0b707bc1b-config-volume\") pod \"alertmanager-main-0\" (UID: \"a8e6b860-8317-4fbe-9f43-41b0b707bc1b\") " pod="openshift-monitoring/alertmanager-main-0" Mar 20 08:54:08.692979 master-0 kubenswrapper[27820]: I0320 08:54:08.692894 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/a8e6b860-8317-4fbe-9f43-41b0b707bc1b-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"a8e6b860-8317-4fbe-9f43-41b0b707bc1b\") " pod="openshift-monitoring/alertmanager-main-0" Mar 20 08:54:08.693221 master-0 
kubenswrapper[27820]: I0320 08:54:08.693135 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/a8e6b860-8317-4fbe-9f43-41b0b707bc1b-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"a8e6b860-8317-4fbe-9f43-41b0b707bc1b\") " pod="openshift-monitoring/alertmanager-main-0" Mar 20 08:54:08.693221 master-0 kubenswrapper[27820]: I0320 08:54:08.693168 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/a8e6b860-8317-4fbe-9f43-41b0b707bc1b-tls-assets\") pod \"alertmanager-main-0\" (UID: \"a8e6b860-8317-4fbe-9f43-41b0b707bc1b\") " pod="openshift-monitoring/alertmanager-main-0" Mar 20 08:54:08.693405 master-0 kubenswrapper[27820]: I0320 08:54:08.693381 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a8e6b860-8317-4fbe-9f43-41b0b707bc1b-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"a8e6b860-8317-4fbe-9f43-41b0b707bc1b\") " pod="openshift-monitoring/alertmanager-main-0" Mar 20 08:54:08.693479 master-0 kubenswrapper[27820]: I0320 08:54:08.693411 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/a8e6b860-8317-4fbe-9f43-41b0b707bc1b-web-config\") pod \"alertmanager-main-0\" (UID: \"a8e6b860-8317-4fbe-9f43-41b0b707bc1b\") " pod="openshift-monitoring/alertmanager-main-0" Mar 20 08:54:08.693615 master-0 kubenswrapper[27820]: I0320 08:54:08.693584 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/a8e6b860-8317-4fbe-9f43-41b0b707bc1b-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"a8e6b860-8317-4fbe-9f43-41b0b707bc1b\") " 
pod="openshift-monitoring/alertmanager-main-0" Mar 20 08:54:08.693743 master-0 kubenswrapper[27820]: I0320 08:54:08.693719 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/a8e6b860-8317-4fbe-9f43-41b0b707bc1b-config-volume\") pod \"alertmanager-main-0\" (UID: \"a8e6b860-8317-4fbe-9f43-41b0b707bc1b\") " pod="openshift-monitoring/alertmanager-main-0" Mar 20 08:54:08.694330 master-0 kubenswrapper[27820]: I0320 08:54:08.694311 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a8e6b860-8317-4fbe-9f43-41b0b707bc1b-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"a8e6b860-8317-4fbe-9f43-41b0b707bc1b\") " pod="openshift-monitoring/alertmanager-main-0" Mar 20 08:54:08.694509 master-0 kubenswrapper[27820]: I0320 08:54:08.694490 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/a8e6b860-8317-4fbe-9f43-41b0b707bc1b-config-out\") pod \"alertmanager-main-0\" (UID: \"a8e6b860-8317-4fbe-9f43-41b0b707bc1b\") " pod="openshift-monitoring/alertmanager-main-0" Mar 20 08:54:08.694748 master-0 kubenswrapper[27820]: I0320 08:54:08.694693 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a8e6b860-8317-4fbe-9f43-41b0b707bc1b-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"a8e6b860-8317-4fbe-9f43-41b0b707bc1b\") " pod="openshift-monitoring/alertmanager-main-0" Mar 20 08:54:08.694924 master-0 kubenswrapper[27820]: I0320 08:54:08.694904 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/a8e6b860-8317-4fbe-9f43-41b0b707bc1b-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: 
\"a8e6b860-8317-4fbe-9f43-41b0b707bc1b\") " pod="openshift-monitoring/alertmanager-main-0" Mar 20 08:54:08.695103 master-0 kubenswrapper[27820]: I0320 08:54:08.695083 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zcn7\" (UniqueName: \"kubernetes.io/projected/a8e6b860-8317-4fbe-9f43-41b0b707bc1b-kube-api-access-4zcn7\") pod \"alertmanager-main-0\" (UID: \"a8e6b860-8317-4fbe-9f43-41b0b707bc1b\") " pod="openshift-monitoring/alertmanager-main-0" Mar 20 08:54:08.695326 master-0 kubenswrapper[27820]: I0320 08:54:08.695307 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/a8e6b860-8317-4fbe-9f43-41b0b707bc1b-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"a8e6b860-8317-4fbe-9f43-41b0b707bc1b\") " pod="openshift-monitoring/alertmanager-main-0" Mar 20 08:54:08.696516 master-0 kubenswrapper[27820]: I0320 08:54:08.695485 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a8e6b860-8317-4fbe-9f43-41b0b707bc1b-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"a8e6b860-8317-4fbe-9f43-41b0b707bc1b\") " pod="openshift-monitoring/alertmanager-main-0" Mar 20 08:54:08.696516 master-0 kubenswrapper[27820]: I0320 08:54:08.695739 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/a8e6b860-8317-4fbe-9f43-41b0b707bc1b-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"a8e6b860-8317-4fbe-9f43-41b0b707bc1b\") " pod="openshift-monitoring/alertmanager-main-0" Mar 20 08:54:08.696516 master-0 kubenswrapper[27820]: I0320 08:54:08.696471 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: 
\"kubernetes.io/secret/a8e6b860-8317-4fbe-9f43-41b0b707bc1b-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"a8e6b860-8317-4fbe-9f43-41b0b707bc1b\") " pod="openshift-monitoring/alertmanager-main-0" Mar 20 08:54:08.697458 master-0 kubenswrapper[27820]: I0320 08:54:08.697436 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/a8e6b860-8317-4fbe-9f43-41b0b707bc1b-web-config\") pod \"alertmanager-main-0\" (UID: \"a8e6b860-8317-4fbe-9f43-41b0b707bc1b\") " pod="openshift-monitoring/alertmanager-main-0" Mar 20 08:54:08.700042 master-0 kubenswrapper[27820]: I0320 08:54:08.699916 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/a8e6b860-8317-4fbe-9f43-41b0b707bc1b-config-out\") pod \"alertmanager-main-0\" (UID: \"a8e6b860-8317-4fbe-9f43-41b0b707bc1b\") " pod="openshift-monitoring/alertmanager-main-0" Mar 20 08:54:08.700143 master-0 kubenswrapper[27820]: I0320 08:54:08.700115 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/a8e6b860-8317-4fbe-9f43-41b0b707bc1b-tls-assets\") pod \"alertmanager-main-0\" (UID: \"a8e6b860-8317-4fbe-9f43-41b0b707bc1b\") " pod="openshift-monitoring/alertmanager-main-0" Mar 20 08:54:08.702162 master-0 kubenswrapper[27820]: I0320 08:54:08.702052 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/a8e6b860-8317-4fbe-9f43-41b0b707bc1b-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"a8e6b860-8317-4fbe-9f43-41b0b707bc1b\") " pod="openshift-monitoring/alertmanager-main-0" Mar 20 08:54:08.703208 master-0 kubenswrapper[27820]: I0320 08:54:08.703140 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/secret/a8e6b860-8317-4fbe-9f43-41b0b707bc1b-config-volume\") pod \"alertmanager-main-0\" (UID: \"a8e6b860-8317-4fbe-9f43-41b0b707bc1b\") " pod="openshift-monitoring/alertmanager-main-0" Mar 20 08:54:08.706017 master-0 kubenswrapper[27820]: I0320 08:54:08.705980 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/a8e6b860-8317-4fbe-9f43-41b0b707bc1b-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"a8e6b860-8317-4fbe-9f43-41b0b707bc1b\") " pod="openshift-monitoring/alertmanager-main-0" Mar 20 08:54:08.712998 master-0 kubenswrapper[27820]: I0320 08:54:08.712947 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zcn7\" (UniqueName: \"kubernetes.io/projected/a8e6b860-8317-4fbe-9f43-41b0b707bc1b-kube-api-access-4zcn7\") pod \"alertmanager-main-0\" (UID: \"a8e6b860-8317-4fbe-9f43-41b0b707bc1b\") " pod="openshift-monitoring/alertmanager-main-0" Mar 20 08:54:08.717080 master-0 kubenswrapper[27820]: I0320 08:54:08.717043 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/a8e6b860-8317-4fbe-9f43-41b0b707bc1b-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"a8e6b860-8317-4fbe-9f43-41b0b707bc1b\") " pod="openshift-monitoring/alertmanager-main-0" Mar 20 08:54:08.857449 master-0 kubenswrapper[27820]: I0320 08:54:08.857378 27820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Mar 20 08:54:09.459844 master-0 kubenswrapper[27820]: I0320 08:54:09.459787 27820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-7b58769b45-q7j7f"] Mar 20 08:54:09.465236 master-0 kubenswrapper[27820]: I0320 08:54:09.465182 27820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-7b58769b45-q7j7f" Mar 20 08:54:09.470723 master-0 kubenswrapper[27820]: I0320 08:54:09.470664 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Mar 20 08:54:09.471006 master-0 kubenswrapper[27820]: I0320 08:54:09.470819 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Mar 20 08:54:09.471006 master-0 kubenswrapper[27820]: I0320 08:54:09.470949 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-7vf9838hpqbfe" Mar 20 08:54:09.471499 master-0 kubenswrapper[27820]: I0320 08:54:09.471109 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Mar 20 08:54:09.471499 master-0 kubenswrapper[27820]: I0320 08:54:09.471309 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Mar 20 08:54:09.471499 master-0 kubenswrapper[27820]: I0320 08:54:09.471421 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Mar 20 08:54:09.488755 master-0 kubenswrapper[27820]: I0320 08:54:09.488663 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-7b58769b45-q7j7f"] Mar 20 08:54:09.612781 master-0 kubenswrapper[27820]: I0320 08:54:09.612696 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/06e28acf-6ec7-4e0d-bb87-6577b30f7c35-secret-grpc-tls\") pod \"thanos-querier-7b58769b45-q7j7f\" (UID: \"06e28acf-6ec7-4e0d-bb87-6577b30f7c35\") " pod="openshift-monitoring/thanos-querier-7b58769b45-q7j7f" Mar 20 08:54:09.613781 master-0 kubenswrapper[27820]: I0320 
08:54:09.612767 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/06e28acf-6ec7-4e0d-bb87-6577b30f7c35-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-7b58769b45-q7j7f\" (UID: \"06e28acf-6ec7-4e0d-bb87-6577b30f7c35\") " pod="openshift-monitoring/thanos-querier-7b58769b45-q7j7f" Mar 20 08:54:09.613781 master-0 kubenswrapper[27820]: I0320 08:54:09.612933 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/06e28acf-6ec7-4e0d-bb87-6577b30f7c35-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-7b58769b45-q7j7f\" (UID: \"06e28acf-6ec7-4e0d-bb87-6577b30f7c35\") " pod="openshift-monitoring/thanos-querier-7b58769b45-q7j7f" Mar 20 08:54:09.613781 master-0 kubenswrapper[27820]: I0320 08:54:09.613113 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/06e28acf-6ec7-4e0d-bb87-6577b30f7c35-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-7b58769b45-q7j7f\" (UID: \"06e28acf-6ec7-4e0d-bb87-6577b30f7c35\") " pod="openshift-monitoring/thanos-querier-7b58769b45-q7j7f" Mar 20 08:54:09.613781 master-0 kubenswrapper[27820]: I0320 08:54:09.613162 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/06e28acf-6ec7-4e0d-bb87-6577b30f7c35-secret-thanos-querier-tls\") pod \"thanos-querier-7b58769b45-q7j7f\" (UID: \"06e28acf-6ec7-4e0d-bb87-6577b30f7c35\") " pod="openshift-monitoring/thanos-querier-7b58769b45-q7j7f" Mar 20 08:54:09.613781 master-0 kubenswrapper[27820]: I0320 08:54:09.613416 27820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/06e28acf-6ec7-4e0d-bb87-6577b30f7c35-metrics-client-ca\") pod \"thanos-querier-7b58769b45-q7j7f\" (UID: \"06e28acf-6ec7-4e0d-bb87-6577b30f7c35\") " pod="openshift-monitoring/thanos-querier-7b58769b45-q7j7f" Mar 20 08:54:09.613781 master-0 kubenswrapper[27820]: I0320 08:54:09.613477 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjwb9\" (UniqueName: \"kubernetes.io/projected/06e28acf-6ec7-4e0d-bb87-6577b30f7c35-kube-api-access-pjwb9\") pod \"thanos-querier-7b58769b45-q7j7f\" (UID: \"06e28acf-6ec7-4e0d-bb87-6577b30f7c35\") " pod="openshift-monitoring/thanos-querier-7b58769b45-q7j7f" Mar 20 08:54:09.613781 master-0 kubenswrapper[27820]: I0320 08:54:09.613599 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/06e28acf-6ec7-4e0d-bb87-6577b30f7c35-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-7b58769b45-q7j7f\" (UID: \"06e28acf-6ec7-4e0d-bb87-6577b30f7c35\") " pod="openshift-monitoring/thanos-querier-7b58769b45-q7j7f" Mar 20 08:54:09.714806 master-0 kubenswrapper[27820]: I0320 08:54:09.714668 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/06e28acf-6ec7-4e0d-bb87-6577b30f7c35-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-7b58769b45-q7j7f\" (UID: \"06e28acf-6ec7-4e0d-bb87-6577b30f7c35\") " pod="openshift-monitoring/thanos-querier-7b58769b45-q7j7f" Mar 20 08:54:09.714806 master-0 kubenswrapper[27820]: I0320 08:54:09.714755 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: 
\"kubernetes.io/secret/06e28acf-6ec7-4e0d-bb87-6577b30f7c35-secret-grpc-tls\") pod \"thanos-querier-7b58769b45-q7j7f\" (UID: \"06e28acf-6ec7-4e0d-bb87-6577b30f7c35\") " pod="openshift-monitoring/thanos-querier-7b58769b45-q7j7f" Mar 20 08:54:09.714806 master-0 kubenswrapper[27820]: I0320 08:54:09.714805 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/06e28acf-6ec7-4e0d-bb87-6577b30f7c35-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-7b58769b45-q7j7f\" (UID: \"06e28acf-6ec7-4e0d-bb87-6577b30f7c35\") " pod="openshift-monitoring/thanos-querier-7b58769b45-q7j7f" Mar 20 08:54:09.715079 master-0 kubenswrapper[27820]: I0320 08:54:09.714834 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/06e28acf-6ec7-4e0d-bb87-6577b30f7c35-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-7b58769b45-q7j7f\" (UID: \"06e28acf-6ec7-4e0d-bb87-6577b30f7c35\") " pod="openshift-monitoring/thanos-querier-7b58769b45-q7j7f" Mar 20 08:54:09.715555 master-0 kubenswrapper[27820]: I0320 08:54:09.715047 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/06e28acf-6ec7-4e0d-bb87-6577b30f7c35-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-7b58769b45-q7j7f\" (UID: \"06e28acf-6ec7-4e0d-bb87-6577b30f7c35\") " pod="openshift-monitoring/thanos-querier-7b58769b45-q7j7f" Mar 20 08:54:09.715673 master-0 kubenswrapper[27820]: I0320 08:54:09.715594 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/06e28acf-6ec7-4e0d-bb87-6577b30f7c35-secret-thanos-querier-tls\") pod \"thanos-querier-7b58769b45-q7j7f\" (UID: 
\"06e28acf-6ec7-4e0d-bb87-6577b30f7c35\") " pod="openshift-monitoring/thanos-querier-7b58769b45-q7j7f"
Mar 20 08:54:09.715838 master-0 kubenswrapper[27820]: I0320 08:54:09.715693 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/06e28acf-6ec7-4e0d-bb87-6577b30f7c35-metrics-client-ca\") pod \"thanos-querier-7b58769b45-q7j7f\" (UID: \"06e28acf-6ec7-4e0d-bb87-6577b30f7c35\") " pod="openshift-monitoring/thanos-querier-7b58769b45-q7j7f"
Mar 20 08:54:09.715838 master-0 kubenswrapper[27820]: I0320 08:54:09.715745 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjwb9\" (UniqueName: \"kubernetes.io/projected/06e28acf-6ec7-4e0d-bb87-6577b30f7c35-kube-api-access-pjwb9\") pod \"thanos-querier-7b58769b45-q7j7f\" (UID: \"06e28acf-6ec7-4e0d-bb87-6577b30f7c35\") " pod="openshift-monitoring/thanos-querier-7b58769b45-q7j7f"
Mar 20 08:54:09.716744 master-0 kubenswrapper[27820]: I0320 08:54:09.716693 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/06e28acf-6ec7-4e0d-bb87-6577b30f7c35-metrics-client-ca\") pod \"thanos-querier-7b58769b45-q7j7f\" (UID: \"06e28acf-6ec7-4e0d-bb87-6577b30f7c35\") " pod="openshift-monitoring/thanos-querier-7b58769b45-q7j7f"
Mar 20 08:54:09.718628 master-0 kubenswrapper[27820]: I0320 08:54:09.718592 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/06e28acf-6ec7-4e0d-bb87-6577b30f7c35-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-7b58769b45-q7j7f\" (UID: \"06e28acf-6ec7-4e0d-bb87-6577b30f7c35\") " pod="openshift-monitoring/thanos-querier-7b58769b45-q7j7f"
Mar 20 08:54:09.719697 master-0 kubenswrapper[27820]: I0320 08:54:09.719315 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/06e28acf-6ec7-4e0d-bb87-6577b30f7c35-secret-grpc-tls\") pod \"thanos-querier-7b58769b45-q7j7f\" (UID: \"06e28acf-6ec7-4e0d-bb87-6577b30f7c35\") " pod="openshift-monitoring/thanos-querier-7b58769b45-q7j7f"
Mar 20 08:54:09.719697 master-0 kubenswrapper[27820]: I0320 08:54:09.719431 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/06e28acf-6ec7-4e0d-bb87-6577b30f7c35-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-7b58769b45-q7j7f\" (UID: \"06e28acf-6ec7-4e0d-bb87-6577b30f7c35\") " pod="openshift-monitoring/thanos-querier-7b58769b45-q7j7f"
Mar 20 08:54:09.719697 master-0 kubenswrapper[27820]: I0320 08:54:09.719651 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/06e28acf-6ec7-4e0d-bb87-6577b30f7c35-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-7b58769b45-q7j7f\" (UID: \"06e28acf-6ec7-4e0d-bb87-6577b30f7c35\") " pod="openshift-monitoring/thanos-querier-7b58769b45-q7j7f"
Mar 20 08:54:09.722456 master-0 kubenswrapper[27820]: I0320 08:54:09.722423 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/06e28acf-6ec7-4e0d-bb87-6577b30f7c35-secret-thanos-querier-tls\") pod \"thanos-querier-7b58769b45-q7j7f\" (UID: \"06e28acf-6ec7-4e0d-bb87-6577b30f7c35\") " pod="openshift-monitoring/thanos-querier-7b58769b45-q7j7f"
Mar 20 08:54:09.747069 master-0 kubenswrapper[27820]: I0320 08:54:09.746996 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjwb9\" (UniqueName: \"kubernetes.io/projected/06e28acf-6ec7-4e0d-bb87-6577b30f7c35-kube-api-access-pjwb9\") pod \"thanos-querier-7b58769b45-q7j7f\" (UID: \"06e28acf-6ec7-4e0d-bb87-6577b30f7c35\") " pod="openshift-monitoring/thanos-querier-7b58769b45-q7j7f"
Mar 20 08:54:09.768283 master-0 kubenswrapper[27820]: I0320 08:54:09.762902 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/06e28acf-6ec7-4e0d-bb87-6577b30f7c35-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-7b58769b45-q7j7f\" (UID: \"06e28acf-6ec7-4e0d-bb87-6577b30f7c35\") " pod="openshift-monitoring/thanos-querier-7b58769b45-q7j7f"
Mar 20 08:54:09.800585 master-0 kubenswrapper[27820]: I0320 08:54:09.800504 27820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-7b58769b45-q7j7f"
Mar 20 08:54:10.336425 master-0 kubenswrapper[27820]: I0320 08:54:10.336368 27820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-6f98bb7c67-q7pqm"]
Mar 20 08:54:10.337979 master-0 kubenswrapper[27820]: I0320 08:54:10.337182 27820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6f98bb7c67-q7pqm"
Mar 20 08:54:10.347826 master-0 kubenswrapper[27820]: I0320 08:54:10.347764 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6f98bb7c67-q7pqm"]
Mar 20 08:54:10.348047 master-0 kubenswrapper[27820]: I0320 08:54:10.348006 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Mar 20 08:54:10.434069 master-0 kubenswrapper[27820]: I0320 08:54:10.433082 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7fae9393-1ca8-4304-92ff-78f8f2d85288-console-oauth-config\") pod \"console-6f98bb7c67-q7pqm\" (UID: \"7fae9393-1ca8-4304-92ff-78f8f2d85288\") " pod="openshift-console/console-6f98bb7c67-q7pqm"
Mar 20 08:54:10.434069 master-0 kubenswrapper[27820]: I0320 08:54:10.433137 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7fae9393-1ca8-4304-92ff-78f8f2d85288-trusted-ca-bundle\") pod \"console-6f98bb7c67-q7pqm\" (UID: \"7fae9393-1ca8-4304-92ff-78f8f2d85288\") " pod="openshift-console/console-6f98bb7c67-q7pqm"
Mar 20 08:54:10.434069 master-0 kubenswrapper[27820]: I0320 08:54:10.433172 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7fae9393-1ca8-4304-92ff-78f8f2d85288-console-serving-cert\") pod \"console-6f98bb7c67-q7pqm\" (UID: \"7fae9393-1ca8-4304-92ff-78f8f2d85288\") " pod="openshift-console/console-6f98bb7c67-q7pqm"
Mar 20 08:54:10.434069 master-0 kubenswrapper[27820]: I0320 08:54:10.433202 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7fae9393-1ca8-4304-92ff-78f8f2d85288-oauth-serving-cert\") pod \"console-6f98bb7c67-q7pqm\" (UID: \"7fae9393-1ca8-4304-92ff-78f8f2d85288\") " pod="openshift-console/console-6f98bb7c67-q7pqm"
Mar 20 08:54:10.434069 master-0 kubenswrapper[27820]: I0320 08:54:10.433249 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbq56\" (UniqueName: \"kubernetes.io/projected/7fae9393-1ca8-4304-92ff-78f8f2d85288-kube-api-access-bbq56\") pod \"console-6f98bb7c67-q7pqm\" (UID: \"7fae9393-1ca8-4304-92ff-78f8f2d85288\") " pod="openshift-console/console-6f98bb7c67-q7pqm"
Mar 20 08:54:10.434069 master-0 kubenswrapper[27820]: I0320 08:54:10.433311 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7fae9393-1ca8-4304-92ff-78f8f2d85288-console-config\") pod \"console-6f98bb7c67-q7pqm\" (UID: \"7fae9393-1ca8-4304-92ff-78f8f2d85288\") " pod="openshift-console/console-6f98bb7c67-q7pqm"
Mar 20 08:54:10.434069 master-0 kubenswrapper[27820]: I0320 08:54:10.433331 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7fae9393-1ca8-4304-92ff-78f8f2d85288-service-ca\") pod \"console-6f98bb7c67-q7pqm\" (UID: \"7fae9393-1ca8-4304-92ff-78f8f2d85288\") " pod="openshift-console/console-6f98bb7c67-q7pqm"
Mar 20 08:54:10.535772 master-0 kubenswrapper[27820]: I0320 08:54:10.535158 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7fae9393-1ca8-4304-92ff-78f8f2d85288-console-oauth-config\") pod \"console-6f98bb7c67-q7pqm\" (UID: \"7fae9393-1ca8-4304-92ff-78f8f2d85288\") " pod="openshift-console/console-6f98bb7c67-q7pqm"
Mar 20 08:54:10.535772 master-0 kubenswrapper[27820]: I0320 08:54:10.535716 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7fae9393-1ca8-4304-92ff-78f8f2d85288-trusted-ca-bundle\") pod \"console-6f98bb7c67-q7pqm\" (UID: \"7fae9393-1ca8-4304-92ff-78f8f2d85288\") " pod="openshift-console/console-6f98bb7c67-q7pqm"
Mar 20 08:54:10.536096 master-0 kubenswrapper[27820]: I0320 08:54:10.535812 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7fae9393-1ca8-4304-92ff-78f8f2d85288-console-serving-cert\") pod \"console-6f98bb7c67-q7pqm\" (UID: \"7fae9393-1ca8-4304-92ff-78f8f2d85288\") " pod="openshift-console/console-6f98bb7c67-q7pqm"
Mar 20 08:54:10.536096 master-0 kubenswrapper[27820]: I0320 08:54:10.535871 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7fae9393-1ca8-4304-92ff-78f8f2d85288-oauth-serving-cert\") pod \"console-6f98bb7c67-q7pqm\" (UID: \"7fae9393-1ca8-4304-92ff-78f8f2d85288\") " pod="openshift-console/console-6f98bb7c67-q7pqm"
Mar 20 08:54:10.536096 master-0 kubenswrapper[27820]: I0320 08:54:10.536022 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbq56\" (UniqueName: \"kubernetes.io/projected/7fae9393-1ca8-4304-92ff-78f8f2d85288-kube-api-access-bbq56\") pod \"console-6f98bb7c67-q7pqm\" (UID: \"7fae9393-1ca8-4304-92ff-78f8f2d85288\") " pod="openshift-console/console-6f98bb7c67-q7pqm"
Mar 20 08:54:10.536217 master-0 kubenswrapper[27820]: I0320 08:54:10.536136 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7fae9393-1ca8-4304-92ff-78f8f2d85288-console-config\") pod \"console-6f98bb7c67-q7pqm\" (UID: \"7fae9393-1ca8-4304-92ff-78f8f2d85288\") " pod="openshift-console/console-6f98bb7c67-q7pqm"
Mar 20 08:54:10.536217 master-0 kubenswrapper[27820]: I0320 08:54:10.536176 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7fae9393-1ca8-4304-92ff-78f8f2d85288-service-ca\") pod \"console-6f98bb7c67-q7pqm\" (UID: \"7fae9393-1ca8-4304-92ff-78f8f2d85288\") " pod="openshift-console/console-6f98bb7c67-q7pqm"
Mar 20 08:54:10.537524 master-0 kubenswrapper[27820]: I0320 08:54:10.537463 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7fae9393-1ca8-4304-92ff-78f8f2d85288-oauth-serving-cert\") pod \"console-6f98bb7c67-q7pqm\" (UID: \"7fae9393-1ca8-4304-92ff-78f8f2d85288\") " pod="openshift-console/console-6f98bb7c67-q7pqm"
Mar 20 08:54:10.537524 master-0 kubenswrapper[27820]: I0320 08:54:10.537474 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7fae9393-1ca8-4304-92ff-78f8f2d85288-service-ca\") pod \"console-6f98bb7c67-q7pqm\" (UID: \"7fae9393-1ca8-4304-92ff-78f8f2d85288\") " pod="openshift-console/console-6f98bb7c67-q7pqm"
Mar 20 08:54:10.538345 master-0 kubenswrapper[27820]: I0320 08:54:10.538312 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7fae9393-1ca8-4304-92ff-78f8f2d85288-console-config\") pod \"console-6f98bb7c67-q7pqm\" (UID: \"7fae9393-1ca8-4304-92ff-78f8f2d85288\") " pod="openshift-console/console-6f98bb7c67-q7pqm"
Mar 20 08:54:10.538414 master-0 kubenswrapper[27820]: I0320 08:54:10.538344 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7fae9393-1ca8-4304-92ff-78f8f2d85288-trusted-ca-bundle\") pod \"console-6f98bb7c67-q7pqm\" (UID: \"7fae9393-1ca8-4304-92ff-78f8f2d85288\") " pod="openshift-console/console-6f98bb7c67-q7pqm"
Mar 20 08:54:10.540525 master-0 kubenswrapper[27820]: I0320 08:54:10.540466 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7fae9393-1ca8-4304-92ff-78f8f2d85288-console-oauth-config\") pod \"console-6f98bb7c67-q7pqm\" (UID: \"7fae9393-1ca8-4304-92ff-78f8f2d85288\") " pod="openshift-console/console-6f98bb7c67-q7pqm"
Mar 20 08:54:10.557410 master-0 kubenswrapper[27820]: I0320 08:54:10.556077 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7fae9393-1ca8-4304-92ff-78f8f2d85288-console-serving-cert\") pod \"console-6f98bb7c67-q7pqm\" (UID: \"7fae9393-1ca8-4304-92ff-78f8f2d85288\") " pod="openshift-console/console-6f98bb7c67-q7pqm"
Mar 20 08:54:10.557410 master-0 kubenswrapper[27820]: I0320 08:54:10.557337 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbq56\" (UniqueName: \"kubernetes.io/projected/7fae9393-1ca8-4304-92ff-78f8f2d85288-kube-api-access-bbq56\") pod \"console-6f98bb7c67-q7pqm\" (UID: \"7fae9393-1ca8-4304-92ff-78f8f2d85288\") " pod="openshift-console/console-6f98bb7c67-q7pqm"
Mar 20 08:54:10.667320 master-0 kubenswrapper[27820]: I0320 08:54:10.667195 27820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6f98bb7c67-q7pqm"
Mar 20 08:54:11.136473 master-0 kubenswrapper[27820]: I0320 08:54:11.136416 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Mar 20 08:54:11.145133 master-0 kubenswrapper[27820]: W0320 08:54:11.145078 27820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8e6b860_8317_4fbe_9f43_41b0b707bc1b.slice/crio-a51a6c6d6ea71c0cdf6b651033a8f4d9d4010cc213330dd350b3500fa8d0e302 WatchSource:0}: Error finding container a51a6c6d6ea71c0cdf6b651033a8f4d9d4010cc213330dd350b3500fa8d0e302: Status 404 returned error can't find the container with id a51a6c6d6ea71c0cdf6b651033a8f4d9d4010cc213330dd350b3500fa8d0e302
Mar 20 08:54:11.274402 master-0 kubenswrapper[27820]: I0320 08:54:11.272133 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-6f98bb7c67-q7pqm"]
Mar 20 08:54:11.274402 master-0 kubenswrapper[27820]: W0320 08:54:11.273454 27820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7fae9393_1ca8_4304_92ff_78f8f2d85288.slice/crio-b0d0773eac6ad9fca11ecf90a37a5f0d38779f748fb79fd21a61225fd61b7e3d WatchSource:0}: Error finding container b0d0773eac6ad9fca11ecf90a37a5f0d38779f748fb79fd21a61225fd61b7e3d: Status 404 returned error can't find the container with id b0d0773eac6ad9fca11ecf90a37a5f0d38779f748fb79fd21a61225fd61b7e3d
Mar 20 08:54:11.292861 master-0 kubenswrapper[27820]: I0320 08:54:11.292822 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-7b58769b45-q7j7f"]
Mar 20 08:54:11.295587 master-0 kubenswrapper[27820]: W0320 08:54:11.295554 27820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod06e28acf_6ec7_4e0d_bb87_6577b30f7c35.slice/crio-52c9ce3bc0950d6d415a11c9ddecc0d48093389a8572468e63ac9f2597756860 WatchSource:0}: Error finding container 52c9ce3bc0950d6d415a11c9ddecc0d48093389a8572468e63ac9f2597756860: Status 404 returned error can't find the container with id 52c9ce3bc0950d6d415a11c9ddecc0d48093389a8572468e63ac9f2597756860
Mar 20 08:54:11.561651 master-0 kubenswrapper[27820]: I0320 08:54:11.561526 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-75bd5bfbf6-b7f22" event={"ID":"aae65814-71b3-40b5-be46-7bf04aa6aa58","Type":"ContainerStarted","Data":"5a6b6e9ed5aad333e6c7fc2a317674bca57e1c285454b138600bc8a3fddef6e8"}
Mar 20 08:54:11.562721 master-0 kubenswrapper[27820]: I0320 08:54:11.562671 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-7b58769b45-q7j7f" event={"ID":"06e28acf-6ec7-4e0d-bb87-6577b30f7c35","Type":"ContainerStarted","Data":"52c9ce3bc0950d6d415a11c9ddecc0d48093389a8572468e63ac9f2597756860"}
Mar 20 08:54:11.565453 master-0 kubenswrapper[27820]: I0320 08:54:11.565424 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6f98bb7c67-q7pqm" event={"ID":"7fae9393-1ca8-4304-92ff-78f8f2d85288","Type":"ContainerStarted","Data":"380ce3a458a1b1aadc2317b6b842c3e3fb75748e08cb253b219fd4a94df2e173"}
Mar 20 08:54:11.565536 master-0 kubenswrapper[27820]: I0320 08:54:11.565456 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6f98bb7c67-q7pqm" event={"ID":"7fae9393-1ca8-4304-92ff-78f8f2d85288","Type":"ContainerStarted","Data":"b0d0773eac6ad9fca11ecf90a37a5f0d38779f748fb79fd21a61225fd61b7e3d"}
Mar 20 08:54:11.566831 master-0 kubenswrapper[27820]: I0320 08:54:11.566792 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"a8e6b860-8317-4fbe-9f43-41b0b707bc1b","Type":"ContainerStarted","Data":"a51a6c6d6ea71c0cdf6b651033a8f4d9d4010cc213330dd350b3500fa8d0e302"}
Mar 20 08:54:11.583779 master-0 kubenswrapper[27820]: I0320 08:54:11.582978 27820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-75bd5bfbf6-b7f22" podStartSLOduration=1.889407278 podStartE2EDuration="6.582961769s" podCreationTimestamp="2026-03-20 08:54:05 +0000 UTC" firstStartedPulling="2026-03-20 08:54:06.05804095 +0000 UTC m=+256.153250104" lastFinishedPulling="2026-03-20 08:54:10.751595451 +0000 UTC m=+260.846804595" observedRunningTime="2026-03-20 08:54:11.579437143 +0000 UTC m=+261.674646317" watchObservedRunningTime="2026-03-20 08:54:11.582961769 +0000 UTC m=+261.678170923"
Mar 20 08:54:11.600124 master-0 kubenswrapper[27820]: I0320 08:54:11.600030 27820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-6f98bb7c67-q7pqm" podStartSLOduration=1.6000088030000001 podStartE2EDuration="1.600008803s" podCreationTimestamp="2026-03-20 08:54:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:54:11.596665182 +0000 UTC m=+261.691874346" watchObservedRunningTime="2026-03-20 08:54:11.600008803 +0000 UTC m=+261.695217947"
Mar 20 08:54:12.063287 master-0 kubenswrapper[27820]: I0320 08:54:12.059601 27820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-64c7dbd4b9-vtfnn"]
Mar 20 08:54:12.063287 master-0 kubenswrapper[27820]: I0320 08:54:12.060604 27820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-64c7dbd4b9-vtfnn"
Mar 20 08:54:12.071981 master-0 kubenswrapper[27820]: I0320 08:54:12.065896 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-9h4r8c2meiet3"
Mar 20 08:54:12.071981 master-0 kubenswrapper[27820]: I0320 08:54:12.069807 27820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/metrics-server-55d84d7794-56n4c"]
Mar 20 08:54:12.071981 master-0 kubenswrapper[27820]: I0320 08:54:12.070509 27820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/metrics-server-55d84d7794-56n4c" podUID="04466971-127b-403e-af45-dad97b6e0c87" containerName="metrics-server" containerID="cri-o://b842607819c12e2c961d4115971433a287618e424b9b5e836fdeed85d90e9244" gracePeriod=170
Mar 20 08:54:12.088207 master-0 kubenswrapper[27820]: I0320 08:54:12.088141 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-64c7dbd4b9-vtfnn"]
Mar 20 08:54:12.172347 master-0 kubenswrapper[27820]: I0320 08:54:12.172284 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/83bf09c7-7ad7-4f3c-8e55-b154af674183-secret-metrics-server-tls\") pod \"metrics-server-64c7dbd4b9-vtfnn\" (UID: \"83bf09c7-7ad7-4f3c-8e55-b154af674183\") " pod="openshift-monitoring/metrics-server-64c7dbd4b9-vtfnn"
Mar 20 08:54:12.172585 master-0 kubenswrapper[27820]: I0320 08:54:12.172400 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/83bf09c7-7ad7-4f3c-8e55-b154af674183-secret-metrics-client-certs\") pod \"metrics-server-64c7dbd4b9-vtfnn\" (UID: \"83bf09c7-7ad7-4f3c-8e55-b154af674183\") " pod="openshift-monitoring/metrics-server-64c7dbd4b9-vtfnn"
Mar 20 08:54:12.172585 master-0 kubenswrapper[27820]: I0320 08:54:12.172526 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83bf09c7-7ad7-4f3c-8e55-b154af674183-client-ca-bundle\") pod \"metrics-server-64c7dbd4b9-vtfnn\" (UID: \"83bf09c7-7ad7-4f3c-8e55-b154af674183\") " pod="openshift-monitoring/metrics-server-64c7dbd4b9-vtfnn"
Mar 20 08:54:12.172585 master-0 kubenswrapper[27820]: I0320 08:54:12.172554 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/83bf09c7-7ad7-4f3c-8e55-b154af674183-metrics-server-audit-profiles\") pod \"metrics-server-64c7dbd4b9-vtfnn\" (UID: \"83bf09c7-7ad7-4f3c-8e55-b154af674183\") " pod="openshift-monitoring/metrics-server-64c7dbd4b9-vtfnn"
Mar 20 08:54:12.172683 master-0 kubenswrapper[27820]: I0320 08:54:12.172589 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlh6s\" (UniqueName: \"kubernetes.io/projected/83bf09c7-7ad7-4f3c-8e55-b154af674183-kube-api-access-xlh6s\") pod \"metrics-server-64c7dbd4b9-vtfnn\" (UID: \"83bf09c7-7ad7-4f3c-8e55-b154af674183\") " pod="openshift-monitoring/metrics-server-64c7dbd4b9-vtfnn"
Mar 20 08:54:12.172683 master-0 kubenswrapper[27820]: I0320 08:54:12.172619 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83bf09c7-7ad7-4f3c-8e55-b154af674183-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-64c7dbd4b9-vtfnn\" (UID: \"83bf09c7-7ad7-4f3c-8e55-b154af674183\") " pod="openshift-monitoring/metrics-server-64c7dbd4b9-vtfnn"
Mar 20 08:54:12.172683 master-0 kubenswrapper[27820]: I0320 08:54:12.172675 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/83bf09c7-7ad7-4f3c-8e55-b154af674183-audit-log\") pod \"metrics-server-64c7dbd4b9-vtfnn\" (UID: \"83bf09c7-7ad7-4f3c-8e55-b154af674183\") " pod="openshift-monitoring/metrics-server-64c7dbd4b9-vtfnn"
Mar 20 08:54:12.255257 master-0 kubenswrapper[27820]: I0320 08:54:12.255080 27820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/telemeter-client-69449d79f9-kr2pv"]
Mar 20 08:54:12.260295 master-0 kubenswrapper[27820]: I0320 08:54:12.257185 27820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-69449d79f9-kr2pv"
Mar 20 08:54:12.260295 master-0 kubenswrapper[27820]: I0320 08:54:12.259065 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle"
Mar 20 08:54:12.260295 master-0 kubenswrapper[27820]: I0320 08:54:12.259227 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-tls"
Mar 20 08:54:12.260295 master-0 kubenswrapper[27820]: I0320 08:54:12.259305 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"federate-client-certs"
Mar 20 08:54:12.260295 master-0 kubenswrapper[27820]: I0320 08:54:12.259335 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client"
Mar 20 08:54:12.260295 master-0 kubenswrapper[27820]: I0320 08:54:12.259466 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config"
Mar 20 08:54:12.267748 master-0 kubenswrapper[27820]: I0320 08:54:12.267711 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38"
Mar 20 08:54:12.271544 master-0 kubenswrapper[27820]: I0320 08:54:12.271512 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/telemeter-client-69449d79f9-kr2pv"]
Mar 20 08:54:12.274476 master-0 kubenswrapper[27820]: I0320 08:54:12.274445 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/83bf09c7-7ad7-4f3c-8e55-b154af674183-audit-log\") pod \"metrics-server-64c7dbd4b9-vtfnn\" (UID: \"83bf09c7-7ad7-4f3c-8e55-b154af674183\") " pod="openshift-monitoring/metrics-server-64c7dbd4b9-vtfnn"
Mar 20 08:54:12.274539 master-0 kubenswrapper[27820]: I0320 08:54:12.274492 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/83bf09c7-7ad7-4f3c-8e55-b154af674183-secret-metrics-server-tls\") pod \"metrics-server-64c7dbd4b9-vtfnn\" (UID: \"83bf09c7-7ad7-4f3c-8e55-b154af674183\") " pod="openshift-monitoring/metrics-server-64c7dbd4b9-vtfnn"
Mar 20 08:54:12.274539 master-0 kubenswrapper[27820]: I0320 08:54:12.274532 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/83bf09c7-7ad7-4f3c-8e55-b154af674183-secret-metrics-client-certs\") pod \"metrics-server-64c7dbd4b9-vtfnn\" (UID: \"83bf09c7-7ad7-4f3c-8e55-b154af674183\") " pod="openshift-monitoring/metrics-server-64c7dbd4b9-vtfnn"
Mar 20 08:54:12.274642 master-0 kubenswrapper[27820]: I0320 08:54:12.274621 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/83bf09c7-7ad7-4f3c-8e55-b154af674183-metrics-server-audit-profiles\") pod \"metrics-server-64c7dbd4b9-vtfnn\" (UID: \"83bf09c7-7ad7-4f3c-8e55-b154af674183\") " pod="openshift-monitoring/metrics-server-64c7dbd4b9-vtfnn"
Mar 20 08:54:12.274694 master-0 kubenswrapper[27820]: I0320 08:54:12.274648 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83bf09c7-7ad7-4f3c-8e55-b154af674183-client-ca-bundle\") pod \"metrics-server-64c7dbd4b9-vtfnn\" (UID: \"83bf09c7-7ad7-4f3c-8e55-b154af674183\") " pod="openshift-monitoring/metrics-server-64c7dbd4b9-vtfnn"
Mar 20 08:54:12.274694 master-0 kubenswrapper[27820]: I0320 08:54:12.274683 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlh6s\" (UniqueName: \"kubernetes.io/projected/83bf09c7-7ad7-4f3c-8e55-b154af674183-kube-api-access-xlh6s\") pod \"metrics-server-64c7dbd4b9-vtfnn\" (UID: \"83bf09c7-7ad7-4f3c-8e55-b154af674183\") " pod="openshift-monitoring/metrics-server-64c7dbd4b9-vtfnn"
Mar 20 08:54:12.274758 master-0 kubenswrapper[27820]: I0320 08:54:12.274712 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83bf09c7-7ad7-4f3c-8e55-b154af674183-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-64c7dbd4b9-vtfnn\" (UID: \"83bf09c7-7ad7-4f3c-8e55-b154af674183\") " pod="openshift-monitoring/metrics-server-64c7dbd4b9-vtfnn"
Mar 20 08:54:12.275731 master-0 kubenswrapper[27820]: I0320 08:54:12.275705 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83bf09c7-7ad7-4f3c-8e55-b154af674183-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-64c7dbd4b9-vtfnn\" (UID: \"83bf09c7-7ad7-4f3c-8e55-b154af674183\") " pod="openshift-monitoring/metrics-server-64c7dbd4b9-vtfnn"
Mar 20 08:54:12.276068 master-0 kubenswrapper[27820]: I0320 08:54:12.276048 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/83bf09c7-7ad7-4f3c-8e55-b154af674183-audit-log\") pod \"metrics-server-64c7dbd4b9-vtfnn\" (UID: \"83bf09c7-7ad7-4f3c-8e55-b154af674183\") " pod="openshift-monitoring/metrics-server-64c7dbd4b9-vtfnn"
Mar 20 08:54:12.278964 master-0 kubenswrapper[27820]: I0320 08:54:12.278937 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/83bf09c7-7ad7-4f3c-8e55-b154af674183-secret-metrics-server-tls\") pod \"metrics-server-64c7dbd4b9-vtfnn\" (UID: \"83bf09c7-7ad7-4f3c-8e55-b154af674183\") " pod="openshift-monitoring/metrics-server-64c7dbd4b9-vtfnn"
Mar 20 08:54:12.284101 master-0 kubenswrapper[27820]: I0320 08:54:12.284076 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/83bf09c7-7ad7-4f3c-8e55-b154af674183-secret-metrics-client-certs\") pod \"metrics-server-64c7dbd4b9-vtfnn\" (UID: \"83bf09c7-7ad7-4f3c-8e55-b154af674183\") " pod="openshift-monitoring/metrics-server-64c7dbd4b9-vtfnn"
Mar 20 08:54:12.285380 master-0 kubenswrapper[27820]: I0320 08:54:12.285354 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/83bf09c7-7ad7-4f3c-8e55-b154af674183-metrics-server-audit-profiles\") pod \"metrics-server-64c7dbd4b9-vtfnn\" (UID: \"83bf09c7-7ad7-4f3c-8e55-b154af674183\") " pod="openshift-monitoring/metrics-server-64c7dbd4b9-vtfnn"
Mar 20 08:54:12.287902 master-0 kubenswrapper[27820]: I0320 08:54:12.287876 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83bf09c7-7ad7-4f3c-8e55-b154af674183-client-ca-bundle\") pod \"metrics-server-64c7dbd4b9-vtfnn\" (UID: \"83bf09c7-7ad7-4f3c-8e55-b154af674183\") " pod="openshift-monitoring/metrics-server-64c7dbd4b9-vtfnn"
Mar 20 08:54:12.331393 master-0 kubenswrapper[27820]: I0320 08:54:12.331179 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlh6s\" (UniqueName: \"kubernetes.io/projected/83bf09c7-7ad7-4f3c-8e55-b154af674183-kube-api-access-xlh6s\") pod \"metrics-server-64c7dbd4b9-vtfnn\" (UID: \"83bf09c7-7ad7-4f3c-8e55-b154af674183\") " pod="openshift-monitoring/metrics-server-64c7dbd4b9-vtfnn"
Mar 20 08:54:12.376537 master-0 kubenswrapper[27820]: I0320 08:54:12.376448 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v99bw\" (UniqueName: \"kubernetes.io/projected/7f15ea03-44e8-40ae-959d-9cca287d76c9-kube-api-access-v99bw\") pod \"telemeter-client-69449d79f9-kr2pv\" (UID: \"7f15ea03-44e8-40ae-959d-9cca287d76c9\") " pod="openshift-monitoring/telemeter-client-69449d79f9-kr2pv"
Mar 20 08:54:12.376537 master-0 kubenswrapper[27820]: I0320 08:54:12.376531 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f15ea03-44e8-40ae-959d-9cca287d76c9-telemeter-trusted-ca-bundle\") pod \"telemeter-client-69449d79f9-kr2pv\" (UID: \"7f15ea03-44e8-40ae-959d-9cca287d76c9\") " pod="openshift-monitoring/telemeter-client-69449d79f9-kr2pv"
Mar 20 08:54:12.376710 master-0 kubenswrapper[27820]: I0320 08:54:12.376584 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/7f15ea03-44e8-40ae-959d-9cca287d76c9-metrics-client-ca\") pod \"telemeter-client-69449d79f9-kr2pv\" (UID: \"7f15ea03-44e8-40ae-959d-9cca287d76c9\") " pod="openshift-monitoring/telemeter-client-69449d79f9-kr2pv"
Mar 20 08:54:12.376710 master-0 kubenswrapper[27820]: I0320 08:54:12.376676 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/7f15ea03-44e8-40ae-959d-9cca287d76c9-federate-client-tls\") pod \"telemeter-client-69449d79f9-kr2pv\" (UID: \"7f15ea03-44e8-40ae-959d-9cca287d76c9\") " pod="openshift-monitoring/telemeter-client-69449d79f9-kr2pv"
Mar 20 08:54:12.376710 master-0 kubenswrapper[27820]: I0320 08:54:12.376703 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/7f15ea03-44e8-40ae-959d-9cca287d76c9-telemeter-client-tls\") pod \"telemeter-client-69449d79f9-kr2pv\" (UID: \"7f15ea03-44e8-40ae-959d-9cca287d76c9\") " pod="openshift-monitoring/telemeter-client-69449d79f9-kr2pv"
Mar 20 08:54:12.376801 master-0 kubenswrapper[27820]: I0320 08:54:12.376720 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/7f15ea03-44e8-40ae-959d-9cca287d76c9-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-69449d79f9-kr2pv\" (UID: \"7f15ea03-44e8-40ae-959d-9cca287d76c9\") " pod="openshift-monitoring/telemeter-client-69449d79f9-kr2pv"
Mar 20 08:54:12.376801 master-0 kubenswrapper[27820]: I0320 08:54:12.376740 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f15ea03-44e8-40ae-959d-9cca287d76c9-serving-certs-ca-bundle\") pod \"telemeter-client-69449d79f9-kr2pv\" (UID: \"7f15ea03-44e8-40ae-959d-9cca287d76c9\") " pod="openshift-monitoring/telemeter-client-69449d79f9-kr2pv"
Mar 20 08:54:12.376801 master-0 kubenswrapper[27820]: I0320 08:54:12.376772 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/7f15ea03-44e8-40ae-959d-9cca287d76c9-secret-telemeter-client\") pod \"telemeter-client-69449d79f9-kr2pv\" (UID: \"7f15ea03-44e8-40ae-959d-9cca287d76c9\") " pod="openshift-monitoring/telemeter-client-69449d79f9-kr2pv"
Mar 20 08:54:12.400328 master-0 kubenswrapper[27820]: I0320 08:54:12.399860 27820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-64c7dbd4b9-vtfnn"
Mar 20 08:54:12.478244 master-0 kubenswrapper[27820]: I0320 08:54:12.478143 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v99bw\" (UniqueName: \"kubernetes.io/projected/7f15ea03-44e8-40ae-959d-9cca287d76c9-kube-api-access-v99bw\") pod \"telemeter-client-69449d79f9-kr2pv\" (UID: \"7f15ea03-44e8-40ae-959d-9cca287d76c9\") " pod="openshift-monitoring/telemeter-client-69449d79f9-kr2pv"
Mar 20 08:54:12.478622 master-0 kubenswrapper[27820]: I0320 08:54:12.478560 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f15ea03-44e8-40ae-959d-9cca287d76c9-telemeter-trusted-ca-bundle\") pod \"telemeter-client-69449d79f9-kr2pv\" (UID: \"7f15ea03-44e8-40ae-959d-9cca287d76c9\") " pod="openshift-monitoring/telemeter-client-69449d79f9-kr2pv"
Mar 20 08:54:12.478788 master-0 kubenswrapper[27820]: I0320 08:54:12.478723 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/7f15ea03-44e8-40ae-959d-9cca287d76c9-metrics-client-ca\") pod \"telemeter-client-69449d79f9-kr2pv\" (UID: \"7f15ea03-44e8-40ae-959d-9cca287d76c9\") " pod="openshift-monitoring/telemeter-client-69449d79f9-kr2pv"
Mar 20 08:54:12.479030 master-0 kubenswrapper[27820]: I0320 08:54:12.478957 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/7f15ea03-44e8-40ae-959d-9cca287d76c9-federate-client-tls\") pod \"telemeter-client-69449d79f9-kr2pv\" (UID: \"7f15ea03-44e8-40ae-959d-9cca287d76c9\") " pod="openshift-monitoring/telemeter-client-69449d79f9-kr2pv"
Mar 20 08:54:12.479030 master-0 kubenswrapper[27820]: I0320 08:54:12.478987 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/7f15ea03-44e8-40ae-959d-9cca287d76c9-telemeter-client-tls\") pod \"telemeter-client-69449d79f9-kr2pv\" (UID: \"7f15ea03-44e8-40ae-959d-9cca287d76c9\") " pod="openshift-monitoring/telemeter-client-69449d79f9-kr2pv"
Mar 20 08:54:12.479604 master-0 kubenswrapper[27820]: I0320 08:54:12.479296 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/7f15ea03-44e8-40ae-959d-9cca287d76c9-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-69449d79f9-kr2pv\" (UID: \"7f15ea03-44e8-40ae-959d-9cca287d76c9\") " pod="openshift-monitoring/telemeter-client-69449d79f9-kr2pv"
Mar 20 08:54:12.479604 master-0 kubenswrapper[27820]: I0320 08:54:12.479399 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f15ea03-44e8-40ae-959d-9cca287d76c9-serving-certs-ca-bundle\") pod \"telemeter-client-69449d79f9-kr2pv\" (UID: \"7f15ea03-44e8-40ae-959d-9cca287d76c9\") " pod="openshift-monitoring/telemeter-client-69449d79f9-kr2pv"
Mar 20 08:54:12.479604 master-0 kubenswrapper[27820]: I0320 08:54:12.479457 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/7f15ea03-44e8-40ae-959d-9cca287d76c9-secret-telemeter-client\") pod \"telemeter-client-69449d79f9-kr2pv\" (UID: \"7f15ea03-44e8-40ae-959d-9cca287d76c9\") " pod="openshift-monitoring/telemeter-client-69449d79f9-kr2pv"
Mar 20 08:54:12.479604 master-0 kubenswrapper[27820]: I0320 08:54:12.479557 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f15ea03-44e8-40ae-959d-9cca287d76c9-telemeter-trusted-ca-bundle\") pod \"telemeter-client-69449d79f9-kr2pv\" (UID: \"7f15ea03-44e8-40ae-959d-9cca287d76c9\") " pod="openshift-monitoring/telemeter-client-69449d79f9-kr2pv"
Mar 20 08:54:12.479827 master-0 kubenswrapper[27820]: I0320 08:54:12.479754 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/7f15ea03-44e8-40ae-959d-9cca287d76c9-metrics-client-ca\") pod \"telemeter-client-69449d79f9-kr2pv\" (UID: \"7f15ea03-44e8-40ae-959d-9cca287d76c9\") " pod="openshift-monitoring/telemeter-client-69449d79f9-kr2pv"
Mar 20 08:54:12.480402 master-0 kubenswrapper[27820]: I0320 08:54:12.480371 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7f15ea03-44e8-40ae-959d-9cca287d76c9-serving-certs-ca-bundle\") pod \"telemeter-client-69449d79f9-kr2pv\" (UID: \"7f15ea03-44e8-40ae-959d-9cca287d76c9\") " pod="openshift-monitoring/telemeter-client-69449d79f9-kr2pv"
Mar 20 08:54:12.483736 master-0 kubenswrapper[27820]: I0320 08:54:12.483331 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/7f15ea03-44e8-40ae-959d-9cca287d76c9-secret-telemeter-client\") pod \"telemeter-client-69449d79f9-kr2pv\" (UID: \"7f15ea03-44e8-40ae-959d-9cca287d76c9\") " pod="openshift-monitoring/telemeter-client-69449d79f9-kr2pv"
Mar 20 08:54:12.483736 master-0 kubenswrapper[27820]: I0320 08:54:12.483587 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/7f15ea03-44e8-40ae-959d-9cca287d76c9-federate-client-tls\") pod \"telemeter-client-69449d79f9-kr2pv\" (UID: \"7f15ea03-44e8-40ae-959d-9cca287d76c9\") " pod="openshift-monitoring/telemeter-client-69449d79f9-kr2pv"
Mar 20 08:54:12.483736 master-0 kubenswrapper[27820]: I0320 08:54:12.483697 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-client-tls\"
(UniqueName: \"kubernetes.io/secret/7f15ea03-44e8-40ae-959d-9cca287d76c9-telemeter-client-tls\") pod \"telemeter-client-69449d79f9-kr2pv\" (UID: \"7f15ea03-44e8-40ae-959d-9cca287d76c9\") " pod="openshift-monitoring/telemeter-client-69449d79f9-kr2pv" Mar 20 08:54:12.484661 master-0 kubenswrapper[27820]: I0320 08:54:12.484627 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/7f15ea03-44e8-40ae-959d-9cca287d76c9-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-69449d79f9-kr2pv\" (UID: \"7f15ea03-44e8-40ae-959d-9cca287d76c9\") " pod="openshift-monitoring/telemeter-client-69449d79f9-kr2pv" Mar 20 08:54:12.494114 master-0 kubenswrapper[27820]: I0320 08:54:12.494069 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v99bw\" (UniqueName: \"kubernetes.io/projected/7f15ea03-44e8-40ae-959d-9cca287d76c9-kube-api-access-v99bw\") pod \"telemeter-client-69449d79f9-kr2pv\" (UID: \"7f15ea03-44e8-40ae-959d-9cca287d76c9\") " pod="openshift-monitoring/telemeter-client-69449d79f9-kr2pv" Mar 20 08:54:12.673123 master-0 kubenswrapper[27820]: I0320 08:54:12.673071 27820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/telemeter-client-69449d79f9-kr2pv" Mar 20 08:54:12.936152 master-0 kubenswrapper[27820]: I0320 08:54:12.936100 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-64c7dbd4b9-vtfnn"] Mar 20 08:54:12.938116 master-0 kubenswrapper[27820]: W0320 08:54:12.938080 27820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod83bf09c7_7ad7_4f3c_8e55_b154af674183.slice/crio-2a4fdffea04022ae216d790021b4243e65e3147e321b7966f2f49eee778286de WatchSource:0}: Error finding container 2a4fdffea04022ae216d790021b4243e65e3147e321b7966f2f49eee778286de: Status 404 returned error can't find the container with id 2a4fdffea04022ae216d790021b4243e65e3147e321b7966f2f49eee778286de Mar 20 08:54:13.078760 master-0 kubenswrapper[27820]: I0320 08:54:13.078719 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/telemeter-client-69449d79f9-kr2pv"] Mar 20 08:54:13.602292 master-0 kubenswrapper[27820]: I0320 08:54:13.598383 27820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Mar 20 08:54:13.611949 master-0 kubenswrapper[27820]: I0320 08:54:13.611895 27820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Mar 20 08:54:13.633283 master-0 kubenswrapper[27820]: I0320 08:54:13.631647 27820 generic.go:334] "Generic (PLEG): container finished" podID="a8e6b860-8317-4fbe-9f43-41b0b707bc1b" containerID="29fb44762af9cdaf889203cfbaefadf72c5802140450e668a3478bdf1bfb1bd2" exitCode=0 Mar 20 08:54:13.633283 master-0 kubenswrapper[27820]: I0320 08:54:13.631781 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"a8e6b860-8317-4fbe-9f43-41b0b707bc1b","Type":"ContainerDied","Data":"29fb44762af9cdaf889203cfbaefadf72c5802140450e668a3478bdf1bfb1bd2"} Mar 20 08:54:13.652123 master-0 kubenswrapper[27820]: I0320 08:54:13.651809 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Mar 20 08:54:13.652795 master-0 kubenswrapper[27820]: I0320 08:54:13.652752 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Mar 20 08:54:13.661617 master-0 kubenswrapper[27820]: I0320 08:54:13.661580 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Mar 20 08:54:13.661885 master-0 kubenswrapper[27820]: I0320 08:54:13.661866 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Mar 20 08:54:13.662073 master-0 kubenswrapper[27820]: I0320 08:54:13.662045 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Mar 20 08:54:13.662249 master-0 kubenswrapper[27820]: I0320 08:54:13.662233 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Mar 20 08:54:13.662369 master-0 kubenswrapper[27820]: I0320 08:54:13.662353 27820 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Mar 20 08:54:13.662527 master-0 kubenswrapper[27820]: I0320 08:54:13.662496 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Mar 20 08:54:13.663081 master-0 kubenswrapper[27820]: I0320 08:54:13.663055 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Mar 20 08:54:13.663177 master-0 kubenswrapper[27820]: I0320 08:54:13.663162 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-chhohvmqogrio" Mar 20 08:54:13.667231 master-0 kubenswrapper[27820]: I0320 08:54:13.667181 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-64c7dbd4b9-vtfnn" event={"ID":"83bf09c7-7ad7-4f3c-8e55-b154af674183","Type":"ContainerStarted","Data":"17657bea27a72be845130a08735ab96f177ad78e42f911fb03009abca5443fc6"} Mar 20 08:54:13.667368 master-0 kubenswrapper[27820]: I0320 08:54:13.667234 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-64c7dbd4b9-vtfnn" event={"ID":"83bf09c7-7ad7-4f3c-8e55-b154af674183","Type":"ContainerStarted","Data":"2a4fdffea04022ae216d790021b4243e65e3147e321b7966f2f49eee778286de"} Mar 20 08:54:13.668332 master-0 kubenswrapper[27820]: I0320 08:54:13.668307 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Mar 20 08:54:13.674045 master-0 kubenswrapper[27820]: I0320 08:54:13.673373 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Mar 20 08:54:13.682770 master-0 kubenswrapper[27820]: I0320 08:54:13.682681 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Mar 20 08:54:13.722615 master-0 kubenswrapper[27820]: I0320 
08:54:13.709168 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/c9917efe-9886-4199-b78f-cb3ed320bff7-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"c9917efe-9886-4199-b78f-cb3ed320bff7\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 20 08:54:13.722615 master-0 kubenswrapper[27820]: I0320 08:54:13.709284 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/c9917efe-9886-4199-b78f-cb3ed320bff7-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"c9917efe-9886-4199-b78f-cb3ed320bff7\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 20 08:54:13.722615 master-0 kubenswrapper[27820]: I0320 08:54:13.709333 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/c9917efe-9886-4199-b78f-cb3ed320bff7-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"c9917efe-9886-4199-b78f-cb3ed320bff7\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 20 08:54:13.722615 master-0 kubenswrapper[27820]: I0320 08:54:13.709350 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/c9917efe-9886-4199-b78f-cb3ed320bff7-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"c9917efe-9886-4199-b78f-cb3ed320bff7\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 20 08:54:13.722615 master-0 kubenswrapper[27820]: I0320 08:54:13.709389 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c9917efe-9886-4199-b78f-cb3ed320bff7-tls-assets\") pod \"prometheus-k8s-0\" (UID: 
\"c9917efe-9886-4199-b78f-cb3ed320bff7\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 20 08:54:13.722615 master-0 kubenswrapper[27820]: I0320 08:54:13.709426 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/c9917efe-9886-4199-b78f-cb3ed320bff7-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"c9917efe-9886-4199-b78f-cb3ed320bff7\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 20 08:54:13.722615 master-0 kubenswrapper[27820]: I0320 08:54:13.709445 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9917efe-9886-4199-b78f-cb3ed320bff7-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"c9917efe-9886-4199-b78f-cb3ed320bff7\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 20 08:54:13.722615 master-0 kubenswrapper[27820]: I0320 08:54:13.709477 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9917efe-9886-4199-b78f-cb3ed320bff7-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"c9917efe-9886-4199-b78f-cb3ed320bff7\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 20 08:54:13.722615 master-0 kubenswrapper[27820]: I0320 08:54:13.709507 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c9917efe-9886-4199-b78f-cb3ed320bff7-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"c9917efe-9886-4199-b78f-cb3ed320bff7\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 20 08:54:13.722615 master-0 kubenswrapper[27820]: I0320 08:54:13.709524 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9917efe-9886-4199-b78f-cb3ed320bff7-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"c9917efe-9886-4199-b78f-cb3ed320bff7\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 20 08:54:13.722615 master-0 kubenswrapper[27820]: I0320 08:54:13.709546 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/c9917efe-9886-4199-b78f-cb3ed320bff7-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"c9917efe-9886-4199-b78f-cb3ed320bff7\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 20 08:54:13.722615 master-0 kubenswrapper[27820]: I0320 08:54:13.709572 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c9917efe-9886-4199-b78f-cb3ed320bff7-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"c9917efe-9886-4199-b78f-cb3ed320bff7\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 20 08:54:13.722615 master-0 kubenswrapper[27820]: I0320 08:54:13.709606 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82ww5\" (UniqueName: \"kubernetes.io/projected/c9917efe-9886-4199-b78f-cb3ed320bff7-kube-api-access-82ww5\") pod \"prometheus-k8s-0\" (UID: \"c9917efe-9886-4199-b78f-cb3ed320bff7\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 20 08:54:13.722615 master-0 kubenswrapper[27820]: I0320 08:54:13.709624 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c9917efe-9886-4199-b78f-cb3ed320bff7-config-out\") pod \"prometheus-k8s-0\" (UID: \"c9917efe-9886-4199-b78f-cb3ed320bff7\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 20 
08:54:13.722615 master-0 kubenswrapper[27820]: I0320 08:54:13.709641 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c9917efe-9886-4199-b78f-cb3ed320bff7-config\") pod \"prometheus-k8s-0\" (UID: \"c9917efe-9886-4199-b78f-cb3ed320bff7\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 20 08:54:13.722615 master-0 kubenswrapper[27820]: I0320 08:54:13.709656 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/c9917efe-9886-4199-b78f-cb3ed320bff7-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"c9917efe-9886-4199-b78f-cb3ed320bff7\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 20 08:54:13.722615 master-0 kubenswrapper[27820]: I0320 08:54:13.709676 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c9917efe-9886-4199-b78f-cb3ed320bff7-web-config\") pod \"prometheus-k8s-0\" (UID: \"c9917efe-9886-4199-b78f-cb3ed320bff7\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 20 08:54:13.722615 master-0 kubenswrapper[27820]: I0320 08:54:13.709703 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c9917efe-9886-4199-b78f-cb3ed320bff7-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"c9917efe-9886-4199-b78f-cb3ed320bff7\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 20 08:54:13.769345 master-0 kubenswrapper[27820]: I0320 08:54:13.769245 27820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-64c7dbd4b9-vtfnn" podStartSLOduration=1.769223165 podStartE2EDuration="1.769223165s" podCreationTimestamp="2026-03-20 08:54:12 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:54:13.762644865 +0000 UTC m=+263.857854029" watchObservedRunningTime="2026-03-20 08:54:13.769223165 +0000 UTC m=+263.864432319" Mar 20 08:54:13.811083 master-0 kubenswrapper[27820]: I0320 08:54:13.811018 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/c9917efe-9886-4199-b78f-cb3ed320bff7-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"c9917efe-9886-4199-b78f-cb3ed320bff7\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 20 08:54:13.811307 master-0 kubenswrapper[27820]: I0320 08:54:13.811152 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/c9917efe-9886-4199-b78f-cb3ed320bff7-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"c9917efe-9886-4199-b78f-cb3ed320bff7\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 20 08:54:13.811307 master-0 kubenswrapper[27820]: I0320 08:54:13.811207 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c9917efe-9886-4199-b78f-cb3ed320bff7-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"c9917efe-9886-4199-b78f-cb3ed320bff7\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 20 08:54:13.811307 master-0 kubenswrapper[27820]: I0320 08:54:13.811239 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/c9917efe-9886-4199-b78f-cb3ed320bff7-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"c9917efe-9886-4199-b78f-cb3ed320bff7\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 20 08:54:13.812414 master-0 kubenswrapper[27820]: I0320 08:54:13.812372 27820 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9917efe-9886-4199-b78f-cb3ed320bff7-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"c9917efe-9886-4199-b78f-cb3ed320bff7\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 20 08:54:13.812707 master-0 kubenswrapper[27820]: I0320 08:54:13.812506 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9917efe-9886-4199-b78f-cb3ed320bff7-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"c9917efe-9886-4199-b78f-cb3ed320bff7\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 20 08:54:13.812783 master-0 kubenswrapper[27820]: I0320 08:54:13.812757 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9917efe-9886-4199-b78f-cb3ed320bff7-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"c9917efe-9886-4199-b78f-cb3ed320bff7\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 20 08:54:13.812826 master-0 kubenswrapper[27820]: I0320 08:54:13.812786 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c9917efe-9886-4199-b78f-cb3ed320bff7-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"c9917efe-9886-4199-b78f-cb3ed320bff7\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 20 08:54:13.812881 master-0 kubenswrapper[27820]: I0320 08:54:13.812861 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/c9917efe-9886-4199-b78f-cb3ed320bff7-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"c9917efe-9886-4199-b78f-cb3ed320bff7\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 20 
08:54:13.812920 master-0 kubenswrapper[27820]: I0320 08:54:13.812906 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c9917efe-9886-4199-b78f-cb3ed320bff7-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"c9917efe-9886-4199-b78f-cb3ed320bff7\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 20 08:54:13.813012 master-0 kubenswrapper[27820]: I0320 08:54:13.812993 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82ww5\" (UniqueName: \"kubernetes.io/projected/c9917efe-9886-4199-b78f-cb3ed320bff7-kube-api-access-82ww5\") pod \"prometheus-k8s-0\" (UID: \"c9917efe-9886-4199-b78f-cb3ed320bff7\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 20 08:54:13.813054 master-0 kubenswrapper[27820]: I0320 08:54:13.813031 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c9917efe-9886-4199-b78f-cb3ed320bff7-config-out\") pod \"prometheus-k8s-0\" (UID: \"c9917efe-9886-4199-b78f-cb3ed320bff7\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 20 08:54:13.813089 master-0 kubenswrapper[27820]: I0320 08:54:13.813057 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c9917efe-9886-4199-b78f-cb3ed320bff7-config\") pod \"prometheus-k8s-0\" (UID: \"c9917efe-9886-4199-b78f-cb3ed320bff7\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 20 08:54:13.813089 master-0 kubenswrapper[27820]: I0320 08:54:13.813081 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/c9917efe-9886-4199-b78f-cb3ed320bff7-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"c9917efe-9886-4199-b78f-cb3ed320bff7\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 20 
08:54:13.813142 master-0 kubenswrapper[27820]: I0320 08:54:13.813113 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c9917efe-9886-4199-b78f-cb3ed320bff7-web-config\") pod \"prometheus-k8s-0\" (UID: \"c9917efe-9886-4199-b78f-cb3ed320bff7\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 20 08:54:13.813172 master-0 kubenswrapper[27820]: I0320 08:54:13.813144 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c9917efe-9886-4199-b78f-cb3ed320bff7-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"c9917efe-9886-4199-b78f-cb3ed320bff7\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 20 08:54:13.813256 master-0 kubenswrapper[27820]: I0320 08:54:13.813235 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/c9917efe-9886-4199-b78f-cb3ed320bff7-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"c9917efe-9886-4199-b78f-cb3ed320bff7\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 20 08:54:13.813327 master-0 kubenswrapper[27820]: I0320 08:54:13.813242 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9917efe-9886-4199-b78f-cb3ed320bff7-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"c9917efe-9886-4199-b78f-cb3ed320bff7\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 20 08:54:13.813327 master-0 kubenswrapper[27820]: I0320 08:54:13.813313 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/c9917efe-9886-4199-b78f-cb3ed320bff7-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: 
\"c9917efe-9886-4199-b78f-cb3ed320bff7\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 20 08:54:13.814481 master-0 kubenswrapper[27820]: I0320 08:54:13.813955 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9917efe-9886-4199-b78f-cb3ed320bff7-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"c9917efe-9886-4199-b78f-cb3ed320bff7\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 20 08:54:13.814481 master-0 kubenswrapper[27820]: I0320 08:54:13.814032 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9917efe-9886-4199-b78f-cb3ed320bff7-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"c9917efe-9886-4199-b78f-cb3ed320bff7\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 20 08:54:13.814481 master-0 kubenswrapper[27820]: I0320 08:54:13.814035 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/c9917efe-9886-4199-b78f-cb3ed320bff7-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"c9917efe-9886-4199-b78f-cb3ed320bff7\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 20 08:54:13.816425 master-0 kubenswrapper[27820]: I0320 08:54:13.816389 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/c9917efe-9886-4199-b78f-cb3ed320bff7-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"c9917efe-9886-4199-b78f-cb3ed320bff7\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 20 08:54:13.818070 master-0 kubenswrapper[27820]: I0320 08:54:13.817769 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c9917efe-9886-4199-b78f-cb3ed320bff7-config\") pod 
\"prometheus-k8s-0\" (UID: \"c9917efe-9886-4199-b78f-cb3ed320bff7\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 20 08:54:13.818357 master-0 kubenswrapper[27820]: I0320 08:54:13.818317 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/c9917efe-9886-4199-b78f-cb3ed320bff7-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"c9917efe-9886-4199-b78f-cb3ed320bff7\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 20 08:54:13.818357 master-0 kubenswrapper[27820]: I0320 08:54:13.818342 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c9917efe-9886-4199-b78f-cb3ed320bff7-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"c9917efe-9886-4199-b78f-cb3ed320bff7\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 20 08:54:13.820065 master-0 kubenswrapper[27820]: I0320 08:54:13.819339 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c9917efe-9886-4199-b78f-cb3ed320bff7-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"c9917efe-9886-4199-b78f-cb3ed320bff7\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 20 08:54:13.820929 master-0 kubenswrapper[27820]: I0320 08:54:13.820901 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c9917efe-9886-4199-b78f-cb3ed320bff7-web-config\") pod \"prometheus-k8s-0\" (UID: \"c9917efe-9886-4199-b78f-cb3ed320bff7\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 20 08:54:13.823107 master-0 kubenswrapper[27820]: I0320 08:54:13.823060 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c9917efe-9886-4199-b78f-cb3ed320bff7-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: 
\"c9917efe-9886-4199-b78f-cb3ed320bff7\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 20 08:54:13.826318 master-0 kubenswrapper[27820]: I0320 08:54:13.824457 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/c9917efe-9886-4199-b78f-cb3ed320bff7-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"c9917efe-9886-4199-b78f-cb3ed320bff7\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 20 08:54:13.826318 master-0 kubenswrapper[27820]: I0320 08:54:13.824801 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c9917efe-9886-4199-b78f-cb3ed320bff7-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"c9917efe-9886-4199-b78f-cb3ed320bff7\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 20 08:54:13.826318 master-0 kubenswrapper[27820]: I0320 08:54:13.825084 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/c9917efe-9886-4199-b78f-cb3ed320bff7-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"c9917efe-9886-4199-b78f-cb3ed320bff7\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 20 08:54:13.826318 master-0 kubenswrapper[27820]: I0320 08:54:13.825992 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/c9917efe-9886-4199-b78f-cb3ed320bff7-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"c9917efe-9886-4199-b78f-cb3ed320bff7\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 20 08:54:13.828604 master-0 kubenswrapper[27820]: I0320 08:54:13.828573 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c9917efe-9886-4199-b78f-cb3ed320bff7-config-out\") pod \"prometheus-k8s-0\" (UID: 
\"c9917efe-9886-4199-b78f-cb3ed320bff7\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 20 08:54:13.842835 master-0 kubenswrapper[27820]: I0320 08:54:13.842751 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/c9917efe-9886-4199-b78f-cb3ed320bff7-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"c9917efe-9886-4199-b78f-cb3ed320bff7\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 20 08:54:13.877622 master-0 kubenswrapper[27820]: I0320 08:54:13.877084 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82ww5\" (UniqueName: \"kubernetes.io/projected/c9917efe-9886-4199-b78f-cb3ed320bff7-kube-api-access-82ww5\") pod \"prometheus-k8s-0\" (UID: \"c9917efe-9886-4199-b78f-cb3ed320bff7\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 20 08:54:13.993153 master-0 kubenswrapper[27820]: I0320 08:54:13.993071 27820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Mar 20 08:54:14.679553 master-0 kubenswrapper[27820]: I0320 08:54:14.679474 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-69449d79f9-kr2pv" event={"ID":"7f15ea03-44e8-40ae-959d-9cca287d76c9","Type":"ContainerStarted","Data":"1d7085a65162f0e76b0c623f55363e1290aa8f2c4481b74f7b7685a07fd9198f"} Mar 20 08:54:15.507411 master-0 kubenswrapper[27820]: I0320 08:54:15.506333 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Mar 20 08:54:15.646452 master-0 kubenswrapper[27820]: I0320 08:54:15.646044 27820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-75bd5bfbf6-b7f22" Mar 20 08:54:15.646452 master-0 kubenswrapper[27820]: I0320 08:54:15.646201 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-75bd5bfbf6-b7f22" Mar 20 08:54:15.654899 master-0 kubenswrapper[27820]: I0320 08:54:15.654842 27820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-75bd5bfbf6-b7f22" Mar 20 08:54:15.734129 master-0 kubenswrapper[27820]: I0320 08:54:15.734000 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"c9917efe-9886-4199-b78f-cb3ed320bff7","Type":"ContainerStarted","Data":"cd1c889aca1cf909017432c57d82a5881c0033f7ca72ef810433e0f8f9b58008"} Mar 20 08:54:15.739738 master-0 kubenswrapper[27820]: I0320 08:54:15.739670 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-75bd5bfbf6-b7f22" Mar 20 08:54:16.743657 master-0 kubenswrapper[27820]: I0320 08:54:16.743591 27820 generic.go:334] "Generic (PLEG): container finished" podID="c9917efe-9886-4199-b78f-cb3ed320bff7" containerID="43e28d2d559547a86904f6babdf3bc2abb1ff2664a471ccbe739a35a3a6ac383" exitCode=0 Mar 20 
08:54:16.744164 master-0 kubenswrapper[27820]: I0320 08:54:16.743710 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"c9917efe-9886-4199-b78f-cb3ed320bff7","Type":"ContainerDied","Data":"43e28d2d559547a86904f6babdf3bc2abb1ff2664a471ccbe739a35a3a6ac383"} Mar 20 08:54:16.749395 master-0 kubenswrapper[27820]: I0320 08:54:16.749360 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-7b58769b45-q7j7f" event={"ID":"06e28acf-6ec7-4e0d-bb87-6577b30f7c35","Type":"ContainerStarted","Data":"332abbdd4459574f82b80abd029b39359a4bf432c9a6ccd0f5ed6283018acefd"} Mar 20 08:54:16.749395 master-0 kubenswrapper[27820]: I0320 08:54:16.749392 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-7b58769b45-q7j7f" event={"ID":"06e28acf-6ec7-4e0d-bb87-6577b30f7c35","Type":"ContainerStarted","Data":"7716252d223e9da18bbc3958a79fe98ba9b5b173878972808f0a61c1b097d9b2"} Mar 20 08:54:16.749534 master-0 kubenswrapper[27820]: I0320 08:54:16.749404 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-7b58769b45-q7j7f" event={"ID":"06e28acf-6ec7-4e0d-bb87-6577b30f7c35","Type":"ContainerStarted","Data":"f9f41b8693d8e146f7c236bba0e7ebacd50c90f5a1cb6be5d94971a2d9d68f01"} Mar 20 08:54:18.765639 master-0 kubenswrapper[27820]: I0320 08:54:18.765581 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-69449d79f9-kr2pv" event={"ID":"7f15ea03-44e8-40ae-959d-9cca287d76c9","Type":"ContainerStarted","Data":"d5f42b3fa1b530b48725b8bb48e1c34bc323b0058930592d63f0d0a7759eafa9"} Mar 20 08:54:18.765639 master-0 kubenswrapper[27820]: I0320 08:54:18.765627 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-69449d79f9-kr2pv" 
event={"ID":"7f15ea03-44e8-40ae-959d-9cca287d76c9","Type":"ContainerStarted","Data":"84fc333fc8ee6ff1e9b4c06fa3cf9fbf85e88af0a40f83386acae10a3d969799"} Mar 20 08:54:18.765639 master-0 kubenswrapper[27820]: I0320 08:54:18.765638 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-69449d79f9-kr2pv" event={"ID":"7f15ea03-44e8-40ae-959d-9cca287d76c9","Type":"ContainerStarted","Data":"506e0dc3be95e8145e36e9c3c8f8a42408807c2acc9cf69b1eda12ff0ef7c70c"} Mar 20 08:54:18.771182 master-0 kubenswrapper[27820]: I0320 08:54:18.771116 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-7b58769b45-q7j7f" event={"ID":"06e28acf-6ec7-4e0d-bb87-6577b30f7c35","Type":"ContainerStarted","Data":"8f0ac12d635cf8076387119fc5a087c2a579901027a9273748b2113b81eb8e7a"} Mar 20 08:54:18.771182 master-0 kubenswrapper[27820]: I0320 08:54:18.771179 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-7b58769b45-q7j7f" event={"ID":"06e28acf-6ec7-4e0d-bb87-6577b30f7c35","Type":"ContainerStarted","Data":"2cb426aede5b5808a7108581e265f6d8d80db5872ddbb165f63baa3f39d05772"} Mar 20 08:54:18.775838 master-0 kubenswrapper[27820]: I0320 08:54:18.775808 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"a8e6b860-8317-4fbe-9f43-41b0b707bc1b","Type":"ContainerStarted","Data":"83e3a486b966a6b512b103777ba8647b506d7b21e87c5d1b9cd40bec5a3158cc"} Mar 20 08:54:18.775918 master-0 kubenswrapper[27820]: I0320 08:54:18.775841 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"a8e6b860-8317-4fbe-9f43-41b0b707bc1b","Type":"ContainerStarted","Data":"5330547895ea24633c382d8f8c61029ed24f36aad27fb00241113a7b24d3af6b"} Mar 20 08:54:18.775918 master-0 kubenswrapper[27820]: I0320 08:54:18.775854 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/alertmanager-main-0" event={"ID":"a8e6b860-8317-4fbe-9f43-41b0b707bc1b","Type":"ContainerStarted","Data":"be105d224117f2641ce3bdaf87aa57eb28f1828c7fe4ba883b627a05a862070b"} Mar 20 08:54:18.775918 master-0 kubenswrapper[27820]: I0320 08:54:18.775862 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"a8e6b860-8317-4fbe-9f43-41b0b707bc1b","Type":"ContainerStarted","Data":"f4a46cc2965446b5f5bb1a0f866ba522498df24080bae2df04571bb0e1e6c668"} Mar 20 08:54:18.793118 master-0 kubenswrapper[27820]: I0320 08:54:18.789335 27820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/telemeter-client-69449d79f9-kr2pv" podStartSLOduration=3.5463204680000002 podStartE2EDuration="6.789316447s" podCreationTimestamp="2026-03-20 08:54:12 +0000 UTC" firstStartedPulling="2026-03-20 08:54:14.614958945 +0000 UTC m=+264.710168119" lastFinishedPulling="2026-03-20 08:54:17.857954954 +0000 UTC m=+267.953164098" observedRunningTime="2026-03-20 08:54:18.787752984 +0000 UTC m=+268.882962138" watchObservedRunningTime="2026-03-20 08:54:18.789316447 +0000 UTC m=+268.884525601" Mar 20 08:54:19.792760 master-0 kubenswrapper[27820]: I0320 08:54:19.792680 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-7b58769b45-q7j7f" event={"ID":"06e28acf-6ec7-4e0d-bb87-6577b30f7c35","Type":"ContainerStarted","Data":"a8779df65eb6ab4ca47d96d74bc73d7200f68459c4612c0a9a3a096380667d1d"} Mar 20 08:54:19.794439 master-0 kubenswrapper[27820]: I0320 08:54:19.794391 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-7b58769b45-q7j7f" Mar 20 08:54:19.804293 master-0 kubenswrapper[27820]: I0320 08:54:19.802542 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" 
event={"ID":"a8e6b860-8317-4fbe-9f43-41b0b707bc1b","Type":"ContainerStarted","Data":"502f51f25cacc4bcbcfc17fee9557c02cfcf0ff1d7d5984915cc84a3bdb667c2"} Mar 20 08:54:19.804293 master-0 kubenswrapper[27820]: I0320 08:54:19.802623 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"a8e6b860-8317-4fbe-9f43-41b0b707bc1b","Type":"ContainerStarted","Data":"58ba7614fb7173610e309e42b215d5decc16db8e5af63a0839976ce35bf26812"} Mar 20 08:54:19.832678 master-0 kubenswrapper[27820]: I0320 08:54:19.832299 27820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-7b58769b45-q7j7f" podStartSLOduration=3.711007547 podStartE2EDuration="10.832253937s" podCreationTimestamp="2026-03-20 08:54:09 +0000 UTC" firstStartedPulling="2026-03-20 08:54:11.301654199 +0000 UTC m=+261.396863333" lastFinishedPulling="2026-03-20 08:54:18.422900579 +0000 UTC m=+268.518109723" observedRunningTime="2026-03-20 08:54:19.818694187 +0000 UTC m=+269.913903351" watchObservedRunningTime="2026-03-20 08:54:19.832253937 +0000 UTC m=+269.927463101" Mar 20 08:54:19.860043 master-0 kubenswrapper[27820]: I0320 08:54:19.856812 27820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=5.152180991 podStartE2EDuration="11.856787555s" podCreationTimestamp="2026-03-20 08:54:08 +0000 UTC" firstStartedPulling="2026-03-20 08:54:11.148226891 +0000 UTC m=+261.243436035" lastFinishedPulling="2026-03-20 08:54:17.852833455 +0000 UTC m=+267.948042599" observedRunningTime="2026-03-20 08:54:19.851017788 +0000 UTC m=+269.946226952" watchObservedRunningTime="2026-03-20 08:54:19.856787555 +0000 UTC m=+269.951996699" Mar 20 08:54:20.668484 master-0 kubenswrapper[27820]: I0320 08:54:20.668373 27820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-6f98bb7c67-q7pqm" Mar 20 08:54:20.669294 
master-0 kubenswrapper[27820]: I0320 08:54:20.669247 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-6f98bb7c67-q7pqm" Mar 20 08:54:20.673994 master-0 kubenswrapper[27820]: I0320 08:54:20.673869 27820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-6f98bb7c67-q7pqm" Mar 20 08:54:20.815500 master-0 kubenswrapper[27820]: I0320 08:54:20.815442 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-6f98bb7c67-q7pqm" Mar 20 08:54:20.894200 master-0 kubenswrapper[27820]: I0320 08:54:20.893940 27820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-75bd5bfbf6-b7f22"] Mar 20 08:54:21.819769 master-0 kubenswrapper[27820]: I0320 08:54:21.819712 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"c9917efe-9886-4199-b78f-cb3ed320bff7","Type":"ContainerStarted","Data":"4fa647f70c741545ab9c6fd11ee6ef71d990fb96ac5efa985b078bf6f67ed15c"} Mar 20 08:54:21.827898 master-0 kubenswrapper[27820]: I0320 08:54:21.819775 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"c9917efe-9886-4199-b78f-cb3ed320bff7","Type":"ContainerStarted","Data":"29d61d36907cd792218d99015015c61a483a9531d4d6a5432ef0ef7344f492ab"} Mar 20 08:54:21.827898 master-0 kubenswrapper[27820]: I0320 08:54:21.819793 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"c9917efe-9886-4199-b78f-cb3ed320bff7","Type":"ContainerStarted","Data":"ffa0db3ca8b7a30386de739861dbfea7ad49b219fdc1904880807bc56fec6ea7"} Mar 20 08:54:21.827898 master-0 kubenswrapper[27820]: I0320 08:54:21.819805 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" 
event={"ID":"c9917efe-9886-4199-b78f-cb3ed320bff7","Type":"ContainerStarted","Data":"4174f4881783f2e6f439d3784afddbc483a2f56e5c632fe86b539c94673d3c75"} Mar 20 08:54:21.828240 master-0 kubenswrapper[27820]: I0320 08:54:21.828197 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-7b58769b45-q7j7f" Mar 20 08:54:22.839767 master-0 kubenswrapper[27820]: I0320 08:54:22.839696 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"c9917efe-9886-4199-b78f-cb3ed320bff7","Type":"ContainerStarted","Data":"d8dd2fefdf7a08ead086036cbc63e7e1658904b7bad0550a0696d3daae3feca7"} Mar 20 08:54:22.839767 master-0 kubenswrapper[27820]: I0320 08:54:22.839762 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"c9917efe-9886-4199-b78f-cb3ed320bff7","Type":"ContainerStarted","Data":"f496e0a692176dee4f6a3d62bbbd64632de403723365bbbf4805097f09605bb9"} Mar 20 08:54:22.883714 master-0 kubenswrapper[27820]: I0320 08:54:22.883613 27820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=5.53495334 podStartE2EDuration="9.883594478s" podCreationTimestamp="2026-03-20 08:54:13 +0000 UTC" firstStartedPulling="2026-03-20 08:54:16.74653736 +0000 UTC m=+266.841746504" lastFinishedPulling="2026-03-20 08:54:21.095178498 +0000 UTC m=+271.190387642" observedRunningTime="2026-03-20 08:54:22.878018736 +0000 UTC m=+272.973227890" watchObservedRunningTime="2026-03-20 08:54:22.883594478 +0000 UTC m=+272.978803622" Mar 20 08:54:23.011495 master-0 kubenswrapper[27820]: E0320 08:54:23.011451 27820 configmap.go:193] Couldn't get configMap openshift-monitoring/prometheus-k8s-rulefiles-0: configmap "prometheus-k8s-rulefiles-0" not found Mar 20 08:54:23.011798 master-0 kubenswrapper[27820]: E0320 08:54:23.011778 27820 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/c9917efe-9886-4199-b78f-cb3ed320bff7-prometheus-k8s-rulefiles-0 podName:c9917efe-9886-4199-b78f-cb3ed320bff7 nodeName:}" failed. No retries permitted until 2026-03-20 08:54:23.511759018 +0000 UTC m=+273.606968172 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "prometheus-k8s-rulefiles-0" (UniqueName: "kubernetes.io/configmap/c9917efe-9886-4199-b78f-cb3ed320bff7-prometheus-k8s-rulefiles-0") pod "prometheus-k8s-0" (UID: "c9917efe-9886-4199-b78f-cb3ed320bff7") : configmap "prometheus-k8s-rulefiles-0" not found Mar 20 08:54:23.993954 master-0 kubenswrapper[27820]: I0320 08:54:23.993912 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Mar 20 08:54:25.928209 master-0 kubenswrapper[27820]: I0320 08:54:25.928141 27820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7467fcc69-2tx6g"] Mar 20 08:54:25.931096 master-0 kubenswrapper[27820]: I0320 08:54:25.931061 27820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7467fcc69-2tx6g" Mar 20 08:54:25.955330 master-0 kubenswrapper[27820]: I0320 08:54:25.948826 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7467fcc69-2tx6g"] Mar 20 08:54:26.067675 master-0 kubenswrapper[27820]: I0320 08:54:26.067618 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/56f17a00-2a28-4406-84ed-40a2a5eecd15-oauth-serving-cert\") pod \"console-7467fcc69-2tx6g\" (UID: \"56f17a00-2a28-4406-84ed-40a2a5eecd15\") " pod="openshift-console/console-7467fcc69-2tx6g" Mar 20 08:54:26.067675 master-0 kubenswrapper[27820]: I0320 08:54:26.067667 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/56f17a00-2a28-4406-84ed-40a2a5eecd15-console-config\") pod \"console-7467fcc69-2tx6g\" (UID: \"56f17a00-2a28-4406-84ed-40a2a5eecd15\") " pod="openshift-console/console-7467fcc69-2tx6g" Mar 20 08:54:26.068036 master-0 kubenswrapper[27820]: I0320 08:54:26.067698 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56f17a00-2a28-4406-84ed-40a2a5eecd15-trusted-ca-bundle\") pod \"console-7467fcc69-2tx6g\" (UID: \"56f17a00-2a28-4406-84ed-40a2a5eecd15\") " pod="openshift-console/console-7467fcc69-2tx6g" Mar 20 08:54:26.068036 master-0 kubenswrapper[27820]: I0320 08:54:26.067742 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/56f17a00-2a28-4406-84ed-40a2a5eecd15-console-oauth-config\") pod \"console-7467fcc69-2tx6g\" (UID: \"56f17a00-2a28-4406-84ed-40a2a5eecd15\") " pod="openshift-console/console-7467fcc69-2tx6g" Mar 20 08:54:26.068036 master-0 
kubenswrapper[27820]: I0320 08:54:26.067787 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/56f17a00-2a28-4406-84ed-40a2a5eecd15-service-ca\") pod \"console-7467fcc69-2tx6g\" (UID: \"56f17a00-2a28-4406-84ed-40a2a5eecd15\") " pod="openshift-console/console-7467fcc69-2tx6g" Mar 20 08:54:26.068036 master-0 kubenswrapper[27820]: I0320 08:54:26.067804 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/56f17a00-2a28-4406-84ed-40a2a5eecd15-console-serving-cert\") pod \"console-7467fcc69-2tx6g\" (UID: \"56f17a00-2a28-4406-84ed-40a2a5eecd15\") " pod="openshift-console/console-7467fcc69-2tx6g" Mar 20 08:54:26.068036 master-0 kubenswrapper[27820]: I0320 08:54:26.067852 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rm64s\" (UniqueName: \"kubernetes.io/projected/56f17a00-2a28-4406-84ed-40a2a5eecd15-kube-api-access-rm64s\") pod \"console-7467fcc69-2tx6g\" (UID: \"56f17a00-2a28-4406-84ed-40a2a5eecd15\") " pod="openshift-console/console-7467fcc69-2tx6g" Mar 20 08:54:26.169661 master-0 kubenswrapper[27820]: I0320 08:54:26.169587 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/56f17a00-2a28-4406-84ed-40a2a5eecd15-oauth-serving-cert\") pod \"console-7467fcc69-2tx6g\" (UID: \"56f17a00-2a28-4406-84ed-40a2a5eecd15\") " pod="openshift-console/console-7467fcc69-2tx6g" Mar 20 08:54:26.169895 master-0 kubenswrapper[27820]: I0320 08:54:26.169670 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/56f17a00-2a28-4406-84ed-40a2a5eecd15-console-config\") pod \"console-7467fcc69-2tx6g\" (UID: 
\"56f17a00-2a28-4406-84ed-40a2a5eecd15\") " pod="openshift-console/console-7467fcc69-2tx6g" Mar 20 08:54:26.169895 master-0 kubenswrapper[27820]: I0320 08:54:26.169744 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56f17a00-2a28-4406-84ed-40a2a5eecd15-trusted-ca-bundle\") pod \"console-7467fcc69-2tx6g\" (UID: \"56f17a00-2a28-4406-84ed-40a2a5eecd15\") " pod="openshift-console/console-7467fcc69-2tx6g" Mar 20 08:54:26.169895 master-0 kubenswrapper[27820]: I0320 08:54:26.169805 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/56f17a00-2a28-4406-84ed-40a2a5eecd15-console-oauth-config\") pod \"console-7467fcc69-2tx6g\" (UID: \"56f17a00-2a28-4406-84ed-40a2a5eecd15\") " pod="openshift-console/console-7467fcc69-2tx6g" Mar 20 08:54:26.170047 master-0 kubenswrapper[27820]: I0320 08:54:26.169911 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/56f17a00-2a28-4406-84ed-40a2a5eecd15-service-ca\") pod \"console-7467fcc69-2tx6g\" (UID: \"56f17a00-2a28-4406-84ed-40a2a5eecd15\") " pod="openshift-console/console-7467fcc69-2tx6g" Mar 20 08:54:26.170047 master-0 kubenswrapper[27820]: I0320 08:54:26.169946 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/56f17a00-2a28-4406-84ed-40a2a5eecd15-console-serving-cert\") pod \"console-7467fcc69-2tx6g\" (UID: \"56f17a00-2a28-4406-84ed-40a2a5eecd15\") " pod="openshift-console/console-7467fcc69-2tx6g" Mar 20 08:54:26.170047 master-0 kubenswrapper[27820]: I0320 08:54:26.170008 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rm64s\" (UniqueName: \"kubernetes.io/projected/56f17a00-2a28-4406-84ed-40a2a5eecd15-kube-api-access-rm64s\") 
pod \"console-7467fcc69-2tx6g\" (UID: \"56f17a00-2a28-4406-84ed-40a2a5eecd15\") " pod="openshift-console/console-7467fcc69-2tx6g" Mar 20 08:54:26.175742 master-0 kubenswrapper[27820]: I0320 08:54:26.172048 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/56f17a00-2a28-4406-84ed-40a2a5eecd15-oauth-serving-cert\") pod \"console-7467fcc69-2tx6g\" (UID: \"56f17a00-2a28-4406-84ed-40a2a5eecd15\") " pod="openshift-console/console-7467fcc69-2tx6g" Mar 20 08:54:26.175742 master-0 kubenswrapper[27820]: I0320 08:54:26.173209 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/56f17a00-2a28-4406-84ed-40a2a5eecd15-console-config\") pod \"console-7467fcc69-2tx6g\" (UID: \"56f17a00-2a28-4406-84ed-40a2a5eecd15\") " pod="openshift-console/console-7467fcc69-2tx6g" Mar 20 08:54:26.175742 master-0 kubenswrapper[27820]: I0320 08:54:26.175710 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56f17a00-2a28-4406-84ed-40a2a5eecd15-trusted-ca-bundle\") pod \"console-7467fcc69-2tx6g\" (UID: \"56f17a00-2a28-4406-84ed-40a2a5eecd15\") " pod="openshift-console/console-7467fcc69-2tx6g" Mar 20 08:54:26.177553 master-0 kubenswrapper[27820]: I0320 08:54:26.177504 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/56f17a00-2a28-4406-84ed-40a2a5eecd15-service-ca\") pod \"console-7467fcc69-2tx6g\" (UID: \"56f17a00-2a28-4406-84ed-40a2a5eecd15\") " pod="openshift-console/console-7467fcc69-2tx6g" Mar 20 08:54:26.179875 master-0 kubenswrapper[27820]: I0320 08:54:26.179797 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/56f17a00-2a28-4406-84ed-40a2a5eecd15-console-serving-cert\") pod 
\"console-7467fcc69-2tx6g\" (UID: \"56f17a00-2a28-4406-84ed-40a2a5eecd15\") " pod="openshift-console/console-7467fcc69-2tx6g" Mar 20 08:54:26.181993 master-0 kubenswrapper[27820]: I0320 08:54:26.181961 27820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-b45dccc8f-nt7jw"] Mar 20 08:54:26.185410 master-0 kubenswrapper[27820]: I0320 08:54:26.183517 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/56f17a00-2a28-4406-84ed-40a2a5eecd15-console-oauth-config\") pod \"console-7467fcc69-2tx6g\" (UID: \"56f17a00-2a28-4406-84ed-40a2a5eecd15\") " pod="openshift-console/console-7467fcc69-2tx6g" Mar 20 08:54:26.211590 master-0 kubenswrapper[27820]: I0320 08:54:26.211553 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rm64s\" (UniqueName: \"kubernetes.io/projected/56f17a00-2a28-4406-84ed-40a2a5eecd15-kube-api-access-rm64s\") pod \"console-7467fcc69-2tx6g\" (UID: \"56f17a00-2a28-4406-84ed-40a2a5eecd15\") " pod="openshift-console/console-7467fcc69-2tx6g" Mar 20 08:54:26.268330 master-0 kubenswrapper[27820]: I0320 08:54:26.268282 27820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7467fcc69-2tx6g" Mar 20 08:54:32.401054 master-0 kubenswrapper[27820]: I0320 08:54:32.400983 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-64c7dbd4b9-vtfnn" Mar 20 08:54:32.401663 master-0 kubenswrapper[27820]: I0320 08:54:32.401078 27820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-64c7dbd4b9-vtfnn" Mar 20 08:54:35.840835 master-0 kubenswrapper[27820]: I0320 08:54:35.840768 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/75cef5aa-93e6-4b8b-9ab1-06809e85883a-kube-api-access\") pod \"installer-3-retry-1-master-0\" (UID: \"75cef5aa-93e6-4b8b-9ab1-06809e85883a\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Mar 20 08:54:35.841409 master-0 kubenswrapper[27820]: E0320 08:54:35.840950 27820 projected.go:288] Couldn't get configMap openshift-kube-controller-manager/kube-root-ca.crt: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Mar 20 08:54:35.841409 master-0 kubenswrapper[27820]: E0320 08:54:35.841030 27820 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager/installer-3-retry-1-master-0: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Mar 20 08:54:35.841409 master-0 kubenswrapper[27820]: E0320 08:54:35.841091 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/75cef5aa-93e6-4b8b-9ab1-06809e85883a-kube-api-access podName:75cef5aa-93e6-4b8b-9ab1-06809e85883a nodeName:}" failed. No retries permitted until 2026-03-20 08:56:37.841073024 +0000 UTC m=+407.936282168 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/75cef5aa-93e6-4b8b-9ab1-06809e85883a-kube-api-access") pod "installer-3-retry-1-master-0" (UID: "75cef5aa-93e6-4b8b-9ab1-06809e85883a") : object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Mar 20 08:54:40.022538 master-0 kubenswrapper[27820]: I0320 08:54:40.021185 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7467fcc69-2tx6g"] Mar 20 08:54:42.014144 master-0 kubenswrapper[27820]: I0320 08:54:42.014067 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7467fcc69-2tx6g" event={"ID":"56f17a00-2a28-4406-84ed-40a2a5eecd15","Type":"ContainerStarted","Data":"3405d298d7f2398e075bfa245927865764216be9d163c0ef4a5d6230b5504b28"} Mar 20 08:54:42.014144 master-0 kubenswrapper[27820]: I0320 08:54:42.014152 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7467fcc69-2tx6g" event={"ID":"56f17a00-2a28-4406-84ed-40a2a5eecd15","Type":"ContainerStarted","Data":"b192616324d89671cc76043aa3c91511b9eaea6bd421ac5d8924f945e7a0fbed"} Mar 20 08:54:42.017428 master-0 kubenswrapper[27820]: I0320 08:54:42.017368 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-66b8ffb895-8rz5f" event={"ID":"66c7dc3c-174d-4f87-8c7b-c5c7b8649fb9","Type":"ContainerStarted","Data":"a7482103d46257718e3b06dbb161ec21d9492ea7e2dcc75f3ec4daa999a49b9c"} Mar 20 08:54:42.781051 master-0 kubenswrapper[27820]: I0320 08:54:42.780958 27820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7467fcc69-2tx6g" podStartSLOduration=17.780938203 podStartE2EDuration="17.780938203s" podCreationTimestamp="2026-03-20 08:54:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:54:42.774296013 +0000 UTC m=+292.869505197" 
watchObservedRunningTime="2026-03-20 08:54:42.780938203 +0000 UTC m=+292.876147367" Mar 20 08:54:43.026967 master-0 kubenswrapper[27820]: I0320 08:54:43.026908 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-66b8ffb895-8rz5f" Mar 20 08:54:43.029802 master-0 kubenswrapper[27820]: I0320 08:54:43.029753 27820 patch_prober.go:28] interesting pod/downloads-66b8ffb895-8rz5f container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.97:8080/\": dial tcp 10.128.0.97:8080: connect: connection refused" start-of-body= Mar 20 08:54:43.029878 master-0 kubenswrapper[27820]: I0320 08:54:43.029808 27820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-66b8ffb895-8rz5f" podUID="66c7dc3c-174d-4f87-8c7b-c5c7b8649fb9" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.97:8080/\": dial tcp 10.128.0.97:8080: connect: connection refused" Mar 20 08:54:43.744254 master-0 kubenswrapper[27820]: I0320 08:54:43.744066 27820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-66b8ffb895-8rz5f" podStartSLOduration=3.186655332 podStartE2EDuration="41.744032909s" podCreationTimestamp="2026-03-20 08:54:02 +0000 UTC" firstStartedPulling="2026-03-20 08:54:03.021300997 +0000 UTC m=+253.116510141" lastFinishedPulling="2026-03-20 08:54:41.578678554 +0000 UTC m=+291.673887718" observedRunningTime="2026-03-20 08:54:43.734919231 +0000 UTC m=+293.830128455" watchObservedRunningTime="2026-03-20 08:54:43.744032909 +0000 UTC m=+293.839242103" Mar 20 08:54:44.035872 master-0 kubenswrapper[27820]: I0320 08:54:44.035696 27820 patch_prober.go:28] interesting pod/downloads-66b8ffb895-8rz5f container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.97:8080/\": dial tcp 10.128.0.97:8080: connect: connection refused" start-of-body= Mar 
20 08:54:44.035872 master-0 kubenswrapper[27820]: I0320 08:54:44.035796 27820 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-66b8ffb895-8rz5f" podUID="66c7dc3c-174d-4f87-8c7b-c5c7b8649fb9" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.97:8080/\": dial tcp 10.128.0.97:8080: connect: connection refused" Mar 20 08:54:45.940114 master-0 kubenswrapper[27820]: I0320 08:54:45.940013 27820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-75bd5bfbf6-b7f22" podUID="aae65814-71b3-40b5-be46-7bf04aa6aa58" containerName="console" containerID="cri-o://5a6b6e9ed5aad333e6c7fc2a317674bca57e1c285454b138600bc8a3fddef6e8" gracePeriod=15 Mar 20 08:54:46.269939 master-0 kubenswrapper[27820]: I0320 08:54:46.269778 27820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7467fcc69-2tx6g" Mar 20 08:54:46.269939 master-0 kubenswrapper[27820]: I0320 08:54:46.269862 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7467fcc69-2tx6g" Mar 20 08:54:46.272340 master-0 kubenswrapper[27820]: I0320 08:54:46.272245 27820 patch_prober.go:28] interesting pod/console-7467fcc69-2tx6g container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.105:8443/health\": dial tcp 10.128.0.105:8443: connect: connection refused" start-of-body= Mar 20 08:54:46.272436 master-0 kubenswrapper[27820]: I0320 08:54:46.272383 27820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7467fcc69-2tx6g" podUID="56f17a00-2a28-4406-84ed-40a2a5eecd15" containerName="console" probeResult="failure" output="Get \"https://10.128.0.105:8443/health\": dial tcp 10.128.0.105:8443: connect: connection refused" Mar 20 08:54:47.057344 master-0 kubenswrapper[27820]: I0320 08:54:47.057305 27820 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-console_console-75bd5bfbf6-b7f22_aae65814-71b3-40b5-be46-7bf04aa6aa58/console/0.log" Mar 20 08:54:47.057658 master-0 kubenswrapper[27820]: I0320 08:54:47.057349 27820 generic.go:334] "Generic (PLEG): container finished" podID="aae65814-71b3-40b5-be46-7bf04aa6aa58" containerID="5a6b6e9ed5aad333e6c7fc2a317674bca57e1c285454b138600bc8a3fddef6e8" exitCode=2 Mar 20 08:54:47.057658 master-0 kubenswrapper[27820]: I0320 08:54:47.057377 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-75bd5bfbf6-b7f22" event={"ID":"aae65814-71b3-40b5-be46-7bf04aa6aa58","Type":"ContainerDied","Data":"5a6b6e9ed5aad333e6c7fc2a317674bca57e1c285454b138600bc8a3fddef6e8"} Mar 20 08:54:47.098127 master-0 kubenswrapper[27820]: I0320 08:54:47.098091 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-75bd5bfbf6-b7f22_aae65814-71b3-40b5-be46-7bf04aa6aa58/console/0.log" Mar 20 08:54:47.098225 master-0 kubenswrapper[27820]: I0320 08:54:47.098156 27820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-75bd5bfbf6-b7f22" Mar 20 08:54:47.252642 master-0 kubenswrapper[27820]: I0320 08:54:47.252579 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/aae65814-71b3-40b5-be46-7bf04aa6aa58-console-oauth-config\") pod \"aae65814-71b3-40b5-be46-7bf04aa6aa58\" (UID: \"aae65814-71b3-40b5-be46-7bf04aa6aa58\") " Mar 20 08:54:47.252642 master-0 kubenswrapper[27820]: I0320 08:54:47.252646 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9xc2\" (UniqueName: \"kubernetes.io/projected/aae65814-71b3-40b5-be46-7bf04aa6aa58-kube-api-access-z9xc2\") pod \"aae65814-71b3-40b5-be46-7bf04aa6aa58\" (UID: \"aae65814-71b3-40b5-be46-7bf04aa6aa58\") " Mar 20 08:54:47.252950 master-0 kubenswrapper[27820]: I0320 08:54:47.252687 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/aae65814-71b3-40b5-be46-7bf04aa6aa58-oauth-serving-cert\") pod \"aae65814-71b3-40b5-be46-7bf04aa6aa58\" (UID: \"aae65814-71b3-40b5-be46-7bf04aa6aa58\") " Mar 20 08:54:47.252950 master-0 kubenswrapper[27820]: I0320 08:54:47.252721 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/aae65814-71b3-40b5-be46-7bf04aa6aa58-console-serving-cert\") pod \"aae65814-71b3-40b5-be46-7bf04aa6aa58\" (UID: \"aae65814-71b3-40b5-be46-7bf04aa6aa58\") " Mar 20 08:54:47.252950 master-0 kubenswrapper[27820]: I0320 08:54:47.252810 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/aae65814-71b3-40b5-be46-7bf04aa6aa58-service-ca\") pod \"aae65814-71b3-40b5-be46-7bf04aa6aa58\" (UID: \"aae65814-71b3-40b5-be46-7bf04aa6aa58\") " Mar 20 08:54:47.252950 master-0 kubenswrapper[27820]: 
I0320 08:54:47.252869 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/aae65814-71b3-40b5-be46-7bf04aa6aa58-console-config\") pod \"aae65814-71b3-40b5-be46-7bf04aa6aa58\" (UID: \"aae65814-71b3-40b5-be46-7bf04aa6aa58\") " Mar 20 08:54:47.253959 master-0 kubenswrapper[27820]: I0320 08:54:47.253862 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aae65814-71b3-40b5-be46-7bf04aa6aa58-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "aae65814-71b3-40b5-be46-7bf04aa6aa58" (UID: "aae65814-71b3-40b5-be46-7bf04aa6aa58"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:54:47.254188 master-0 kubenswrapper[27820]: I0320 08:54:47.254128 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aae65814-71b3-40b5-be46-7bf04aa6aa58-service-ca" (OuterVolumeSpecName: "service-ca") pod "aae65814-71b3-40b5-be46-7bf04aa6aa58" (UID: "aae65814-71b3-40b5-be46-7bf04aa6aa58"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:54:47.254234 master-0 kubenswrapper[27820]: I0320 08:54:47.254159 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aae65814-71b3-40b5-be46-7bf04aa6aa58-console-config" (OuterVolumeSpecName: "console-config") pod "aae65814-71b3-40b5-be46-7bf04aa6aa58" (UID: "aae65814-71b3-40b5-be46-7bf04aa6aa58"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:54:47.258124 master-0 kubenswrapper[27820]: I0320 08:54:47.258060 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aae65814-71b3-40b5-be46-7bf04aa6aa58-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "aae65814-71b3-40b5-be46-7bf04aa6aa58" (UID: "aae65814-71b3-40b5-be46-7bf04aa6aa58"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:54:47.258198 master-0 kubenswrapper[27820]: I0320 08:54:47.258157 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aae65814-71b3-40b5-be46-7bf04aa6aa58-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "aae65814-71b3-40b5-be46-7bf04aa6aa58" (UID: "aae65814-71b3-40b5-be46-7bf04aa6aa58"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:54:47.258388 master-0 kubenswrapper[27820]: I0320 08:54:47.258312 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aae65814-71b3-40b5-be46-7bf04aa6aa58-kube-api-access-z9xc2" (OuterVolumeSpecName: "kube-api-access-z9xc2") pod "aae65814-71b3-40b5-be46-7bf04aa6aa58" (UID: "aae65814-71b3-40b5-be46-7bf04aa6aa58"). InnerVolumeSpecName "kube-api-access-z9xc2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:54:47.356490 master-0 kubenswrapper[27820]: I0320 08:54:47.356320 27820 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/aae65814-71b3-40b5-be46-7bf04aa6aa58-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Mar 20 08:54:47.356490 master-0 kubenswrapper[27820]: I0320 08:54:47.356380 27820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9xc2\" (UniqueName: \"kubernetes.io/projected/aae65814-71b3-40b5-be46-7bf04aa6aa58-kube-api-access-z9xc2\") on node \"master-0\" DevicePath \"\"" Mar 20 08:54:47.356490 master-0 kubenswrapper[27820]: I0320 08:54:47.356401 27820 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/aae65814-71b3-40b5-be46-7bf04aa6aa58-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 20 08:54:47.356490 master-0 kubenswrapper[27820]: I0320 08:54:47.356419 27820 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/aae65814-71b3-40b5-be46-7bf04aa6aa58-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 20 08:54:47.356490 master-0 kubenswrapper[27820]: I0320 08:54:47.356438 27820 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/aae65814-71b3-40b5-be46-7bf04aa6aa58-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 20 08:54:47.356490 master-0 kubenswrapper[27820]: I0320 08:54:47.356457 27820 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/aae65814-71b3-40b5-be46-7bf04aa6aa58-console-config\") on node \"master-0\" DevicePath \"\"" Mar 20 08:54:48.069821 master-0 kubenswrapper[27820]: I0320 08:54:48.069727 27820 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-console_console-75bd5bfbf6-b7f22_aae65814-71b3-40b5-be46-7bf04aa6aa58/console/0.log" Mar 20 08:54:48.071195 master-0 kubenswrapper[27820]: I0320 08:54:48.069863 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-75bd5bfbf6-b7f22" event={"ID":"aae65814-71b3-40b5-be46-7bf04aa6aa58","Type":"ContainerDied","Data":"ea7de601f0bd3b1a5193522752a7f4584e9dac65217e6054f00656c26e9b79e2"} Mar 20 08:54:48.071195 master-0 kubenswrapper[27820]: I0320 08:54:48.069935 27820 scope.go:117] "RemoveContainer" containerID="5a6b6e9ed5aad333e6c7fc2a317674bca57e1c285454b138600bc8a3fddef6e8" Mar 20 08:54:48.071195 master-0 kubenswrapper[27820]: I0320 08:54:48.069940 27820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-75bd5bfbf6-b7f22" Mar 20 08:54:48.928030 master-0 kubenswrapper[27820]: I0320 08:54:48.927880 27820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-75bd5bfbf6-b7f22"] Mar 20 08:54:49.141576 master-0 kubenswrapper[27820]: I0320 08:54:49.141500 27820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-75bd5bfbf6-b7f22"] Mar 20 08:54:50.067546 master-0 kubenswrapper[27820]: I0320 08:54:50.067473 27820 kubelet.go:1505] "Image garbage collection succeeded" Mar 20 08:54:50.087978 master-0 kubenswrapper[27820]: I0320 08:54:50.087905 27820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aae65814-71b3-40b5-be46-7bf04aa6aa58" path="/var/lib/kubelet/pods/aae65814-71b3-40b5-be46-7bf04aa6aa58/volumes" Mar 20 08:54:51.227654 master-0 kubenswrapper[27820]: I0320 08:54:51.227520 27820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-b45dccc8f-nt7jw" podUID="5d487313-8796-4bf7-8ac5-051f76b021e5" containerName="oauth-openshift" containerID="cri-o://8be1d83a5080732d228b654fd239d76346c6a52f8900e3b16fceaf8686453dbd" 
gracePeriod=15 Mar 20 08:54:51.680556 master-0 kubenswrapper[27820]: I0320 08:54:51.680523 27820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-b45dccc8f-nt7jw" Mar 20 08:54:51.723686 master-0 kubenswrapper[27820]: I0320 08:54:51.723599 27820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-98d8fdfc5-dbdbd"] Mar 20 08:54:51.723937 master-0 kubenswrapper[27820]: E0320 08:54:51.723928 27820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d487313-8796-4bf7-8ac5-051f76b021e5" containerName="oauth-openshift" Mar 20 08:54:51.724017 master-0 kubenswrapper[27820]: I0320 08:54:51.723942 27820 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d487313-8796-4bf7-8ac5-051f76b021e5" containerName="oauth-openshift" Mar 20 08:54:51.724017 master-0 kubenswrapper[27820]: E0320 08:54:51.723976 27820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aae65814-71b3-40b5-be46-7bf04aa6aa58" containerName="console" Mar 20 08:54:51.724017 master-0 kubenswrapper[27820]: I0320 08:54:51.723982 27820 state_mem.go:107] "Deleted CPUSet assignment" podUID="aae65814-71b3-40b5-be46-7bf04aa6aa58" containerName="console" Mar 20 08:54:51.724203 master-0 kubenswrapper[27820]: I0320 08:54:51.724190 27820 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d487313-8796-4bf7-8ac5-051f76b021e5" containerName="oauth-openshift" Mar 20 08:54:51.724203 master-0 kubenswrapper[27820]: I0320 08:54:51.724201 27820 memory_manager.go:354] "RemoveStaleState removing state" podUID="aae65814-71b3-40b5-be46-7bf04aa6aa58" containerName="console" Mar 20 08:54:51.724784 master-0 kubenswrapper[27820]: I0320 08:54:51.724742 27820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-98d8fdfc5-dbdbd" Mar 20 08:54:51.738574 master-0 kubenswrapper[27820]: I0320 08:54:51.738500 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-98d8fdfc5-dbdbd"] Mar 20 08:54:51.833305 master-0 kubenswrapper[27820]: I0320 08:54:51.833247 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5d487313-8796-4bf7-8ac5-051f76b021e5-v4-0-config-system-service-ca\") pod \"5d487313-8796-4bf7-8ac5-051f76b021e5\" (UID: \"5d487313-8796-4bf7-8ac5-051f76b021e5\") " Mar 20 08:54:51.833305 master-0 kubenswrapper[27820]: I0320 08:54:51.833312 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5d487313-8796-4bf7-8ac5-051f76b021e5-audit-dir\") pod \"5d487313-8796-4bf7-8ac5-051f76b021e5\" (UID: \"5d487313-8796-4bf7-8ac5-051f76b021e5\") " Mar 20 08:54:51.833600 master-0 kubenswrapper[27820]: I0320 08:54:51.833333 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5d487313-8796-4bf7-8ac5-051f76b021e5-v4-0-config-system-router-certs\") pod \"5d487313-8796-4bf7-8ac5-051f76b021e5\" (UID: \"5d487313-8796-4bf7-8ac5-051f76b021e5\") " Mar 20 08:54:51.833600 master-0 kubenswrapper[27820]: I0320 08:54:51.833368 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqxcd\" (UniqueName: \"kubernetes.io/projected/5d487313-8796-4bf7-8ac5-051f76b021e5-kube-api-access-dqxcd\") pod \"5d487313-8796-4bf7-8ac5-051f76b021e5\" (UID: \"5d487313-8796-4bf7-8ac5-051f76b021e5\") " Mar 20 08:54:51.833600 master-0 kubenswrapper[27820]: I0320 08:54:51.833400 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5d487313-8796-4bf7-8ac5-051f76b021e5-v4-0-config-user-template-provider-selection\") pod \"5d487313-8796-4bf7-8ac5-051f76b021e5\" (UID: \"5d487313-8796-4bf7-8ac5-051f76b021e5\") " Mar 20 08:54:51.833600 master-0 kubenswrapper[27820]: I0320 08:54:51.833424 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5d487313-8796-4bf7-8ac5-051f76b021e5-v4-0-config-system-trusted-ca-bundle\") pod \"5d487313-8796-4bf7-8ac5-051f76b021e5\" (UID: \"5d487313-8796-4bf7-8ac5-051f76b021e5\") " Mar 20 08:54:51.833600 master-0 kubenswrapper[27820]: I0320 08:54:51.833451 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5d487313-8796-4bf7-8ac5-051f76b021e5-v4-0-config-system-cliconfig\") pod \"5d487313-8796-4bf7-8ac5-051f76b021e5\" (UID: \"5d487313-8796-4bf7-8ac5-051f76b021e5\") " Mar 20 08:54:51.833600 master-0 kubenswrapper[27820]: I0320 08:54:51.833547 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5d487313-8796-4bf7-8ac5-051f76b021e5-v4-0-config-system-session\") pod \"5d487313-8796-4bf7-8ac5-051f76b021e5\" (UID: \"5d487313-8796-4bf7-8ac5-051f76b021e5\") " Mar 20 08:54:51.833862 master-0 kubenswrapper[27820]: I0320 08:54:51.833727 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5d487313-8796-4bf7-8ac5-051f76b021e5-v4-0-config-system-ocp-branding-template\") pod \"5d487313-8796-4bf7-8ac5-051f76b021e5\" (UID: \"5d487313-8796-4bf7-8ac5-051f76b021e5\") " Mar 20 08:54:51.833862 master-0 kubenswrapper[27820]: I0320 08:54:51.833763 27820 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5d487313-8796-4bf7-8ac5-051f76b021e5-v4-0-config-user-template-login\") pod \"5d487313-8796-4bf7-8ac5-051f76b021e5\" (UID: \"5d487313-8796-4bf7-8ac5-051f76b021e5\") " Mar 20 08:54:51.833862 master-0 kubenswrapper[27820]: I0320 08:54:51.833805 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5d487313-8796-4bf7-8ac5-051f76b021e5-v4-0-config-system-serving-cert\") pod \"5d487313-8796-4bf7-8ac5-051f76b021e5\" (UID: \"5d487313-8796-4bf7-8ac5-051f76b021e5\") " Mar 20 08:54:51.833862 master-0 kubenswrapper[27820]: I0320 08:54:51.833841 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5d487313-8796-4bf7-8ac5-051f76b021e5-audit-policies\") pod \"5d487313-8796-4bf7-8ac5-051f76b021e5\" (UID: \"5d487313-8796-4bf7-8ac5-051f76b021e5\") " Mar 20 08:54:51.833862 master-0 kubenswrapper[27820]: I0320 08:54:51.833858 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5d487313-8796-4bf7-8ac5-051f76b021e5-v4-0-config-user-template-error\") pod \"5d487313-8796-4bf7-8ac5-051f76b021e5\" (UID: \"5d487313-8796-4bf7-8ac5-051f76b021e5\") " Mar 20 08:54:51.834104 master-0 kubenswrapper[27820]: I0320 08:54:51.834072 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/34085a98-e268-499c-ab1b-f058add5cbfa-v4-0-config-system-service-ca\") pod \"oauth-openshift-98d8fdfc5-dbdbd\" (UID: \"34085a98-e268-499c-ab1b-f058add5cbfa\") " pod="openshift-authentication/oauth-openshift-98d8fdfc5-dbdbd" Mar 20 08:54:51.834150 master-0 kubenswrapper[27820]: I0320 
08:54:51.834117 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/34085a98-e268-499c-ab1b-f058add5cbfa-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-98d8fdfc5-dbdbd\" (UID: \"34085a98-e268-499c-ab1b-f058add5cbfa\") " pod="openshift-authentication/oauth-openshift-98d8fdfc5-dbdbd" Mar 20 08:54:51.834150 master-0 kubenswrapper[27820]: I0320 08:54:51.834142 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/34085a98-e268-499c-ab1b-f058add5cbfa-v4-0-config-system-cliconfig\") pod \"oauth-openshift-98d8fdfc5-dbdbd\" (UID: \"34085a98-e268-499c-ab1b-f058add5cbfa\") " pod="openshift-authentication/oauth-openshift-98d8fdfc5-dbdbd" Mar 20 08:54:51.834213 master-0 kubenswrapper[27820]: I0320 08:54:51.834160 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/34085a98-e268-499c-ab1b-f058add5cbfa-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-98d8fdfc5-dbdbd\" (UID: \"34085a98-e268-499c-ab1b-f058add5cbfa\") " pod="openshift-authentication/oauth-openshift-98d8fdfc5-dbdbd" Mar 20 08:54:51.834213 master-0 kubenswrapper[27820]: I0320 08:54:51.834189 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/34085a98-e268-499c-ab1b-f058add5cbfa-audit-dir\") pod \"oauth-openshift-98d8fdfc5-dbdbd\" (UID: \"34085a98-e268-499c-ab1b-f058add5cbfa\") " pod="openshift-authentication/oauth-openshift-98d8fdfc5-dbdbd" Mar 20 08:54:51.834301 master-0 kubenswrapper[27820]: I0320 08:54:51.834215 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/34085a98-e268-499c-ab1b-f058add5cbfa-v4-0-config-user-template-login\") pod \"oauth-openshift-98d8fdfc5-dbdbd\" (UID: \"34085a98-e268-499c-ab1b-f058add5cbfa\") " pod="openshift-authentication/oauth-openshift-98d8fdfc5-dbdbd" Mar 20 08:54:51.834301 master-0 kubenswrapper[27820]: I0320 08:54:51.834234 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/34085a98-e268-499c-ab1b-f058add5cbfa-audit-policies\") pod \"oauth-openshift-98d8fdfc5-dbdbd\" (UID: \"34085a98-e268-499c-ab1b-f058add5cbfa\") " pod="openshift-authentication/oauth-openshift-98d8fdfc5-dbdbd" Mar 20 08:54:51.834301 master-0 kubenswrapper[27820]: I0320 08:54:51.834292 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbkrp\" (UniqueName: \"kubernetes.io/projected/34085a98-e268-499c-ab1b-f058add5cbfa-kube-api-access-jbkrp\") pod \"oauth-openshift-98d8fdfc5-dbdbd\" (UID: \"34085a98-e268-499c-ab1b-f058add5cbfa\") " pod="openshift-authentication/oauth-openshift-98d8fdfc5-dbdbd" Mar 20 08:54:51.834396 master-0 kubenswrapper[27820]: I0320 08:54:51.834314 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/34085a98-e268-499c-ab1b-f058add5cbfa-v4-0-config-user-template-error\") pod \"oauth-openshift-98d8fdfc5-dbdbd\" (UID: \"34085a98-e268-499c-ab1b-f058add5cbfa\") " pod="openshift-authentication/oauth-openshift-98d8fdfc5-dbdbd" Mar 20 08:54:51.834396 master-0 kubenswrapper[27820]: I0320 08:54:51.834349 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/34085a98-e268-499c-ab1b-f058add5cbfa-v4-0-config-system-serving-cert\") pod \"oauth-openshift-98d8fdfc5-dbdbd\" (UID: \"34085a98-e268-499c-ab1b-f058add5cbfa\") " pod="openshift-authentication/oauth-openshift-98d8fdfc5-dbdbd" Mar 20 08:54:51.834396 master-0 kubenswrapper[27820]: I0320 08:54:51.834385 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/34085a98-e268-499c-ab1b-f058add5cbfa-v4-0-config-system-session\") pod \"oauth-openshift-98d8fdfc5-dbdbd\" (UID: \"34085a98-e268-499c-ab1b-f058add5cbfa\") " pod="openshift-authentication/oauth-openshift-98d8fdfc5-dbdbd" Mar 20 08:54:51.834490 master-0 kubenswrapper[27820]: I0320 08:54:51.834414 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34085a98-e268-499c-ab1b-f058add5cbfa-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-98d8fdfc5-dbdbd\" (UID: \"34085a98-e268-499c-ab1b-f058add5cbfa\") " pod="openshift-authentication/oauth-openshift-98d8fdfc5-dbdbd" Mar 20 08:54:51.834490 master-0 kubenswrapper[27820]: I0320 08:54:51.834431 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/34085a98-e268-499c-ab1b-f058add5cbfa-v4-0-config-system-router-certs\") pod \"oauth-openshift-98d8fdfc5-dbdbd\" (UID: \"34085a98-e268-499c-ab1b-f058add5cbfa\") " pod="openshift-authentication/oauth-openshift-98d8fdfc5-dbdbd" Mar 20 08:54:51.835032 master-0 kubenswrapper[27820]: I0320 08:54:51.834983 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d487313-8796-4bf7-8ac5-051f76b021e5-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "5d487313-8796-4bf7-8ac5-051f76b021e5" (UID: 
"5d487313-8796-4bf7-8ac5-051f76b021e5"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:54:51.835145 master-0 kubenswrapper[27820]: I0320 08:54:51.835115 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d487313-8796-4bf7-8ac5-051f76b021e5-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "5d487313-8796-4bf7-8ac5-051f76b021e5" (UID: "5d487313-8796-4bf7-8ac5-051f76b021e5"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:54:51.835599 master-0 kubenswrapper[27820]: I0320 08:54:51.835554 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d487313-8796-4bf7-8ac5-051f76b021e5-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "5d487313-8796-4bf7-8ac5-051f76b021e5" (UID: "5d487313-8796-4bf7-8ac5-051f76b021e5"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:54:51.835774 master-0 kubenswrapper[27820]: I0320 08:54:51.835689 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d487313-8796-4bf7-8ac5-051f76b021e5-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "5d487313-8796-4bf7-8ac5-051f76b021e5" (UID: "5d487313-8796-4bf7-8ac5-051f76b021e5"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:54:51.836373 master-0 kubenswrapper[27820]: I0320 08:54:51.836336 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d487313-8796-4bf7-8ac5-051f76b021e5-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "5d487313-8796-4bf7-8ac5-051f76b021e5" (UID: "5d487313-8796-4bf7-8ac5-051f76b021e5"). 
InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:54:51.837885 master-0 kubenswrapper[27820]: I0320 08:54:51.837822 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d487313-8796-4bf7-8ac5-051f76b021e5-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "5d487313-8796-4bf7-8ac5-051f76b021e5" (UID: "5d487313-8796-4bf7-8ac5-051f76b021e5"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:54:51.837885 master-0 kubenswrapper[27820]: I0320 08:54:51.837844 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d487313-8796-4bf7-8ac5-051f76b021e5-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "5d487313-8796-4bf7-8ac5-051f76b021e5" (UID: "5d487313-8796-4bf7-8ac5-051f76b021e5"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:54:51.837885 master-0 kubenswrapper[27820]: I0320 08:54:51.837852 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d487313-8796-4bf7-8ac5-051f76b021e5-kube-api-access-dqxcd" (OuterVolumeSpecName: "kube-api-access-dqxcd") pod "5d487313-8796-4bf7-8ac5-051f76b021e5" (UID: "5d487313-8796-4bf7-8ac5-051f76b021e5"). InnerVolumeSpecName "kube-api-access-dqxcd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:54:51.838016 master-0 kubenswrapper[27820]: I0320 08:54:51.837917 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d487313-8796-4bf7-8ac5-051f76b021e5-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "5d487313-8796-4bf7-8ac5-051f76b021e5" (UID: "5d487313-8796-4bf7-8ac5-051f76b021e5"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:54:51.838252 master-0 kubenswrapper[27820]: I0320 08:54:51.838191 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d487313-8796-4bf7-8ac5-051f76b021e5-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "5d487313-8796-4bf7-8ac5-051f76b021e5" (UID: "5d487313-8796-4bf7-8ac5-051f76b021e5"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:54:51.838962 master-0 kubenswrapper[27820]: I0320 08:54:51.838924 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d487313-8796-4bf7-8ac5-051f76b021e5-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "5d487313-8796-4bf7-8ac5-051f76b021e5" (UID: "5d487313-8796-4bf7-8ac5-051f76b021e5"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:54:51.839131 master-0 kubenswrapper[27820]: I0320 08:54:51.839103 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d487313-8796-4bf7-8ac5-051f76b021e5-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "5d487313-8796-4bf7-8ac5-051f76b021e5" (UID: "5d487313-8796-4bf7-8ac5-051f76b021e5"). 
InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:54:51.839417 master-0 kubenswrapper[27820]: I0320 08:54:51.839364 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d487313-8796-4bf7-8ac5-051f76b021e5-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "5d487313-8796-4bf7-8ac5-051f76b021e5" (UID: "5d487313-8796-4bf7-8ac5-051f76b021e5"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:54:51.936736 master-0 kubenswrapper[27820]: I0320 08:54:51.935522 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbkrp\" (UniqueName: \"kubernetes.io/projected/34085a98-e268-499c-ab1b-f058add5cbfa-kube-api-access-jbkrp\") pod \"oauth-openshift-98d8fdfc5-dbdbd\" (UID: \"34085a98-e268-499c-ab1b-f058add5cbfa\") " pod="openshift-authentication/oauth-openshift-98d8fdfc5-dbdbd"
Mar 20 08:54:51.936736 master-0 kubenswrapper[27820]: I0320 08:54:51.935571 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/34085a98-e268-499c-ab1b-f058add5cbfa-v4-0-config-user-template-error\") pod \"oauth-openshift-98d8fdfc5-dbdbd\" (UID: \"34085a98-e268-499c-ab1b-f058add5cbfa\") " pod="openshift-authentication/oauth-openshift-98d8fdfc5-dbdbd"
Mar 20 08:54:51.936736 master-0 kubenswrapper[27820]: I0320 08:54:51.935788 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/34085a98-e268-499c-ab1b-f058add5cbfa-v4-0-config-system-serving-cert\") pod \"oauth-openshift-98d8fdfc5-dbdbd\" (UID: \"34085a98-e268-499c-ab1b-f058add5cbfa\") " pod="openshift-authentication/oauth-openshift-98d8fdfc5-dbdbd"
Mar 20 08:54:51.936736 master-0 kubenswrapper[27820]: I0320 08:54:51.936039 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/34085a98-e268-499c-ab1b-f058add5cbfa-v4-0-config-system-session\") pod \"oauth-openshift-98d8fdfc5-dbdbd\" (UID: \"34085a98-e268-499c-ab1b-f058add5cbfa\") " pod="openshift-authentication/oauth-openshift-98d8fdfc5-dbdbd"
Mar 20 08:54:51.936736 master-0 kubenswrapper[27820]: I0320 08:54:51.936099 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34085a98-e268-499c-ab1b-f058add5cbfa-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-98d8fdfc5-dbdbd\" (UID: \"34085a98-e268-499c-ab1b-f058add5cbfa\") " pod="openshift-authentication/oauth-openshift-98d8fdfc5-dbdbd"
Mar 20 08:54:51.936736 master-0 kubenswrapper[27820]: I0320 08:54:51.936127 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/34085a98-e268-499c-ab1b-f058add5cbfa-v4-0-config-system-router-certs\") pod \"oauth-openshift-98d8fdfc5-dbdbd\" (UID: \"34085a98-e268-499c-ab1b-f058add5cbfa\") " pod="openshift-authentication/oauth-openshift-98d8fdfc5-dbdbd"
Mar 20 08:54:51.936736 master-0 kubenswrapper[27820]: I0320 08:54:51.936162 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/34085a98-e268-499c-ab1b-f058add5cbfa-v4-0-config-system-service-ca\") pod \"oauth-openshift-98d8fdfc5-dbdbd\" (UID: \"34085a98-e268-499c-ab1b-f058add5cbfa\") " pod="openshift-authentication/oauth-openshift-98d8fdfc5-dbdbd"
Mar 20 08:54:51.936736 master-0 kubenswrapper[27820]: I0320 08:54:51.936191 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/34085a98-e268-499c-ab1b-f058add5cbfa-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-98d8fdfc5-dbdbd\" (UID: \"34085a98-e268-499c-ab1b-f058add5cbfa\") " pod="openshift-authentication/oauth-openshift-98d8fdfc5-dbdbd"
Mar 20 08:54:51.936736 master-0 kubenswrapper[27820]: I0320 08:54:51.936215 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/34085a98-e268-499c-ab1b-f058add5cbfa-v4-0-config-system-cliconfig\") pod \"oauth-openshift-98d8fdfc5-dbdbd\" (UID: \"34085a98-e268-499c-ab1b-f058add5cbfa\") " pod="openshift-authentication/oauth-openshift-98d8fdfc5-dbdbd"
Mar 20 08:54:51.936736 master-0 kubenswrapper[27820]: I0320 08:54:51.936231 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/34085a98-e268-499c-ab1b-f058add5cbfa-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-98d8fdfc5-dbdbd\" (UID: \"34085a98-e268-499c-ab1b-f058add5cbfa\") " pod="openshift-authentication/oauth-openshift-98d8fdfc5-dbdbd"
Mar 20 08:54:51.936736 master-0 kubenswrapper[27820]: I0320 08:54:51.936493 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/34085a98-e268-499c-ab1b-f058add5cbfa-audit-dir\") pod \"oauth-openshift-98d8fdfc5-dbdbd\" (UID: \"34085a98-e268-499c-ab1b-f058add5cbfa\") " pod="openshift-authentication/oauth-openshift-98d8fdfc5-dbdbd"
Mar 20 08:54:51.936736 master-0 kubenswrapper[27820]: I0320 08:54:51.936584 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/34085a98-e268-499c-ab1b-f058add5cbfa-v4-0-config-user-template-login\") pod \"oauth-openshift-98d8fdfc5-dbdbd\" (UID: \"34085a98-e268-499c-ab1b-f058add5cbfa\") " pod="openshift-authentication/oauth-openshift-98d8fdfc5-dbdbd"
Mar 20 08:54:51.936736 master-0 kubenswrapper[27820]: I0320 08:54:51.936649 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/34085a98-e268-499c-ab1b-f058add5cbfa-audit-dir\") pod \"oauth-openshift-98d8fdfc5-dbdbd\" (UID: \"34085a98-e268-499c-ab1b-f058add5cbfa\") " pod="openshift-authentication/oauth-openshift-98d8fdfc5-dbdbd"
Mar 20 08:54:51.940347 master-0 kubenswrapper[27820]: I0320 08:54:51.936867 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/34085a98-e268-499c-ab1b-f058add5cbfa-audit-policies\") pod \"oauth-openshift-98d8fdfc5-dbdbd\" (UID: \"34085a98-e268-499c-ab1b-f058add5cbfa\") " pod="openshift-authentication/oauth-openshift-98d8fdfc5-dbdbd"
Mar 20 08:54:51.940347 master-0 kubenswrapper[27820]: I0320 08:54:51.936952 27820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqxcd\" (UniqueName: \"kubernetes.io/projected/5d487313-8796-4bf7-8ac5-051f76b021e5-kube-api-access-dqxcd\") on node \"master-0\" DevicePath \"\""
Mar 20 08:54:51.940347 master-0 kubenswrapper[27820]: I0320 08:54:51.936966 27820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5d487313-8796-4bf7-8ac5-051f76b021e5-v4-0-config-system-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 20 08:54:51.940347 master-0 kubenswrapper[27820]: I0320 08:54:51.937000 27820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5d487313-8796-4bf7-8ac5-051f76b021e5-v4-0-config-system-session\") on node \"master-0\" DevicePath \"\""
Mar 20 08:54:51.940347 master-0 kubenswrapper[27820]: I0320 08:54:51.937011 27820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5d487313-8796-4bf7-8ac5-051f76b021e5-v4-0-config-user-template-login\") on node \"master-0\" DevicePath \"\""
Mar 20 08:54:51.940347 master-0 kubenswrapper[27820]: I0320 08:54:51.937023 27820 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5d487313-8796-4bf7-8ac5-051f76b021e5-audit-policies\") on node \"master-0\" DevicePath \"\""
Mar 20 08:54:51.940347 master-0 kubenswrapper[27820]: I0320 08:54:51.937034 27820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5d487313-8796-4bf7-8ac5-051f76b021e5-v4-0-config-user-template-error\") on node \"master-0\" DevicePath \"\""
Mar 20 08:54:51.940347 master-0 kubenswrapper[27820]: I0320 08:54:51.937047 27820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5d487313-8796-4bf7-8ac5-051f76b021e5-v4-0-config-system-service-ca\") on node \"master-0\" DevicePath \"\""
Mar 20 08:54:51.940347 master-0 kubenswrapper[27820]: I0320 08:54:51.937058 27820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5d487313-8796-4bf7-8ac5-051f76b021e5-v4-0-config-system-router-certs\") on node \"master-0\" DevicePath \"\""
Mar 20 08:54:51.940347 master-0 kubenswrapper[27820]: I0320 08:54:51.937071 27820 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5d487313-8796-4bf7-8ac5-051f76b021e5-audit-dir\") on node \"master-0\" DevicePath \"\""
Mar 20 08:54:51.940347 master-0 kubenswrapper[27820]: I0320 08:54:51.937085 27820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5d487313-8796-4bf7-8ac5-051f76b021e5-v4-0-config-user-template-provider-selection\") on node \"master-0\" DevicePath \"\""
Mar 20 08:54:51.940347 master-0 kubenswrapper[27820]: I0320 08:54:51.937100 27820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5d487313-8796-4bf7-8ac5-051f76b021e5-v4-0-config-system-cliconfig\") on node \"master-0\" DevicePath \"\""
Mar 20 08:54:51.940347 master-0 kubenswrapper[27820]: I0320 08:54:51.937115 27820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5d487313-8796-4bf7-8ac5-051f76b021e5-v4-0-config-system-ocp-branding-template\") on node \"master-0\" DevicePath \"\""
Mar 20 08:54:51.940347 master-0 kubenswrapper[27820]: I0320 08:54:51.937129 27820 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5d487313-8796-4bf7-8ac5-051f76b021e5-v4-0-config-system-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 20 08:54:51.940347 master-0 kubenswrapper[27820]: I0320 08:54:51.937899 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/34085a98-e268-499c-ab1b-f058add5cbfa-v4-0-config-system-cliconfig\") pod \"oauth-openshift-98d8fdfc5-dbdbd\" (UID: \"34085a98-e268-499c-ab1b-f058add5cbfa\") " pod="openshift-authentication/oauth-openshift-98d8fdfc5-dbdbd"
Mar 20 08:54:51.940347 master-0 kubenswrapper[27820]: I0320 08:54:51.938221 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/34085a98-e268-499c-ab1b-f058add5cbfa-v4-0-config-system-service-ca\") pod \"oauth-openshift-98d8fdfc5-dbdbd\" (UID: \"34085a98-e268-499c-ab1b-f058add5cbfa\") " pod="openshift-authentication/oauth-openshift-98d8fdfc5-dbdbd"
Mar 20 08:54:51.940347 master-0 kubenswrapper[27820]: I0320 08:54:51.938655 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/34085a98-e268-499c-ab1b-f058add5cbfa-audit-policies\") pod \"oauth-openshift-98d8fdfc5-dbdbd\" (UID: \"34085a98-e268-499c-ab1b-f058add5cbfa\") " pod="openshift-authentication/oauth-openshift-98d8fdfc5-dbdbd"
Mar 20 08:54:51.940347 master-0 kubenswrapper[27820]: I0320 08:54:51.939129 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/34085a98-e268-499c-ab1b-f058add5cbfa-v4-0-config-system-session\") pod \"oauth-openshift-98d8fdfc5-dbdbd\" (UID: \"34085a98-e268-499c-ab1b-f058add5cbfa\") " pod="openshift-authentication/oauth-openshift-98d8fdfc5-dbdbd"
Mar 20 08:54:51.940347 master-0 kubenswrapper[27820]: I0320 08:54:51.939164 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/34085a98-e268-499c-ab1b-f058add5cbfa-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-98d8fdfc5-dbdbd\" (UID: \"34085a98-e268-499c-ab1b-f058add5cbfa\") " pod="openshift-authentication/oauth-openshift-98d8fdfc5-dbdbd"
Mar 20 08:54:51.940347 master-0 kubenswrapper[27820]: I0320 08:54:51.939392 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/34085a98-e268-499c-ab1b-f058add5cbfa-v4-0-config-system-router-certs\") pod \"oauth-openshift-98d8fdfc5-dbdbd\" (UID: \"34085a98-e268-499c-ab1b-f058add5cbfa\") " pod="openshift-authentication/oauth-openshift-98d8fdfc5-dbdbd"
Mar 20 08:54:51.940347 master-0 kubenswrapper[27820]: I0320 08:54:51.939683 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/34085a98-e268-499c-ab1b-f058add5cbfa-v4-0-config-system-serving-cert\") pod \"oauth-openshift-98d8fdfc5-dbdbd\" (UID: \"34085a98-e268-499c-ab1b-f058add5cbfa\") " pod="openshift-authentication/oauth-openshift-98d8fdfc5-dbdbd"
Mar 20 08:54:51.940347 master-0 kubenswrapper[27820]: I0320 08:54:51.939938 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34085a98-e268-499c-ab1b-f058add5cbfa-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-98d8fdfc5-dbdbd\" (UID: \"34085a98-e268-499c-ab1b-f058add5cbfa\") " pod="openshift-authentication/oauth-openshift-98d8fdfc5-dbdbd"
Mar 20 08:54:51.944918 master-0 kubenswrapper[27820]: I0320 08:54:51.941447 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/34085a98-e268-499c-ab1b-f058add5cbfa-v4-0-config-user-template-login\") pod \"oauth-openshift-98d8fdfc5-dbdbd\" (UID: \"34085a98-e268-499c-ab1b-f058add5cbfa\") " pod="openshift-authentication/oauth-openshift-98d8fdfc5-dbdbd"
Mar 20 08:54:51.944918 master-0 kubenswrapper[27820]: I0320 08:54:51.942450 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/34085a98-e268-499c-ab1b-f058add5cbfa-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-98d8fdfc5-dbdbd\" (UID: \"34085a98-e268-499c-ab1b-f058add5cbfa\") " pod="openshift-authentication/oauth-openshift-98d8fdfc5-dbdbd"
Mar 20 08:54:51.944918 master-0 kubenswrapper[27820]: I0320 08:54:51.943802 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/34085a98-e268-499c-ab1b-f058add5cbfa-v4-0-config-user-template-error\") pod \"oauth-openshift-98d8fdfc5-dbdbd\" (UID: \"34085a98-e268-499c-ab1b-f058add5cbfa\") " pod="openshift-authentication/oauth-openshift-98d8fdfc5-dbdbd"
Mar 20 08:54:51.956943 master-0 kubenswrapper[27820]: I0320 08:54:51.956882 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbkrp\" (UniqueName: \"kubernetes.io/projected/34085a98-e268-499c-ab1b-f058add5cbfa-kube-api-access-jbkrp\") pod \"oauth-openshift-98d8fdfc5-dbdbd\" (UID: \"34085a98-e268-499c-ab1b-f058add5cbfa\") " pod="openshift-authentication/oauth-openshift-98d8fdfc5-dbdbd"
Mar 20 08:54:52.063042 master-0 kubenswrapper[27820]: I0320 08:54:52.062953 27820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-98d8fdfc5-dbdbd"
Mar 20 08:54:52.105379 master-0 kubenswrapper[27820]: I0320 08:54:52.104761 27820 generic.go:334] "Generic (PLEG): container finished" podID="5d487313-8796-4bf7-8ac5-051f76b021e5" containerID="8be1d83a5080732d228b654fd239d76346c6a52f8900e3b16fceaf8686453dbd" exitCode=0
Mar 20 08:54:52.105379 master-0 kubenswrapper[27820]: I0320 08:54:52.104806 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-b45dccc8f-nt7jw" event={"ID":"5d487313-8796-4bf7-8ac5-051f76b021e5","Type":"ContainerDied","Data":"8be1d83a5080732d228b654fd239d76346c6a52f8900e3b16fceaf8686453dbd"}
Mar 20 08:54:52.105379 master-0 kubenswrapper[27820]: I0320 08:54:52.104831 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-b45dccc8f-nt7jw" event={"ID":"5d487313-8796-4bf7-8ac5-051f76b021e5","Type":"ContainerDied","Data":"2e6dda4f55c7d88a4d20a5f62a7493f8f34d0739573c38a4abbb062efb0b5502"}
Mar 20 08:54:52.105379 master-0 kubenswrapper[27820]: I0320 08:54:52.104847 27820 scope.go:117] "RemoveContainer" containerID="8be1d83a5080732d228b654fd239d76346c6a52f8900e3b16fceaf8686453dbd"
Mar 20 08:54:52.105379 master-0 kubenswrapper[27820]: I0320 08:54:52.104954 27820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-b45dccc8f-nt7jw"
Mar 20 08:54:52.134974 master-0 kubenswrapper[27820]: I0320 08:54:52.132962 27820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-b45dccc8f-nt7jw"]
Mar 20 08:54:52.136152 master-0 kubenswrapper[27820]: I0320 08:54:52.136116 27820 scope.go:117] "RemoveContainer" containerID="8be1d83a5080732d228b654fd239d76346c6a52f8900e3b16fceaf8686453dbd"
Mar 20 08:54:52.136791 master-0 kubenswrapper[27820]: E0320 08:54:52.136757 27820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8be1d83a5080732d228b654fd239d76346c6a52f8900e3b16fceaf8686453dbd\": container with ID starting with 8be1d83a5080732d228b654fd239d76346c6a52f8900e3b16fceaf8686453dbd not found: ID does not exist" containerID="8be1d83a5080732d228b654fd239d76346c6a52f8900e3b16fceaf8686453dbd"
Mar 20 08:54:52.136887 master-0 kubenswrapper[27820]: I0320 08:54:52.136794 27820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8be1d83a5080732d228b654fd239d76346c6a52f8900e3b16fceaf8686453dbd"} err="failed to get container status \"8be1d83a5080732d228b654fd239d76346c6a52f8900e3b16fceaf8686453dbd\": rpc error: code = NotFound desc = could not find container \"8be1d83a5080732d228b654fd239d76346c6a52f8900e3b16fceaf8686453dbd\": container with ID starting with 8be1d83a5080732d228b654fd239d76346c6a52f8900e3b16fceaf8686453dbd not found: ID does not exist"
Mar 20 08:54:52.138262 master-0 kubenswrapper[27820]: I0320 08:54:52.138223 27820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-b45dccc8f-nt7jw"]
Mar 20 08:54:52.412094 master-0 kubenswrapper[27820]: I0320 08:54:52.412022 27820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-64c7dbd4b9-vtfnn"
Mar 20 08:54:52.423624 master-0 kubenswrapper[27820]: I0320 08:54:52.423560 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-64c7dbd4b9-vtfnn"
Mar 20 08:54:52.495344 master-0 kubenswrapper[27820]: I0320 08:54:52.495280 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-98d8fdfc5-dbdbd"]
Mar 20 08:54:52.547180 master-0 kubenswrapper[27820]: I0320 08:54:52.547121 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-66b8ffb895-8rz5f"
Mar 20 08:54:53.117116 master-0 kubenswrapper[27820]: I0320 08:54:53.116999 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-98d8fdfc5-dbdbd" event={"ID":"34085a98-e268-499c-ab1b-f058add5cbfa","Type":"ContainerStarted","Data":"34c9fb7a7bacd184ca4737ade624a0cb90a6d961a5e18186a831c98abcbed648"}
Mar 20 08:54:53.117116 master-0 kubenswrapper[27820]: I0320 08:54:53.117103 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-98d8fdfc5-dbdbd" event={"ID":"34085a98-e268-499c-ab1b-f058add5cbfa","Type":"ContainerStarted","Data":"49345d5d99f55e38c1ad91fddac1ae8858d5f37207780fe0cca3581d7b8b9d69"}
Mar 20 08:54:53.117660 master-0 kubenswrapper[27820]: I0320 08:54:53.117460 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-98d8fdfc5-dbdbd"
Mar 20 08:54:53.128307 master-0 kubenswrapper[27820]: I0320 08:54:53.125956 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-98d8fdfc5-dbdbd"
Mar 20 08:54:53.152062 master-0 kubenswrapper[27820]: I0320 08:54:53.151930 27820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-98d8fdfc5-dbdbd" podStartSLOduration=27.151905497 podStartE2EDuration="27.151905497s" podCreationTimestamp="2026-03-20 08:54:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:54:53.146250013 +0000 UTC m=+303.241459177" watchObservedRunningTime="2026-03-20 08:54:53.151905497 +0000 UTC m=+303.247114651"
Mar 20 08:54:54.090874 master-0 kubenswrapper[27820]: I0320 08:54:54.090780 27820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d487313-8796-4bf7-8ac5-051f76b021e5" path="/var/lib/kubelet/pods/5d487313-8796-4bf7-8ac5-051f76b021e5/volumes"
Mar 20 08:54:56.270213 master-0 kubenswrapper[27820]: I0320 08:54:56.270106 27820 patch_prober.go:28] interesting pod/console-7467fcc69-2tx6g container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.105:8443/health\": dial tcp 10.128.0.105:8443: connect: connection refused" start-of-body=
Mar 20 08:54:56.270812 master-0 kubenswrapper[27820]: I0320 08:54:56.270214 27820 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7467fcc69-2tx6g" podUID="56f17a00-2a28-4406-84ed-40a2a5eecd15" containerName="console" probeResult="failure" output="Get \"https://10.128.0.105:8443/health\": dial tcp 10.128.0.105:8443: connect: connection refused"
Mar 20 08:55:06.282798 master-0 kubenswrapper[27820]: I0320 08:55:06.282734 27820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7467fcc69-2tx6g"
Mar 20 08:55:06.286285 master-0 kubenswrapper[27820]: I0320 08:55:06.286216 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7467fcc69-2tx6g"
Mar 20 08:55:06.389470 master-0 kubenswrapper[27820]: I0320 08:55:06.388456 27820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6f98bb7c67-q7pqm"]
Mar 20 08:55:13.994059 master-0 kubenswrapper[27820]: I0320 08:55:13.993893 27820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0"
Mar 20 08:55:14.032936 master-0 kubenswrapper[27820]: I0320 08:55:14.032880 27820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0"
Mar 20 08:55:14.319358 master-0 kubenswrapper[27820]: I0320 08:55:14.319230 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0"
Mar 20 08:55:27.717778 master-0 kubenswrapper[27820]: I0320 08:55:27.717704 27820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Mar 20 08:55:27.718446 master-0 kubenswrapper[27820]: I0320 08:55:27.718127 27820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="a8e6b860-8317-4fbe-9f43-41b0b707bc1b" containerName="alertmanager" containerID="cri-o://f4a46cc2965446b5f5bb1a0f866ba522498df24080bae2df04571bb0e1e6c668" gracePeriod=120
Mar 20 08:55:27.718446 master-0 kubenswrapper[27820]: I0320 08:55:27.718176 27820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="a8e6b860-8317-4fbe-9f43-41b0b707bc1b" containerName="kube-rbac-proxy-metric" containerID="cri-o://58ba7614fb7173610e309e42b215d5decc16db8e5af63a0839976ce35bf26812" gracePeriod=120
Mar 20 08:55:27.718446 master-0 kubenswrapper[27820]: I0320 08:55:27.718247 27820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="a8e6b860-8317-4fbe-9f43-41b0b707bc1b" containerName="config-reloader" containerID="cri-o://be105d224117f2641ce3bdaf87aa57eb28f1828c7fe4ba883b627a05a862070b" gracePeriod=120
Mar 20 08:55:27.718446 master-0 kubenswrapper[27820]: I0320 08:55:27.718256 27820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="a8e6b860-8317-4fbe-9f43-41b0b707bc1b" containerName="kube-rbac-proxy-web" containerID="cri-o://5330547895ea24633c382d8f8c61029ed24f36aad27fb00241113a7b24d3af6b" gracePeriod=120
Mar 20 08:55:27.718446 master-0 kubenswrapper[27820]: I0320 08:55:27.718372 27820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="a8e6b860-8317-4fbe-9f43-41b0b707bc1b" containerName="kube-rbac-proxy" containerID="cri-o://83e3a486b966a6b512b103777ba8647b506d7b21e87c5d1b9cd40bec5a3158cc" gracePeriod=120
Mar 20 08:55:27.718446 master-0 kubenswrapper[27820]: I0320 08:55:27.718434 27820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="a8e6b860-8317-4fbe-9f43-41b0b707bc1b" containerName="prom-label-proxy" containerID="cri-o://502f51f25cacc4bcbcfc17fee9557c02cfcf0ff1d7d5984915cc84a3bdb667c2" gracePeriod=120
Mar 20 08:55:28.155898 master-0 kubenswrapper[27820]: I0320 08:55:28.155820 27820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0"
Mar 20 08:55:28.261996 master-0 kubenswrapper[27820]: I0320 08:55:28.261947 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/a8e6b860-8317-4fbe-9f43-41b0b707bc1b-tls-assets\") pod \"a8e6b860-8317-4fbe-9f43-41b0b707bc1b\" (UID: \"a8e6b860-8317-4fbe-9f43-41b0b707bc1b\") "
Mar 20 08:55:28.261996 master-0 kubenswrapper[27820]: I0320 08:55:28.261993 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zcn7\" (UniqueName: \"kubernetes.io/projected/a8e6b860-8317-4fbe-9f43-41b0b707bc1b-kube-api-access-4zcn7\") pod \"a8e6b860-8317-4fbe-9f43-41b0b707bc1b\" (UID: \"a8e6b860-8317-4fbe-9f43-41b0b707bc1b\") "
Mar 20 08:55:28.262296 master-0 kubenswrapper[27820]: I0320 08:55:28.262020 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/a8e6b860-8317-4fbe-9f43-41b0b707bc1b-secret-alertmanager-main-tls\") pod \"a8e6b860-8317-4fbe-9f43-41b0b707bc1b\" (UID: \"a8e6b860-8317-4fbe-9f43-41b0b707bc1b\") "
Mar 20 08:55:28.262296 master-0 kubenswrapper[27820]: I0320 08:55:28.262059 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/a8e6b860-8317-4fbe-9f43-41b0b707bc1b-secret-alertmanager-kube-rbac-proxy-metric\") pod \"a8e6b860-8317-4fbe-9f43-41b0b707bc1b\" (UID: \"a8e6b860-8317-4fbe-9f43-41b0b707bc1b\") "
Mar 20 08:55:28.262296 master-0 kubenswrapper[27820]: I0320 08:55:28.262077 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/a8e6b860-8317-4fbe-9f43-41b0b707bc1b-config-volume\") pod \"a8e6b860-8317-4fbe-9f43-41b0b707bc1b\" (UID: \"a8e6b860-8317-4fbe-9f43-41b0b707bc1b\") "
Mar 20 08:55:28.262296 master-0 kubenswrapper[27820]: I0320 08:55:28.262101 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/a8e6b860-8317-4fbe-9f43-41b0b707bc1b-config-out\") pod \"a8e6b860-8317-4fbe-9f43-41b0b707bc1b\" (UID: \"a8e6b860-8317-4fbe-9f43-41b0b707bc1b\") "
Mar 20 08:55:28.262296 master-0 kubenswrapper[27820]: I0320 08:55:28.262173 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a8e6b860-8317-4fbe-9f43-41b0b707bc1b-metrics-client-ca\") pod \"a8e6b860-8317-4fbe-9f43-41b0b707bc1b\" (UID: \"a8e6b860-8317-4fbe-9f43-41b0b707bc1b\") "
Mar 20 08:55:28.262296 master-0 kubenswrapper[27820]: I0320 08:55:28.262193 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a8e6b860-8317-4fbe-9f43-41b0b707bc1b-alertmanager-trusted-ca-bundle\") pod \"a8e6b860-8317-4fbe-9f43-41b0b707bc1b\" (UID: \"a8e6b860-8317-4fbe-9f43-41b0b707bc1b\") "
Mar 20 08:55:28.262296 master-0 kubenswrapper[27820]: I0320 08:55:28.262230 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/a8e6b860-8317-4fbe-9f43-41b0b707bc1b-secret-alertmanager-kube-rbac-proxy-web\") pod \"a8e6b860-8317-4fbe-9f43-41b0b707bc1b\" (UID: \"a8e6b860-8317-4fbe-9f43-41b0b707bc1b\") "
Mar 20 08:55:28.262296 master-0 kubenswrapper[27820]: I0320 08:55:28.262251 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/a8e6b860-8317-4fbe-9f43-41b0b707bc1b-web-config\") pod \"a8e6b860-8317-4fbe-9f43-41b0b707bc1b\" (UID: \"a8e6b860-8317-4fbe-9f43-41b0b707bc1b\") "
Mar 20 08:55:28.262742 master-0 kubenswrapper[27820]: I0320 08:55:28.262398 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/a8e6b860-8317-4fbe-9f43-41b0b707bc1b-alertmanager-main-db\") pod \"a8e6b860-8317-4fbe-9f43-41b0b707bc1b\" (UID: \"a8e6b860-8317-4fbe-9f43-41b0b707bc1b\") "
Mar 20 08:55:28.262742 master-0 kubenswrapper[27820]: I0320 08:55:28.262418 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/a8e6b860-8317-4fbe-9f43-41b0b707bc1b-secret-alertmanager-kube-rbac-proxy\") pod \"a8e6b860-8317-4fbe-9f43-41b0b707bc1b\" (UID: \"a8e6b860-8317-4fbe-9f43-41b0b707bc1b\") "
Mar 20 08:55:28.263999 master-0 kubenswrapper[27820]: I0320 08:55:28.263929 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8e6b860-8317-4fbe-9f43-41b0b707bc1b-metrics-client-ca" (OuterVolumeSpecName: "metrics-client-ca") pod "a8e6b860-8317-4fbe-9f43-41b0b707bc1b" (UID: "a8e6b860-8317-4fbe-9f43-41b0b707bc1b"). InnerVolumeSpecName "metrics-client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 08:55:28.264109 master-0 kubenswrapper[27820]: I0320 08:55:28.263987 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8e6b860-8317-4fbe-9f43-41b0b707bc1b-alertmanager-main-db" (OuterVolumeSpecName: "alertmanager-main-db") pod "a8e6b860-8317-4fbe-9f43-41b0b707bc1b" (UID: "a8e6b860-8317-4fbe-9f43-41b0b707bc1b"). InnerVolumeSpecName "alertmanager-main-db". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 08:55:28.264665 master-0 kubenswrapper[27820]: I0320 08:55:28.264583 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8e6b860-8317-4fbe-9f43-41b0b707bc1b-alertmanager-trusted-ca-bundle" (OuterVolumeSpecName: "alertmanager-trusted-ca-bundle") pod "a8e6b860-8317-4fbe-9f43-41b0b707bc1b" (UID: "a8e6b860-8317-4fbe-9f43-41b0b707bc1b"). InnerVolumeSpecName "alertmanager-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 08:55:28.265425 master-0 kubenswrapper[27820]: I0320 08:55:28.265391 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8e6b860-8317-4fbe-9f43-41b0b707bc1b-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "a8e6b860-8317-4fbe-9f43-41b0b707bc1b" (UID: "a8e6b860-8317-4fbe-9f43-41b0b707bc1b"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:55:28.265668 master-0 kubenswrapper[27820]: I0320 08:55:28.265583 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8e6b860-8317-4fbe-9f43-41b0b707bc1b-secret-alertmanager-kube-rbac-proxy-metric" (OuterVolumeSpecName: "secret-alertmanager-kube-rbac-proxy-metric") pod "a8e6b860-8317-4fbe-9f43-41b0b707bc1b" (UID: "a8e6b860-8317-4fbe-9f43-41b0b707bc1b"). InnerVolumeSpecName "secret-alertmanager-kube-rbac-proxy-metric". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:55:28.266220 master-0 kubenswrapper[27820]: I0320 08:55:28.266153 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8e6b860-8317-4fbe-9f43-41b0b707bc1b-config-volume" (OuterVolumeSpecName: "config-volume") pod "a8e6b860-8317-4fbe-9f43-41b0b707bc1b" (UID: "a8e6b860-8317-4fbe-9f43-41b0b707bc1b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:55:28.266852 master-0 kubenswrapper[27820]: I0320 08:55:28.266814 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8e6b860-8317-4fbe-9f43-41b0b707bc1b-secret-alertmanager-kube-rbac-proxy" (OuterVolumeSpecName: "secret-alertmanager-kube-rbac-proxy") pod "a8e6b860-8317-4fbe-9f43-41b0b707bc1b" (UID: "a8e6b860-8317-4fbe-9f43-41b0b707bc1b"). InnerVolumeSpecName "secret-alertmanager-kube-rbac-proxy". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:55:28.267194 master-0 kubenswrapper[27820]: I0320 08:55:28.267146 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8e6b860-8317-4fbe-9f43-41b0b707bc1b-secret-alertmanager-kube-rbac-proxy-web" (OuterVolumeSpecName: "secret-alertmanager-kube-rbac-proxy-web") pod "a8e6b860-8317-4fbe-9f43-41b0b707bc1b" (UID: "a8e6b860-8317-4fbe-9f43-41b0b707bc1b"). InnerVolumeSpecName "secret-alertmanager-kube-rbac-proxy-web". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:55:28.268451 master-0 kubenswrapper[27820]: I0320 08:55:28.268409 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8e6b860-8317-4fbe-9f43-41b0b707bc1b-secret-alertmanager-main-tls" (OuterVolumeSpecName: "secret-alertmanager-main-tls") pod "a8e6b860-8317-4fbe-9f43-41b0b707bc1b" (UID: "a8e6b860-8317-4fbe-9f43-41b0b707bc1b"). InnerVolumeSpecName "secret-alertmanager-main-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:55:28.268663 master-0 kubenswrapper[27820]: I0320 08:55:28.268625 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8e6b860-8317-4fbe-9f43-41b0b707bc1b-config-out" (OuterVolumeSpecName: "config-out") pod "a8e6b860-8317-4fbe-9f43-41b0b707bc1b" (UID: "a8e6b860-8317-4fbe-9f43-41b0b707bc1b"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 08:55:28.272437 master-0 kubenswrapper[27820]: I0320 08:55:28.272370 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8e6b860-8317-4fbe-9f43-41b0b707bc1b-kube-api-access-4zcn7" (OuterVolumeSpecName: "kube-api-access-4zcn7") pod "a8e6b860-8317-4fbe-9f43-41b0b707bc1b" (UID: "a8e6b860-8317-4fbe-9f43-41b0b707bc1b"). InnerVolumeSpecName "kube-api-access-4zcn7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:55:28.312043 master-0 kubenswrapper[27820]: I0320 08:55:28.311967 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8e6b860-8317-4fbe-9f43-41b0b707bc1b-web-config" (OuterVolumeSpecName: "web-config") pod "a8e6b860-8317-4fbe-9f43-41b0b707bc1b" (UID: "a8e6b860-8317-4fbe-9f43-41b0b707bc1b"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:55:28.364491 master-0 kubenswrapper[27820]: I0320 08:55:28.364321 27820 reconciler_common.go:293] "Volume detached for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/a8e6b860-8317-4fbe-9f43-41b0b707bc1b-metrics-client-ca\") on node \"master-0\" DevicePath \"\""
Mar 20 08:55:28.364491 master-0 kubenswrapper[27820]: I0320 08:55:28.364392 27820 reconciler_common.go:293] "Volume detached for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a8e6b860-8317-4fbe-9f43-41b0b707bc1b-alertmanager-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 20 08:55:28.364491 master-0 kubenswrapper[27820]: I0320 08:55:28.364406 27820 reconciler_common.go:293] "Volume detached for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/a8e6b860-8317-4fbe-9f43-41b0b707bc1b-secret-alertmanager-kube-rbac-proxy-web\") on node \"master-0\" DevicePath \"\""
Mar 20 08:55:28.364491 master-0 kubenswrapper[27820]: I0320 08:55:28.364417 27820
reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/a8e6b860-8317-4fbe-9f43-41b0b707bc1b-web-config\") on node \"master-0\" DevicePath \"\"" Mar 20 08:55:28.364491 master-0 kubenswrapper[27820]: I0320 08:55:28.364428 27820 reconciler_common.go:293] "Volume detached for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/a8e6b860-8317-4fbe-9f43-41b0b707bc1b-alertmanager-main-db\") on node \"master-0\" DevicePath \"\"" Mar 20 08:55:28.364491 master-0 kubenswrapper[27820]: I0320 08:55:28.364439 27820 reconciler_common.go:293] "Volume detached for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/a8e6b860-8317-4fbe-9f43-41b0b707bc1b-secret-alertmanager-kube-rbac-proxy\") on node \"master-0\" DevicePath \"\"" Mar 20 08:55:28.364491 master-0 kubenswrapper[27820]: I0320 08:55:28.364450 27820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zcn7\" (UniqueName: \"kubernetes.io/projected/a8e6b860-8317-4fbe-9f43-41b0b707bc1b-kube-api-access-4zcn7\") on node \"master-0\" DevicePath \"\"" Mar 20 08:55:28.364491 master-0 kubenswrapper[27820]: I0320 08:55:28.364460 27820 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/a8e6b860-8317-4fbe-9f43-41b0b707bc1b-tls-assets\") on node \"master-0\" DevicePath \"\"" Mar 20 08:55:28.364491 master-0 kubenswrapper[27820]: I0320 08:55:28.364470 27820 reconciler_common.go:293] "Volume detached for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/a8e6b860-8317-4fbe-9f43-41b0b707bc1b-secret-alertmanager-main-tls\") on node \"master-0\" DevicePath \"\"" Mar 20 08:55:28.364491 master-0 kubenswrapper[27820]: I0320 08:55:28.364480 27820 reconciler_common.go:293] "Volume detached for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: 
\"kubernetes.io/secret/a8e6b860-8317-4fbe-9f43-41b0b707bc1b-secret-alertmanager-kube-rbac-proxy-metric\") on node \"master-0\" DevicePath \"\"" Mar 20 08:55:28.364491 master-0 kubenswrapper[27820]: I0320 08:55:28.364490 27820 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/a8e6b860-8317-4fbe-9f43-41b0b707bc1b-config-volume\") on node \"master-0\" DevicePath \"\"" Mar 20 08:55:28.364491 master-0 kubenswrapper[27820]: I0320 08:55:28.364499 27820 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/a8e6b860-8317-4fbe-9f43-41b0b707bc1b-config-out\") on node \"master-0\" DevicePath \"\"" Mar 20 08:55:28.429322 master-0 kubenswrapper[27820]: I0320 08:55:28.429212 27820 generic.go:334] "Generic (PLEG): container finished" podID="a8e6b860-8317-4fbe-9f43-41b0b707bc1b" containerID="502f51f25cacc4bcbcfc17fee9557c02cfcf0ff1d7d5984915cc84a3bdb667c2" exitCode=0 Mar 20 08:55:28.429322 master-0 kubenswrapper[27820]: I0320 08:55:28.429297 27820 generic.go:334] "Generic (PLEG): container finished" podID="a8e6b860-8317-4fbe-9f43-41b0b707bc1b" containerID="58ba7614fb7173610e309e42b215d5decc16db8e5af63a0839976ce35bf26812" exitCode=0 Mar 20 08:55:28.429322 master-0 kubenswrapper[27820]: I0320 08:55:28.429313 27820 generic.go:334] "Generic (PLEG): container finished" podID="a8e6b860-8317-4fbe-9f43-41b0b707bc1b" containerID="83e3a486b966a6b512b103777ba8647b506d7b21e87c5d1b9cd40bec5a3158cc" exitCode=0 Mar 20 08:55:28.429322 master-0 kubenswrapper[27820]: I0320 08:55:28.429336 27820 generic.go:334] "Generic (PLEG): container finished" podID="a8e6b860-8317-4fbe-9f43-41b0b707bc1b" containerID="5330547895ea24633c382d8f8c61029ed24f36aad27fb00241113a7b24d3af6b" exitCode=0 Mar 20 08:55:28.429322 master-0 kubenswrapper[27820]: I0320 08:55:28.429348 27820 generic.go:334] "Generic (PLEG): container finished" podID="a8e6b860-8317-4fbe-9f43-41b0b707bc1b" 
containerID="be105d224117f2641ce3bdaf87aa57eb28f1828c7fe4ba883b627a05a862070b" exitCode=0 Mar 20 08:55:28.429322 master-0 kubenswrapper[27820]: I0320 08:55:28.429360 27820 generic.go:334] "Generic (PLEG): container finished" podID="a8e6b860-8317-4fbe-9f43-41b0b707bc1b" containerID="f4a46cc2965446b5f5bb1a0f866ba522498df24080bae2df04571bb0e1e6c668" exitCode=0 Mar 20 08:55:28.429968 master-0 kubenswrapper[27820]: I0320 08:55:28.429287 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"a8e6b860-8317-4fbe-9f43-41b0b707bc1b","Type":"ContainerDied","Data":"502f51f25cacc4bcbcfc17fee9557c02cfcf0ff1d7d5984915cc84a3bdb667c2"} Mar 20 08:55:28.429968 master-0 kubenswrapper[27820]: I0320 08:55:28.429410 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"a8e6b860-8317-4fbe-9f43-41b0b707bc1b","Type":"ContainerDied","Data":"58ba7614fb7173610e309e42b215d5decc16db8e5af63a0839976ce35bf26812"} Mar 20 08:55:28.429968 master-0 kubenswrapper[27820]: I0320 08:55:28.429430 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"a8e6b860-8317-4fbe-9f43-41b0b707bc1b","Type":"ContainerDied","Data":"83e3a486b966a6b512b103777ba8647b506d7b21e87c5d1b9cd40bec5a3158cc"} Mar 20 08:55:28.429968 master-0 kubenswrapper[27820]: I0320 08:55:28.429373 27820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Mar 20 08:55:28.429968 master-0 kubenswrapper[27820]: I0320 08:55:28.429460 27820 scope.go:117] "RemoveContainer" containerID="502f51f25cacc4bcbcfc17fee9557c02cfcf0ff1d7d5984915cc84a3bdb667c2" Mar 20 08:55:28.429968 master-0 kubenswrapper[27820]: I0320 08:55:28.429447 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"a8e6b860-8317-4fbe-9f43-41b0b707bc1b","Type":"ContainerDied","Data":"5330547895ea24633c382d8f8c61029ed24f36aad27fb00241113a7b24d3af6b"} Mar 20 08:55:28.429968 master-0 kubenswrapper[27820]: I0320 08:55:28.429589 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"a8e6b860-8317-4fbe-9f43-41b0b707bc1b","Type":"ContainerDied","Data":"be105d224117f2641ce3bdaf87aa57eb28f1828c7fe4ba883b627a05a862070b"} Mar 20 08:55:28.429968 master-0 kubenswrapper[27820]: I0320 08:55:28.429626 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"a8e6b860-8317-4fbe-9f43-41b0b707bc1b","Type":"ContainerDied","Data":"f4a46cc2965446b5f5bb1a0f866ba522498df24080bae2df04571bb0e1e6c668"} Mar 20 08:55:28.429968 master-0 kubenswrapper[27820]: I0320 08:55:28.429653 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"a8e6b860-8317-4fbe-9f43-41b0b707bc1b","Type":"ContainerDied","Data":"a51a6c6d6ea71c0cdf6b651033a8f4d9d4010cc213330dd350b3500fa8d0e302"} Mar 20 08:55:28.449605 master-0 kubenswrapper[27820]: I0320 08:55:28.449320 27820 scope.go:117] "RemoveContainer" containerID="58ba7614fb7173610e309e42b215d5decc16db8e5af63a0839976ce35bf26812" Mar 20 08:55:28.488836 master-0 kubenswrapper[27820]: I0320 08:55:28.488754 27820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Mar 20 08:55:28.490863 master-0 
kubenswrapper[27820]: I0320 08:55:28.490798 27820 scope.go:117] "RemoveContainer" containerID="83e3a486b966a6b512b103777ba8647b506d7b21e87c5d1b9cd40bec5a3158cc" Mar 20 08:55:28.493992 master-0 kubenswrapper[27820]: I0320 08:55:28.493936 27820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Mar 20 08:55:28.528640 master-0 kubenswrapper[27820]: I0320 08:55:28.525063 27820 scope.go:117] "RemoveContainer" containerID="5330547895ea24633c382d8f8c61029ed24f36aad27fb00241113a7b24d3af6b" Mar 20 08:55:28.528640 master-0 kubenswrapper[27820]: I0320 08:55:28.526589 27820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Mar 20 08:55:28.528640 master-0 kubenswrapper[27820]: E0320 08:55:28.526877 27820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8e6b860-8317-4fbe-9f43-41b0b707bc1b" containerName="alertmanager" Mar 20 08:55:28.528640 master-0 kubenswrapper[27820]: I0320 08:55:28.526892 27820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8e6b860-8317-4fbe-9f43-41b0b707bc1b" containerName="alertmanager" Mar 20 08:55:28.528640 master-0 kubenswrapper[27820]: E0320 08:55:28.526902 27820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8e6b860-8317-4fbe-9f43-41b0b707bc1b" containerName="kube-rbac-proxy-metric" Mar 20 08:55:28.528640 master-0 kubenswrapper[27820]: I0320 08:55:28.526911 27820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8e6b860-8317-4fbe-9f43-41b0b707bc1b" containerName="kube-rbac-proxy-metric" Mar 20 08:55:28.528640 master-0 kubenswrapper[27820]: E0320 08:55:28.526924 27820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8e6b860-8317-4fbe-9f43-41b0b707bc1b" containerName="prom-label-proxy" Mar 20 08:55:28.528640 master-0 kubenswrapper[27820]: I0320 08:55:28.526931 27820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8e6b860-8317-4fbe-9f43-41b0b707bc1b" containerName="prom-label-proxy" Mar 
20 08:55:28.528640 master-0 kubenswrapper[27820]: E0320 08:55:28.526944 27820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8e6b860-8317-4fbe-9f43-41b0b707bc1b" containerName="kube-rbac-proxy-web" Mar 20 08:55:28.528640 master-0 kubenswrapper[27820]: I0320 08:55:28.526951 27820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8e6b860-8317-4fbe-9f43-41b0b707bc1b" containerName="kube-rbac-proxy-web" Mar 20 08:55:28.528640 master-0 kubenswrapper[27820]: E0320 08:55:28.526968 27820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8e6b860-8317-4fbe-9f43-41b0b707bc1b" containerName="config-reloader" Mar 20 08:55:28.528640 master-0 kubenswrapper[27820]: I0320 08:55:28.526975 27820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8e6b860-8317-4fbe-9f43-41b0b707bc1b" containerName="config-reloader" Mar 20 08:55:28.528640 master-0 kubenswrapper[27820]: E0320 08:55:28.526990 27820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8e6b860-8317-4fbe-9f43-41b0b707bc1b" containerName="kube-rbac-proxy" Mar 20 08:55:28.528640 master-0 kubenswrapper[27820]: I0320 08:55:28.526997 27820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8e6b860-8317-4fbe-9f43-41b0b707bc1b" containerName="kube-rbac-proxy" Mar 20 08:55:28.528640 master-0 kubenswrapper[27820]: E0320 08:55:28.527015 27820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8e6b860-8317-4fbe-9f43-41b0b707bc1b" containerName="init-config-reloader" Mar 20 08:55:28.528640 master-0 kubenswrapper[27820]: I0320 08:55:28.527023 27820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8e6b860-8317-4fbe-9f43-41b0b707bc1b" containerName="init-config-reloader" Mar 20 08:55:28.528640 master-0 kubenswrapper[27820]: I0320 08:55:28.527161 27820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8e6b860-8317-4fbe-9f43-41b0b707bc1b" containerName="alertmanager" Mar 20 08:55:28.528640 master-0 kubenswrapper[27820]: I0320 
08:55:28.527221 27820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8e6b860-8317-4fbe-9f43-41b0b707bc1b" containerName="config-reloader" Mar 20 08:55:28.528640 master-0 kubenswrapper[27820]: I0320 08:55:28.527233 27820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8e6b860-8317-4fbe-9f43-41b0b707bc1b" containerName="prom-label-proxy" Mar 20 08:55:28.528640 master-0 kubenswrapper[27820]: I0320 08:55:28.527248 27820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8e6b860-8317-4fbe-9f43-41b0b707bc1b" containerName="kube-rbac-proxy-web" Mar 20 08:55:28.528640 master-0 kubenswrapper[27820]: I0320 08:55:28.527274 27820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8e6b860-8317-4fbe-9f43-41b0b707bc1b" containerName="kube-rbac-proxy" Mar 20 08:55:28.528640 master-0 kubenswrapper[27820]: I0320 08:55:28.527288 27820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8e6b860-8317-4fbe-9f43-41b0b707bc1b" containerName="kube-rbac-proxy-metric" Mar 20 08:55:28.530034 master-0 kubenswrapper[27820]: I0320 08:55:28.529724 27820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Mar 20 08:55:28.537343 master-0 kubenswrapper[27820]: I0320 08:55:28.531529 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Mar 20 08:55:28.537343 master-0 kubenswrapper[27820]: I0320 08:55:28.531630 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Mar 20 08:55:28.537343 master-0 kubenswrapper[27820]: I0320 08:55:28.531954 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Mar 20 08:55:28.537343 master-0 kubenswrapper[27820]: I0320 08:55:28.531973 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Mar 20 08:55:28.537343 master-0 kubenswrapper[27820]: I0320 08:55:28.532512 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Mar 20 08:55:28.537343 master-0 kubenswrapper[27820]: I0320 08:55:28.532641 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Mar 20 08:55:28.541554 master-0 kubenswrapper[27820]: I0320 08:55:28.541494 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Mar 20 08:55:28.548789 master-0 kubenswrapper[27820]: I0320 08:55:28.548719 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Mar 20 08:55:28.575088 master-0 kubenswrapper[27820]: I0320 08:55:28.560404 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Mar 20 08:55:28.575088 master-0 kubenswrapper[27820]: I0320 08:55:28.571479 27820 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vs55p\" (UniqueName: \"kubernetes.io/projected/dc81c5cb-439d-4c3d-8a34-70e9035a846c-kube-api-access-vs55p\") pod \"alertmanager-main-0\" (UID: \"dc81c5cb-439d-4c3d-8a34-70e9035a846c\") " pod="openshift-monitoring/alertmanager-main-0" Mar 20 08:55:28.575088 master-0 kubenswrapper[27820]: I0320 08:55:28.571554 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/dc81c5cb-439d-4c3d-8a34-70e9035a846c-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"dc81c5cb-439d-4c3d-8a34-70e9035a846c\") " pod="openshift-monitoring/alertmanager-main-0" Mar 20 08:55:28.575088 master-0 kubenswrapper[27820]: I0320 08:55:28.571643 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/dc81c5cb-439d-4c3d-8a34-70e9035a846c-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"dc81c5cb-439d-4c3d-8a34-70e9035a846c\") " pod="openshift-monitoring/alertmanager-main-0" Mar 20 08:55:28.575088 master-0 kubenswrapper[27820]: I0320 08:55:28.571679 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/dc81c5cb-439d-4c3d-8a34-70e9035a846c-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"dc81c5cb-439d-4c3d-8a34-70e9035a846c\") " pod="openshift-monitoring/alertmanager-main-0" Mar 20 08:55:28.575088 master-0 kubenswrapper[27820]: I0320 08:55:28.571728 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/dc81c5cb-439d-4c3d-8a34-70e9035a846c-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: 
\"dc81c5cb-439d-4c3d-8a34-70e9035a846c\") " pod="openshift-monitoring/alertmanager-main-0" Mar 20 08:55:28.575088 master-0 kubenswrapper[27820]: I0320 08:55:28.571757 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/dc81c5cb-439d-4c3d-8a34-70e9035a846c-config-volume\") pod \"alertmanager-main-0\" (UID: \"dc81c5cb-439d-4c3d-8a34-70e9035a846c\") " pod="openshift-monitoring/alertmanager-main-0" Mar 20 08:55:28.575088 master-0 kubenswrapper[27820]: I0320 08:55:28.571789 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/dc81c5cb-439d-4c3d-8a34-70e9035a846c-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"dc81c5cb-439d-4c3d-8a34-70e9035a846c\") " pod="openshift-monitoring/alertmanager-main-0" Mar 20 08:55:28.575088 master-0 kubenswrapper[27820]: I0320 08:55:28.571980 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/dc81c5cb-439d-4c3d-8a34-70e9035a846c-tls-assets\") pod \"alertmanager-main-0\" (UID: \"dc81c5cb-439d-4c3d-8a34-70e9035a846c\") " pod="openshift-monitoring/alertmanager-main-0" Mar 20 08:55:28.578198 master-0 kubenswrapper[27820]: I0320 08:55:28.576647 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/dc81c5cb-439d-4c3d-8a34-70e9035a846c-web-config\") pod \"alertmanager-main-0\" (UID: \"dc81c5cb-439d-4c3d-8a34-70e9035a846c\") " pod="openshift-monitoring/alertmanager-main-0" Mar 20 08:55:28.578198 master-0 kubenswrapper[27820]: I0320 08:55:28.576759 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" 
(UniqueName: \"kubernetes.io/secret/dc81c5cb-439d-4c3d-8a34-70e9035a846c-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"dc81c5cb-439d-4c3d-8a34-70e9035a846c\") " pod="openshift-monitoring/alertmanager-main-0" Mar 20 08:55:28.578198 master-0 kubenswrapper[27820]: I0320 08:55:28.576801 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/dc81c5cb-439d-4c3d-8a34-70e9035a846c-config-out\") pod \"alertmanager-main-0\" (UID: \"dc81c5cb-439d-4c3d-8a34-70e9035a846c\") " pod="openshift-monitoring/alertmanager-main-0" Mar 20 08:55:28.578198 master-0 kubenswrapper[27820]: I0320 08:55:28.576874 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dc81c5cb-439d-4c3d-8a34-70e9035a846c-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"dc81c5cb-439d-4c3d-8a34-70e9035a846c\") " pod="openshift-monitoring/alertmanager-main-0" Mar 20 08:55:28.578198 master-0 kubenswrapper[27820]: I0320 08:55:28.577766 27820 scope.go:117] "RemoveContainer" containerID="be105d224117f2641ce3bdaf87aa57eb28f1828c7fe4ba883b627a05a862070b" Mar 20 08:55:28.592499 master-0 kubenswrapper[27820]: I0320 08:55:28.592458 27820 scope.go:117] "RemoveContainer" containerID="f4a46cc2965446b5f5bb1a0f866ba522498df24080bae2df04571bb0e1e6c668" Mar 20 08:55:28.619117 master-0 kubenswrapper[27820]: I0320 08:55:28.619048 27820 scope.go:117] "RemoveContainer" containerID="29fb44762af9cdaf889203cfbaefadf72c5802140450e668a3478bdf1bfb1bd2" Mar 20 08:55:28.634580 master-0 kubenswrapper[27820]: I0320 08:55:28.634523 27820 scope.go:117] "RemoveContainer" containerID="502f51f25cacc4bcbcfc17fee9557c02cfcf0ff1d7d5984915cc84a3bdb667c2" Mar 20 08:55:28.635347 master-0 kubenswrapper[27820]: E0320 08:55:28.635299 27820 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"502f51f25cacc4bcbcfc17fee9557c02cfcf0ff1d7d5984915cc84a3bdb667c2\": container with ID starting with 502f51f25cacc4bcbcfc17fee9557c02cfcf0ff1d7d5984915cc84a3bdb667c2 not found: ID does not exist" containerID="502f51f25cacc4bcbcfc17fee9557c02cfcf0ff1d7d5984915cc84a3bdb667c2" Mar 20 08:55:28.635423 master-0 kubenswrapper[27820]: I0320 08:55:28.635344 27820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"502f51f25cacc4bcbcfc17fee9557c02cfcf0ff1d7d5984915cc84a3bdb667c2"} err="failed to get container status \"502f51f25cacc4bcbcfc17fee9557c02cfcf0ff1d7d5984915cc84a3bdb667c2\": rpc error: code = NotFound desc = could not find container \"502f51f25cacc4bcbcfc17fee9557c02cfcf0ff1d7d5984915cc84a3bdb667c2\": container with ID starting with 502f51f25cacc4bcbcfc17fee9557c02cfcf0ff1d7d5984915cc84a3bdb667c2 not found: ID does not exist" Mar 20 08:55:28.635423 master-0 kubenswrapper[27820]: I0320 08:55:28.635370 27820 scope.go:117] "RemoveContainer" containerID="58ba7614fb7173610e309e42b215d5decc16db8e5af63a0839976ce35bf26812" Mar 20 08:55:28.635936 master-0 kubenswrapper[27820]: E0320 08:55:28.635890 27820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58ba7614fb7173610e309e42b215d5decc16db8e5af63a0839976ce35bf26812\": container with ID starting with 58ba7614fb7173610e309e42b215d5decc16db8e5af63a0839976ce35bf26812 not found: ID does not exist" containerID="58ba7614fb7173610e309e42b215d5decc16db8e5af63a0839976ce35bf26812" Mar 20 08:55:28.635985 master-0 kubenswrapper[27820]: I0320 08:55:28.635935 27820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58ba7614fb7173610e309e42b215d5decc16db8e5af63a0839976ce35bf26812"} err="failed to get container status \"58ba7614fb7173610e309e42b215d5decc16db8e5af63a0839976ce35bf26812\": rpc error: code = 
NotFound desc = could not find container \"58ba7614fb7173610e309e42b215d5decc16db8e5af63a0839976ce35bf26812\": container with ID starting with 58ba7614fb7173610e309e42b215d5decc16db8e5af63a0839976ce35bf26812 not found: ID does not exist" Mar 20 08:55:28.635985 master-0 kubenswrapper[27820]: I0320 08:55:28.635964 27820 scope.go:117] "RemoveContainer" containerID="83e3a486b966a6b512b103777ba8647b506d7b21e87c5d1b9cd40bec5a3158cc" Mar 20 08:55:28.636327 master-0 kubenswrapper[27820]: E0320 08:55:28.636257 27820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83e3a486b966a6b512b103777ba8647b506d7b21e87c5d1b9cd40bec5a3158cc\": container with ID starting with 83e3a486b966a6b512b103777ba8647b506d7b21e87c5d1b9cd40bec5a3158cc not found: ID does not exist" containerID="83e3a486b966a6b512b103777ba8647b506d7b21e87c5d1b9cd40bec5a3158cc" Mar 20 08:55:28.636382 master-0 kubenswrapper[27820]: I0320 08:55:28.636335 27820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83e3a486b966a6b512b103777ba8647b506d7b21e87c5d1b9cd40bec5a3158cc"} err="failed to get container status \"83e3a486b966a6b512b103777ba8647b506d7b21e87c5d1b9cd40bec5a3158cc\": rpc error: code = NotFound desc = could not find container \"83e3a486b966a6b512b103777ba8647b506d7b21e87c5d1b9cd40bec5a3158cc\": container with ID starting with 83e3a486b966a6b512b103777ba8647b506d7b21e87c5d1b9cd40bec5a3158cc not found: ID does not exist" Mar 20 08:55:28.636382 master-0 kubenswrapper[27820]: I0320 08:55:28.636374 27820 scope.go:117] "RemoveContainer" containerID="5330547895ea24633c382d8f8c61029ed24f36aad27fb00241113a7b24d3af6b" Mar 20 08:55:28.636798 master-0 kubenswrapper[27820]: E0320 08:55:28.636723 27820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5330547895ea24633c382d8f8c61029ed24f36aad27fb00241113a7b24d3af6b\": container with 
ID starting with 5330547895ea24633c382d8f8c61029ed24f36aad27fb00241113a7b24d3af6b not found: ID does not exist" containerID="5330547895ea24633c382d8f8c61029ed24f36aad27fb00241113a7b24d3af6b" Mar 20 08:55:28.636844 master-0 kubenswrapper[27820]: I0320 08:55:28.636793 27820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5330547895ea24633c382d8f8c61029ed24f36aad27fb00241113a7b24d3af6b"} err="failed to get container status \"5330547895ea24633c382d8f8c61029ed24f36aad27fb00241113a7b24d3af6b\": rpc error: code = NotFound desc = could not find container \"5330547895ea24633c382d8f8c61029ed24f36aad27fb00241113a7b24d3af6b\": container with ID starting with 5330547895ea24633c382d8f8c61029ed24f36aad27fb00241113a7b24d3af6b not found: ID does not exist" Mar 20 08:55:28.636879 master-0 kubenswrapper[27820]: I0320 08:55:28.636850 27820 scope.go:117] "RemoveContainer" containerID="be105d224117f2641ce3bdaf87aa57eb28f1828c7fe4ba883b627a05a862070b" Mar 20 08:55:28.637207 master-0 kubenswrapper[27820]: E0320 08:55:28.637163 27820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be105d224117f2641ce3bdaf87aa57eb28f1828c7fe4ba883b627a05a862070b\": container with ID starting with be105d224117f2641ce3bdaf87aa57eb28f1828c7fe4ba883b627a05a862070b not found: ID does not exist" containerID="be105d224117f2641ce3bdaf87aa57eb28f1828c7fe4ba883b627a05a862070b" Mar 20 08:55:28.637288 master-0 kubenswrapper[27820]: I0320 08:55:28.637199 27820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be105d224117f2641ce3bdaf87aa57eb28f1828c7fe4ba883b627a05a862070b"} err="failed to get container status \"be105d224117f2641ce3bdaf87aa57eb28f1828c7fe4ba883b627a05a862070b\": rpc error: code = NotFound desc = could not find container \"be105d224117f2641ce3bdaf87aa57eb28f1828c7fe4ba883b627a05a862070b\": container with ID starting with 
be105d224117f2641ce3bdaf87aa57eb28f1828c7fe4ba883b627a05a862070b not found: ID does not exist"
Mar 20 08:55:28.637288 master-0 kubenswrapper[27820]: I0320 08:55:28.637221 27820 scope.go:117] "RemoveContainer" containerID="f4a46cc2965446b5f5bb1a0f866ba522498df24080bae2df04571bb0e1e6c668"
Mar 20 08:55:28.637731 master-0 kubenswrapper[27820]: E0320 08:55:28.637694 27820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4a46cc2965446b5f5bb1a0f866ba522498df24080bae2df04571bb0e1e6c668\": container with ID starting with f4a46cc2965446b5f5bb1a0f866ba522498df24080bae2df04571bb0e1e6c668 not found: ID does not exist" containerID="f4a46cc2965446b5f5bb1a0f866ba522498df24080bae2df04571bb0e1e6c668"
Mar 20 08:55:28.637781 master-0 kubenswrapper[27820]: I0320 08:55:28.637725 27820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4a46cc2965446b5f5bb1a0f866ba522498df24080bae2df04571bb0e1e6c668"} err="failed to get container status \"f4a46cc2965446b5f5bb1a0f866ba522498df24080bae2df04571bb0e1e6c668\": rpc error: code = NotFound desc = could not find container \"f4a46cc2965446b5f5bb1a0f866ba522498df24080bae2df04571bb0e1e6c668\": container with ID starting with f4a46cc2965446b5f5bb1a0f866ba522498df24080bae2df04571bb0e1e6c668 not found: ID does not exist"
Mar 20 08:55:28.637781 master-0 kubenswrapper[27820]: I0320 08:55:28.637744 27820 scope.go:117] "RemoveContainer" containerID="29fb44762af9cdaf889203cfbaefadf72c5802140450e668a3478bdf1bfb1bd2"
Mar 20 08:55:28.638104 master-0 kubenswrapper[27820]: E0320 08:55:28.638067 27820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29fb44762af9cdaf889203cfbaefadf72c5802140450e668a3478bdf1bfb1bd2\": container with ID starting with 29fb44762af9cdaf889203cfbaefadf72c5802140450e668a3478bdf1bfb1bd2 not found: ID does not exist" containerID="29fb44762af9cdaf889203cfbaefadf72c5802140450e668a3478bdf1bfb1bd2"
Mar 20 08:55:28.638156 master-0 kubenswrapper[27820]: I0320 08:55:28.638101 27820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29fb44762af9cdaf889203cfbaefadf72c5802140450e668a3478bdf1bfb1bd2"} err="failed to get container status \"29fb44762af9cdaf889203cfbaefadf72c5802140450e668a3478bdf1bfb1bd2\": rpc error: code = NotFound desc = could not find container \"29fb44762af9cdaf889203cfbaefadf72c5802140450e668a3478bdf1bfb1bd2\": container with ID starting with 29fb44762af9cdaf889203cfbaefadf72c5802140450e668a3478bdf1bfb1bd2 not found: ID does not exist"
Mar 20 08:55:28.638156 master-0 kubenswrapper[27820]: I0320 08:55:28.638122 27820 scope.go:117] "RemoveContainer" containerID="502f51f25cacc4bcbcfc17fee9557c02cfcf0ff1d7d5984915cc84a3bdb667c2"
Mar 20 08:55:28.638535 master-0 kubenswrapper[27820]: I0320 08:55:28.638500 27820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"502f51f25cacc4bcbcfc17fee9557c02cfcf0ff1d7d5984915cc84a3bdb667c2"} err="failed to get container status \"502f51f25cacc4bcbcfc17fee9557c02cfcf0ff1d7d5984915cc84a3bdb667c2\": rpc error: code = NotFound desc = could not find container \"502f51f25cacc4bcbcfc17fee9557c02cfcf0ff1d7d5984915cc84a3bdb667c2\": container with ID starting with 502f51f25cacc4bcbcfc17fee9557c02cfcf0ff1d7d5984915cc84a3bdb667c2 not found: ID does not exist"
Mar 20 08:55:28.638535 master-0 kubenswrapper[27820]: I0320 08:55:28.638527 27820 scope.go:117] "RemoveContainer" containerID="58ba7614fb7173610e309e42b215d5decc16db8e5af63a0839976ce35bf26812"
Mar 20 08:55:28.638834 master-0 kubenswrapper[27820]: I0320 08:55:28.638796 27820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58ba7614fb7173610e309e42b215d5decc16db8e5af63a0839976ce35bf26812"} err="failed to get container status \"58ba7614fb7173610e309e42b215d5decc16db8e5af63a0839976ce35bf26812\": rpc error: code = NotFound desc = could not find container \"58ba7614fb7173610e309e42b215d5decc16db8e5af63a0839976ce35bf26812\": container with ID starting with 58ba7614fb7173610e309e42b215d5decc16db8e5af63a0839976ce35bf26812 not found: ID does not exist"
Mar 20 08:55:28.638834 master-0 kubenswrapper[27820]: I0320 08:55:28.638828 27820 scope.go:117] "RemoveContainer" containerID="83e3a486b966a6b512b103777ba8647b506d7b21e87c5d1b9cd40bec5a3158cc"
Mar 20 08:55:28.639136 master-0 kubenswrapper[27820]: I0320 08:55:28.639094 27820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83e3a486b966a6b512b103777ba8647b506d7b21e87c5d1b9cd40bec5a3158cc"} err="failed to get container status \"83e3a486b966a6b512b103777ba8647b506d7b21e87c5d1b9cd40bec5a3158cc\": rpc error: code = NotFound desc = could not find container \"83e3a486b966a6b512b103777ba8647b506d7b21e87c5d1b9cd40bec5a3158cc\": container with ID starting with 83e3a486b966a6b512b103777ba8647b506d7b21e87c5d1b9cd40bec5a3158cc not found: ID does not exist"
Mar 20 08:55:28.639136 master-0 kubenswrapper[27820]: I0320 08:55:28.639130 27820 scope.go:117] "RemoveContainer" containerID="5330547895ea24633c382d8f8c61029ed24f36aad27fb00241113a7b24d3af6b"
Mar 20 08:55:28.639388 master-0 kubenswrapper[27820]: I0320 08:55:28.639359 27820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5330547895ea24633c382d8f8c61029ed24f36aad27fb00241113a7b24d3af6b"} err="failed to get container status \"5330547895ea24633c382d8f8c61029ed24f36aad27fb00241113a7b24d3af6b\": rpc error: code = NotFound desc = could not find container \"5330547895ea24633c382d8f8c61029ed24f36aad27fb00241113a7b24d3af6b\": container with ID starting with 5330547895ea24633c382d8f8c61029ed24f36aad27fb00241113a7b24d3af6b not found: ID does not exist"
Mar 20 08:55:28.639388 master-0 kubenswrapper[27820]: I0320 08:55:28.639383 27820 scope.go:117] "RemoveContainer" containerID="be105d224117f2641ce3bdaf87aa57eb28f1828c7fe4ba883b627a05a862070b"
Mar 20 08:55:28.639628 master-0 kubenswrapper[27820]: I0320 08:55:28.639598 27820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be105d224117f2641ce3bdaf87aa57eb28f1828c7fe4ba883b627a05a862070b"} err="failed to get container status \"be105d224117f2641ce3bdaf87aa57eb28f1828c7fe4ba883b627a05a862070b\": rpc error: code = NotFound desc = could not find container \"be105d224117f2641ce3bdaf87aa57eb28f1828c7fe4ba883b627a05a862070b\": container with ID starting with be105d224117f2641ce3bdaf87aa57eb28f1828c7fe4ba883b627a05a862070b not found: ID does not exist"
Mar 20 08:55:28.639628 master-0 kubenswrapper[27820]: I0320 08:55:28.639620 27820 scope.go:117] "RemoveContainer" containerID="f4a46cc2965446b5f5bb1a0f866ba522498df24080bae2df04571bb0e1e6c668"
Mar 20 08:55:28.639846 master-0 kubenswrapper[27820]: I0320 08:55:28.639810 27820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4a46cc2965446b5f5bb1a0f866ba522498df24080bae2df04571bb0e1e6c668"} err="failed to get container status \"f4a46cc2965446b5f5bb1a0f866ba522498df24080bae2df04571bb0e1e6c668\": rpc error: code = NotFound desc = could not find container \"f4a46cc2965446b5f5bb1a0f866ba522498df24080bae2df04571bb0e1e6c668\": container with ID starting with f4a46cc2965446b5f5bb1a0f866ba522498df24080bae2df04571bb0e1e6c668 not found: ID does not exist"
Mar 20 08:55:28.639846 master-0 kubenswrapper[27820]: I0320 08:55:28.639839 27820 scope.go:117] "RemoveContainer" containerID="29fb44762af9cdaf889203cfbaefadf72c5802140450e668a3478bdf1bfb1bd2"
Mar 20 08:55:28.640041 master-0 kubenswrapper[27820]: I0320 08:55:28.640013 27820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29fb44762af9cdaf889203cfbaefadf72c5802140450e668a3478bdf1bfb1bd2"} err="failed to get container status \"29fb44762af9cdaf889203cfbaefadf72c5802140450e668a3478bdf1bfb1bd2\": rpc error: code = NotFound desc = could not find container \"29fb44762af9cdaf889203cfbaefadf72c5802140450e668a3478bdf1bfb1bd2\": container with ID starting with 29fb44762af9cdaf889203cfbaefadf72c5802140450e668a3478bdf1bfb1bd2 not found: ID does not exist"
Mar 20 08:55:28.640041 master-0 kubenswrapper[27820]: I0320 08:55:28.640032 27820 scope.go:117] "RemoveContainer" containerID="502f51f25cacc4bcbcfc17fee9557c02cfcf0ff1d7d5984915cc84a3bdb667c2"
Mar 20 08:55:28.640290 master-0 kubenswrapper[27820]: I0320 08:55:28.640211 27820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"502f51f25cacc4bcbcfc17fee9557c02cfcf0ff1d7d5984915cc84a3bdb667c2"} err="failed to get container status \"502f51f25cacc4bcbcfc17fee9557c02cfcf0ff1d7d5984915cc84a3bdb667c2\": rpc error: code = NotFound desc = could not find container \"502f51f25cacc4bcbcfc17fee9557c02cfcf0ff1d7d5984915cc84a3bdb667c2\": container with ID starting with 502f51f25cacc4bcbcfc17fee9557c02cfcf0ff1d7d5984915cc84a3bdb667c2 not found: ID does not exist"
Mar 20 08:55:28.640325 master-0 kubenswrapper[27820]: I0320 08:55:28.640294 27820 scope.go:117] "RemoveContainer" containerID="58ba7614fb7173610e309e42b215d5decc16db8e5af63a0839976ce35bf26812"
Mar 20 08:55:28.640599 master-0 kubenswrapper[27820]: I0320 08:55:28.640571 27820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58ba7614fb7173610e309e42b215d5decc16db8e5af63a0839976ce35bf26812"} err="failed to get container status \"58ba7614fb7173610e309e42b215d5decc16db8e5af63a0839976ce35bf26812\": rpc error: code = NotFound desc = could not find container \"58ba7614fb7173610e309e42b215d5decc16db8e5af63a0839976ce35bf26812\": container with ID starting with 58ba7614fb7173610e309e42b215d5decc16db8e5af63a0839976ce35bf26812 not found: ID does not exist"
Mar 20 08:55:28.640599 master-0 kubenswrapper[27820]: I0320 08:55:28.640589 27820 scope.go:117] "RemoveContainer" containerID="83e3a486b966a6b512b103777ba8647b506d7b21e87c5d1b9cd40bec5a3158cc"
Mar 20 08:55:28.640921 master-0 kubenswrapper[27820]: I0320 08:55:28.640873 27820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83e3a486b966a6b512b103777ba8647b506d7b21e87c5d1b9cd40bec5a3158cc"} err="failed to get container status \"83e3a486b966a6b512b103777ba8647b506d7b21e87c5d1b9cd40bec5a3158cc\": rpc error: code = NotFound desc = could not find container \"83e3a486b966a6b512b103777ba8647b506d7b21e87c5d1b9cd40bec5a3158cc\": container with ID starting with 83e3a486b966a6b512b103777ba8647b506d7b21e87c5d1b9cd40bec5a3158cc not found: ID does not exist"
Mar 20 08:55:28.640967 master-0 kubenswrapper[27820]: I0320 08:55:28.640917 27820 scope.go:117] "RemoveContainer" containerID="5330547895ea24633c382d8f8c61029ed24f36aad27fb00241113a7b24d3af6b"
Mar 20 08:55:28.641285 master-0 kubenswrapper[27820]: I0320 08:55:28.641239 27820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5330547895ea24633c382d8f8c61029ed24f36aad27fb00241113a7b24d3af6b"} err="failed to get container status \"5330547895ea24633c382d8f8c61029ed24f36aad27fb00241113a7b24d3af6b\": rpc error: code = NotFound desc = could not find container \"5330547895ea24633c382d8f8c61029ed24f36aad27fb00241113a7b24d3af6b\": container with ID starting with 5330547895ea24633c382d8f8c61029ed24f36aad27fb00241113a7b24d3af6b not found: ID does not exist"
Mar 20 08:55:28.641285 master-0 kubenswrapper[27820]: I0320 08:55:28.641277 27820 scope.go:117] "RemoveContainer" containerID="be105d224117f2641ce3bdaf87aa57eb28f1828c7fe4ba883b627a05a862070b"
Mar 20 08:55:28.641580 master-0 kubenswrapper[27820]: I0320 08:55:28.641543 27820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be105d224117f2641ce3bdaf87aa57eb28f1828c7fe4ba883b627a05a862070b"} err="failed to get container status \"be105d224117f2641ce3bdaf87aa57eb28f1828c7fe4ba883b627a05a862070b\": rpc error: code = NotFound desc = could not find container \"be105d224117f2641ce3bdaf87aa57eb28f1828c7fe4ba883b627a05a862070b\": container with ID starting with be105d224117f2641ce3bdaf87aa57eb28f1828c7fe4ba883b627a05a862070b not found: ID does not exist"
Mar 20 08:55:28.641626 master-0 kubenswrapper[27820]: I0320 08:55:28.641582 27820 scope.go:117] "RemoveContainer" containerID="f4a46cc2965446b5f5bb1a0f866ba522498df24080bae2df04571bb0e1e6c668"
Mar 20 08:55:28.641875 master-0 kubenswrapper[27820]: I0320 08:55:28.641841 27820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4a46cc2965446b5f5bb1a0f866ba522498df24080bae2df04571bb0e1e6c668"} err="failed to get container status \"f4a46cc2965446b5f5bb1a0f866ba522498df24080bae2df04571bb0e1e6c668\": rpc error: code = NotFound desc = could not find container \"f4a46cc2965446b5f5bb1a0f866ba522498df24080bae2df04571bb0e1e6c668\": container with ID starting with f4a46cc2965446b5f5bb1a0f866ba522498df24080bae2df04571bb0e1e6c668 not found: ID does not exist"
Mar 20 08:55:28.641875 master-0 kubenswrapper[27820]: I0320 08:55:28.641868 27820 scope.go:117] "RemoveContainer" containerID="29fb44762af9cdaf889203cfbaefadf72c5802140450e668a3478bdf1bfb1bd2"
Mar 20 08:55:28.642121 master-0 kubenswrapper[27820]: I0320 08:55:28.642088 27820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29fb44762af9cdaf889203cfbaefadf72c5802140450e668a3478bdf1bfb1bd2"} err="failed to get container status \"29fb44762af9cdaf889203cfbaefadf72c5802140450e668a3478bdf1bfb1bd2\": rpc error: code = NotFound desc = could not find container \"29fb44762af9cdaf889203cfbaefadf72c5802140450e668a3478bdf1bfb1bd2\": container with ID starting with 29fb44762af9cdaf889203cfbaefadf72c5802140450e668a3478bdf1bfb1bd2 not found: ID does not exist"
Mar 20 08:55:28.642121 master-0 kubenswrapper[27820]: I0320 08:55:28.642115 27820 scope.go:117] "RemoveContainer" containerID="502f51f25cacc4bcbcfc17fee9557c02cfcf0ff1d7d5984915cc84a3bdb667c2"
Mar 20 08:55:28.642383 master-0 kubenswrapper[27820]: I0320 08:55:28.642355 27820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"502f51f25cacc4bcbcfc17fee9557c02cfcf0ff1d7d5984915cc84a3bdb667c2"} err="failed to get container status \"502f51f25cacc4bcbcfc17fee9557c02cfcf0ff1d7d5984915cc84a3bdb667c2\": rpc error: code = NotFound desc = could not find container \"502f51f25cacc4bcbcfc17fee9557c02cfcf0ff1d7d5984915cc84a3bdb667c2\": container with ID starting with 502f51f25cacc4bcbcfc17fee9557c02cfcf0ff1d7d5984915cc84a3bdb667c2 not found: ID does not exist"
Mar 20 08:55:28.642383 master-0 kubenswrapper[27820]: I0320 08:55:28.642377 27820 scope.go:117] "RemoveContainer" containerID="58ba7614fb7173610e309e42b215d5decc16db8e5af63a0839976ce35bf26812"
Mar 20 08:55:28.642613 master-0 kubenswrapper[27820]: I0320 08:55:28.642589 27820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58ba7614fb7173610e309e42b215d5decc16db8e5af63a0839976ce35bf26812"} err="failed to get container status \"58ba7614fb7173610e309e42b215d5decc16db8e5af63a0839976ce35bf26812\": rpc error: code = NotFound desc = could not find container \"58ba7614fb7173610e309e42b215d5decc16db8e5af63a0839976ce35bf26812\": container with ID starting with 58ba7614fb7173610e309e42b215d5decc16db8e5af63a0839976ce35bf26812 not found: ID does not exist"
Mar 20 08:55:28.642651 master-0 kubenswrapper[27820]: I0320 08:55:28.642612 27820 scope.go:117] "RemoveContainer" containerID="83e3a486b966a6b512b103777ba8647b506d7b21e87c5d1b9cd40bec5a3158cc"
Mar 20 08:55:28.646140 master-0 kubenswrapper[27820]: I0320 08:55:28.646081 27820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83e3a486b966a6b512b103777ba8647b506d7b21e87c5d1b9cd40bec5a3158cc"} err="failed to get container status \"83e3a486b966a6b512b103777ba8647b506d7b21e87c5d1b9cd40bec5a3158cc\": rpc error: code = NotFound desc = could not find container \"83e3a486b966a6b512b103777ba8647b506d7b21e87c5d1b9cd40bec5a3158cc\": container with ID starting with 83e3a486b966a6b512b103777ba8647b506d7b21e87c5d1b9cd40bec5a3158cc not found: ID does not exist"
Mar 20 08:55:28.646192 master-0 kubenswrapper[27820]: I0320 08:55:28.646139 27820 scope.go:117] "RemoveContainer" containerID="5330547895ea24633c382d8f8c61029ed24f36aad27fb00241113a7b24d3af6b"
Mar 20 08:55:28.646502 master-0 kubenswrapper[27820]: I0320 08:55:28.646465 27820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5330547895ea24633c382d8f8c61029ed24f36aad27fb00241113a7b24d3af6b"} err="failed to get container status \"5330547895ea24633c382d8f8c61029ed24f36aad27fb00241113a7b24d3af6b\": rpc error: code = NotFound desc = could not find container \"5330547895ea24633c382d8f8c61029ed24f36aad27fb00241113a7b24d3af6b\": container with ID starting with 5330547895ea24633c382d8f8c61029ed24f36aad27fb00241113a7b24d3af6b not found: ID does not exist"
Mar 20 08:55:28.646502 master-0 kubenswrapper[27820]: I0320 08:55:28.646496 27820 scope.go:117] "RemoveContainer" containerID="be105d224117f2641ce3bdaf87aa57eb28f1828c7fe4ba883b627a05a862070b"
Mar 20 08:55:28.646739 master-0 kubenswrapper[27820]: I0320 08:55:28.646706 27820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be105d224117f2641ce3bdaf87aa57eb28f1828c7fe4ba883b627a05a862070b"} err="failed to get container status \"be105d224117f2641ce3bdaf87aa57eb28f1828c7fe4ba883b627a05a862070b\": rpc error: code = NotFound desc = could not find container \"be105d224117f2641ce3bdaf87aa57eb28f1828c7fe4ba883b627a05a862070b\": container with ID starting with be105d224117f2641ce3bdaf87aa57eb28f1828c7fe4ba883b627a05a862070b not found: ID does not exist"
Mar 20 08:55:28.646777 master-0 kubenswrapper[27820]: I0320 08:55:28.646746 27820 scope.go:117] "RemoveContainer" containerID="f4a46cc2965446b5f5bb1a0f866ba522498df24080bae2df04571bb0e1e6c668"
Mar 20 08:55:28.647168 master-0 kubenswrapper[27820]: I0320 08:55:28.647113 27820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4a46cc2965446b5f5bb1a0f866ba522498df24080bae2df04571bb0e1e6c668"} err="failed to get container status \"f4a46cc2965446b5f5bb1a0f866ba522498df24080bae2df04571bb0e1e6c668\": rpc error: code = NotFound desc = could not find container \"f4a46cc2965446b5f5bb1a0f866ba522498df24080bae2df04571bb0e1e6c668\": container with ID starting with f4a46cc2965446b5f5bb1a0f866ba522498df24080bae2df04571bb0e1e6c668 not found: ID does not exist"
Mar 20 08:55:28.647205 master-0 kubenswrapper[27820]: I0320 08:55:28.647164 27820 scope.go:117] "RemoveContainer" containerID="29fb44762af9cdaf889203cfbaefadf72c5802140450e668a3478bdf1bfb1bd2"
Mar 20 08:55:28.647603 master-0 kubenswrapper[27820]: I0320 08:55:28.647554 27820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29fb44762af9cdaf889203cfbaefadf72c5802140450e668a3478bdf1bfb1bd2"} err="failed to get container status \"29fb44762af9cdaf889203cfbaefadf72c5802140450e668a3478bdf1bfb1bd2\": rpc error: code = NotFound desc = could not find container \"29fb44762af9cdaf889203cfbaefadf72c5802140450e668a3478bdf1bfb1bd2\": container with ID starting with 29fb44762af9cdaf889203cfbaefadf72c5802140450e668a3478bdf1bfb1bd2 not found: ID does not exist"
Mar 20 08:55:28.647656 master-0 kubenswrapper[27820]: I0320 08:55:28.647639 27820 scope.go:117] "RemoveContainer" containerID="502f51f25cacc4bcbcfc17fee9557c02cfcf0ff1d7d5984915cc84a3bdb667c2"
Mar 20 08:55:28.648157 master-0 kubenswrapper[27820]: I0320 08:55:28.648112 27820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"502f51f25cacc4bcbcfc17fee9557c02cfcf0ff1d7d5984915cc84a3bdb667c2"} err="failed to get container status \"502f51f25cacc4bcbcfc17fee9557c02cfcf0ff1d7d5984915cc84a3bdb667c2\": rpc error: code = NotFound desc = could not find container \"502f51f25cacc4bcbcfc17fee9557c02cfcf0ff1d7d5984915cc84a3bdb667c2\": container with ID starting with 502f51f25cacc4bcbcfc17fee9557c02cfcf0ff1d7d5984915cc84a3bdb667c2 not found: ID does not exist"
Mar 20 08:55:28.648220 master-0 kubenswrapper[27820]: I0320 08:55:28.648192 27820 scope.go:117] "RemoveContainer" containerID="58ba7614fb7173610e309e42b215d5decc16db8e5af63a0839976ce35bf26812"
Mar 20 08:55:28.648717 master-0 kubenswrapper[27820]: I0320 08:55:28.648672 27820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58ba7614fb7173610e309e42b215d5decc16db8e5af63a0839976ce35bf26812"} err="failed to get container status \"58ba7614fb7173610e309e42b215d5decc16db8e5af63a0839976ce35bf26812\": rpc error: code = NotFound desc = could not find container \"58ba7614fb7173610e309e42b215d5decc16db8e5af63a0839976ce35bf26812\": container with ID starting with 58ba7614fb7173610e309e42b215d5decc16db8e5af63a0839976ce35bf26812 not found: ID does not exist"
Mar 20 08:55:28.648784 master-0 kubenswrapper[27820]: I0320 08:55:28.648753 27820 scope.go:117] "RemoveContainer" containerID="83e3a486b966a6b512b103777ba8647b506d7b21e87c5d1b9cd40bec5a3158cc"
Mar 20 08:55:28.649181 master-0 kubenswrapper[27820]: I0320 08:55:28.649140 27820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83e3a486b966a6b512b103777ba8647b506d7b21e87c5d1b9cd40bec5a3158cc"} err="failed to get container status \"83e3a486b966a6b512b103777ba8647b506d7b21e87c5d1b9cd40bec5a3158cc\": rpc error: code = NotFound desc = could not find container \"83e3a486b966a6b512b103777ba8647b506d7b21e87c5d1b9cd40bec5a3158cc\": container with ID starting with 83e3a486b966a6b512b103777ba8647b506d7b21e87c5d1b9cd40bec5a3158cc not found: ID does not exist"
Mar 20 08:55:28.649181 master-0 kubenswrapper[27820]: I0320 08:55:28.649173 27820 scope.go:117] "RemoveContainer" containerID="5330547895ea24633c382d8f8c61029ed24f36aad27fb00241113a7b24d3af6b"
Mar 20 08:55:28.649582 master-0 kubenswrapper[27820]: I0320 08:55:28.649537 27820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5330547895ea24633c382d8f8c61029ed24f36aad27fb00241113a7b24d3af6b"} err="failed to get container status \"5330547895ea24633c382d8f8c61029ed24f36aad27fb00241113a7b24d3af6b\": rpc error: code = NotFound desc = could not find container \"5330547895ea24633c382d8f8c61029ed24f36aad27fb00241113a7b24d3af6b\": container with ID starting with 5330547895ea24633c382d8f8c61029ed24f36aad27fb00241113a7b24d3af6b not found: ID does not exist"
Mar 20 08:55:28.649654 master-0 kubenswrapper[27820]: I0320 08:55:28.649620 27820 scope.go:117] "RemoveContainer" containerID="be105d224117f2641ce3bdaf87aa57eb28f1828c7fe4ba883b627a05a862070b"
Mar 20 08:55:28.651521 master-0 kubenswrapper[27820]: I0320 08:55:28.651461 27820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be105d224117f2641ce3bdaf87aa57eb28f1828c7fe4ba883b627a05a862070b"} err="failed to get container status \"be105d224117f2641ce3bdaf87aa57eb28f1828c7fe4ba883b627a05a862070b\": rpc error: code = NotFound desc = could not find container \"be105d224117f2641ce3bdaf87aa57eb28f1828c7fe4ba883b627a05a862070b\": container with ID starting with be105d224117f2641ce3bdaf87aa57eb28f1828c7fe4ba883b627a05a862070b not found: ID does not exist"
Mar 20 08:55:28.651521 master-0 kubenswrapper[27820]: I0320 08:55:28.651508 27820 scope.go:117] "RemoveContainer" containerID="f4a46cc2965446b5f5bb1a0f866ba522498df24080bae2df04571bb0e1e6c668"
Mar 20 08:55:28.651931 master-0 kubenswrapper[27820]: I0320 08:55:28.651885 27820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4a46cc2965446b5f5bb1a0f866ba522498df24080bae2df04571bb0e1e6c668"} err="failed to get container status \"f4a46cc2965446b5f5bb1a0f866ba522498df24080bae2df04571bb0e1e6c668\": rpc error: code = NotFound desc = could not find container \"f4a46cc2965446b5f5bb1a0f866ba522498df24080bae2df04571bb0e1e6c668\": container with ID starting with f4a46cc2965446b5f5bb1a0f866ba522498df24080bae2df04571bb0e1e6c668 not found: ID does not exist"
Mar 20 08:55:28.651931 master-0 kubenswrapper[27820]: I0320 08:55:28.651919 27820 scope.go:117] "RemoveContainer" containerID="29fb44762af9cdaf889203cfbaefadf72c5802140450e668a3478bdf1bfb1bd2"
Mar 20 08:55:28.652250 master-0 kubenswrapper[27820]: I0320 08:55:28.652200 27820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29fb44762af9cdaf889203cfbaefadf72c5802140450e668a3478bdf1bfb1bd2"} err="failed to get container status \"29fb44762af9cdaf889203cfbaefadf72c5802140450e668a3478bdf1bfb1bd2\": rpc error: code = NotFound desc = could not find container \"29fb44762af9cdaf889203cfbaefadf72c5802140450e668a3478bdf1bfb1bd2\": container with ID starting with 29fb44762af9cdaf889203cfbaefadf72c5802140450e668a3478bdf1bfb1bd2 not found: ID does not exist"
Mar 20 08:55:28.652250 master-0 kubenswrapper[27820]: I0320 08:55:28.652241 27820 scope.go:117] "RemoveContainer" containerID="502f51f25cacc4bcbcfc17fee9557c02cfcf0ff1d7d5984915cc84a3bdb667c2"
Mar 20 08:55:28.652613 master-0 kubenswrapper[27820]: I0320 08:55:28.652578 27820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"502f51f25cacc4bcbcfc17fee9557c02cfcf0ff1d7d5984915cc84a3bdb667c2"} err="failed to get container status \"502f51f25cacc4bcbcfc17fee9557c02cfcf0ff1d7d5984915cc84a3bdb667c2\": rpc error: code = NotFound desc = could not find container \"502f51f25cacc4bcbcfc17fee9557c02cfcf0ff1d7d5984915cc84a3bdb667c2\": container with ID starting with 502f51f25cacc4bcbcfc17fee9557c02cfcf0ff1d7d5984915cc84a3bdb667c2 not found: ID does not exist"
Mar 20 08:55:28.652613 master-0 kubenswrapper[27820]: I0320 08:55:28.652601 27820 scope.go:117] "RemoveContainer" containerID="58ba7614fb7173610e309e42b215d5decc16db8e5af63a0839976ce35bf26812"
Mar 20 08:55:28.654637 master-0 kubenswrapper[27820]: I0320 08:55:28.654594 27820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58ba7614fb7173610e309e42b215d5decc16db8e5af63a0839976ce35bf26812"} err="failed to get container status \"58ba7614fb7173610e309e42b215d5decc16db8e5af63a0839976ce35bf26812\": rpc error: code = NotFound desc = could not find container \"58ba7614fb7173610e309e42b215d5decc16db8e5af63a0839976ce35bf26812\": container with ID starting with 58ba7614fb7173610e309e42b215d5decc16db8e5af63a0839976ce35bf26812 not found: ID does not exist"
Mar 20 08:55:28.654637 master-0 kubenswrapper[27820]: I0320 08:55:28.654628 27820 scope.go:117] "RemoveContainer" containerID="83e3a486b966a6b512b103777ba8647b506d7b21e87c5d1b9cd40bec5a3158cc"
Mar 20 08:55:28.654916 master-0 kubenswrapper[27820]: I0320 08:55:28.654868 27820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83e3a486b966a6b512b103777ba8647b506d7b21e87c5d1b9cd40bec5a3158cc"} err="failed to get container status \"83e3a486b966a6b512b103777ba8647b506d7b21e87c5d1b9cd40bec5a3158cc\": rpc error: code = NotFound desc = could not find container \"83e3a486b966a6b512b103777ba8647b506d7b21e87c5d1b9cd40bec5a3158cc\": container with ID starting with 83e3a486b966a6b512b103777ba8647b506d7b21e87c5d1b9cd40bec5a3158cc not found: ID does not exist"
Mar 20 08:55:28.654916 master-0 kubenswrapper[27820]: I0320 08:55:28.654909 27820 scope.go:117] "RemoveContainer" containerID="5330547895ea24633c382d8f8c61029ed24f36aad27fb00241113a7b24d3af6b"
Mar 20 08:55:28.655208 master-0 kubenswrapper[27820]: I0320 08:55:28.655162 27820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5330547895ea24633c382d8f8c61029ed24f36aad27fb00241113a7b24d3af6b"} err="failed to get container status \"5330547895ea24633c382d8f8c61029ed24f36aad27fb00241113a7b24d3af6b\": rpc error: code = NotFound desc = could not find container \"5330547895ea24633c382d8f8c61029ed24f36aad27fb00241113a7b24d3af6b\": container with ID starting with 5330547895ea24633c382d8f8c61029ed24f36aad27fb00241113a7b24d3af6b not found: ID does not exist"
Mar 20 08:55:28.655208 master-0 kubenswrapper[27820]: I0320 08:55:28.655197 27820 scope.go:117] "RemoveContainer" containerID="be105d224117f2641ce3bdaf87aa57eb28f1828c7fe4ba883b627a05a862070b"
Mar 20 08:55:28.655516 master-0 kubenswrapper[27820]: I0320 08:55:28.655479 27820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be105d224117f2641ce3bdaf87aa57eb28f1828c7fe4ba883b627a05a862070b"} err="failed to get container status \"be105d224117f2641ce3bdaf87aa57eb28f1828c7fe4ba883b627a05a862070b\": rpc error: code = NotFound desc = could not find container \"be105d224117f2641ce3bdaf87aa57eb28f1828c7fe4ba883b627a05a862070b\": container with ID starting with be105d224117f2641ce3bdaf87aa57eb28f1828c7fe4ba883b627a05a862070b not found: ID does not exist"
Mar 20 08:55:28.655516 master-0 kubenswrapper[27820]: I0320 08:55:28.655506 27820 scope.go:117] "RemoveContainer" containerID="f4a46cc2965446b5f5bb1a0f866ba522498df24080bae2df04571bb0e1e6c668"
Mar 20 08:55:28.655770 master-0 kubenswrapper[27820]: I0320 08:55:28.655720 27820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4a46cc2965446b5f5bb1a0f866ba522498df24080bae2df04571bb0e1e6c668"} err="failed to get container status \"f4a46cc2965446b5f5bb1a0f866ba522498df24080bae2df04571bb0e1e6c668\": rpc error: code = NotFound desc = could not find container \"f4a46cc2965446b5f5bb1a0f866ba522498df24080bae2df04571bb0e1e6c668\": container with ID starting with f4a46cc2965446b5f5bb1a0f866ba522498df24080bae2df04571bb0e1e6c668 not found: ID does not exist"
Mar 20 08:55:28.655770 master-0 kubenswrapper[27820]: I0320 08:55:28.655762 27820 scope.go:117] "RemoveContainer" containerID="29fb44762af9cdaf889203cfbaefadf72c5802140450e668a3478bdf1bfb1bd2"
Mar 20 08:55:28.656059 master-0 kubenswrapper[27820]: I0320 08:55:28.656023 27820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29fb44762af9cdaf889203cfbaefadf72c5802140450e668a3478bdf1bfb1bd2"} err="failed to get container status \"29fb44762af9cdaf889203cfbaefadf72c5802140450e668a3478bdf1bfb1bd2\": rpc error: code = NotFound desc = could not find container \"29fb44762af9cdaf889203cfbaefadf72c5802140450e668a3478bdf1bfb1bd2\": container with ID starting with 29fb44762af9cdaf889203cfbaefadf72c5802140450e668a3478bdf1bfb1bd2 not found: ID does not exist"
Mar 20 08:55:28.678785 master-0 kubenswrapper[27820]: I0320 08:55:28.678692 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vs55p\" (UniqueName: \"kubernetes.io/projected/dc81c5cb-439d-4c3d-8a34-70e9035a846c-kube-api-access-vs55p\") pod \"alertmanager-main-0\" (UID: \"dc81c5cb-439d-4c3d-8a34-70e9035a846c\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 20 08:55:28.678785 master-0 kubenswrapper[27820]: I0320 08:55:28.678778 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/dc81c5cb-439d-4c3d-8a34-70e9035a846c-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"dc81c5cb-439d-4c3d-8a34-70e9035a846c\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 20 08:55:28.679124 master-0 kubenswrapper[27820]: I0320 08:55:28.678872 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/dc81c5cb-439d-4c3d-8a34-70e9035a846c-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"dc81c5cb-439d-4c3d-8a34-70e9035a846c\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 20 08:55:28.679124 master-0 kubenswrapper[27820]: I0320 08:55:28.678917 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/dc81c5cb-439d-4c3d-8a34-70e9035a846c-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"dc81c5cb-439d-4c3d-8a34-70e9035a846c\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 20 08:55:28.679124 master-0 kubenswrapper[27820]: I0320 08:55:28.678971 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/dc81c5cb-439d-4c3d-8a34-70e9035a846c-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"dc81c5cb-439d-4c3d-8a34-70e9035a846c\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 20 08:55:28.679124 master-0 kubenswrapper[27820]: I0320 08:55:28.679006 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/dc81c5cb-439d-4c3d-8a34-70e9035a846c-config-volume\") pod \"alertmanager-main-0\" (UID: \"dc81c5cb-439d-4c3d-8a34-70e9035a846c\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 20 08:55:28.679124 master-0 kubenswrapper[27820]: I0320 08:55:28.679041 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/dc81c5cb-439d-4c3d-8a34-70e9035a846c-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"dc81c5cb-439d-4c3d-8a34-70e9035a846c\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 20 08:55:28.679124 master-0 kubenswrapper[27820]: I0320 08:55:28.679078 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/dc81c5cb-439d-4c3d-8a34-70e9035a846c-tls-assets\") pod \"alertmanager-main-0\" (UID: \"dc81c5cb-439d-4c3d-8a34-70e9035a846c\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 20 08:55:28.679327 master-0 kubenswrapper[27820]: I0320 08:55:28.679136 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/dc81c5cb-439d-4c3d-8a34-70e9035a846c-web-config\") pod \"alertmanager-main-0\" (UID: \"dc81c5cb-439d-4c3d-8a34-70e9035a846c\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 20 08:55:28.679988 master-0 kubenswrapper[27820]: I0320 08:55:28.679952 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/dc81c5cb-439d-4c3d-8a34-70e9035a846c-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"dc81c5cb-439d-4c3d-8a34-70e9035a846c\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 20 08:55:28.680367 master-0 kubenswrapper[27820]: I0320 08:55:28.680307 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/dc81c5cb-439d-4c3d-8a34-70e9035a846c-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"dc81c5cb-439d-4c3d-8a34-70e9035a846c\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 20 08:55:28.680471 master-0 kubenswrapper[27820]: I0320 08:55:28.680412 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/dc81c5cb-439d-4c3d-8a34-70e9035a846c-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"dc81c5cb-439d-4c3d-8a34-70e9035a846c\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 20 08:55:28.680564 master-0 kubenswrapper[27820]: I0320 08:55:28.680498 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/dc81c5cb-439d-4c3d-8a34-70e9035a846c-config-out\") pod \"alertmanager-main-0\" (UID: \"dc81c5cb-439d-4c3d-8a34-70e9035a846c\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 20 08:55:28.680643 master-0 kubenswrapper[27820]: I0320 08:55:28.680584 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dc81c5cb-439d-4c3d-8a34-70e9035a846c-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"dc81c5cb-439d-4c3d-8a34-70e9035a846c\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 20 08:55:28.681619 master-0 kubenswrapper[27820]: I0320 08:55:28.681592 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dc81c5cb-439d-4c3d-8a34-70e9035a846c-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"dc81c5cb-439d-4c3d-8a34-70e9035a846c\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 20 08:55:28.682549 master-0 kubenswrapper[27820]: I0320 08:55:28.682526 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/dc81c5cb-439d-4c3d-8a34-70e9035a846c-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"dc81c5cb-439d-4c3d-8a34-70e9035a846c\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 20 08:55:28.682824 master-0 kubenswrapper[27820]: I0320 08:55:28.682775 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/dc81c5cb-439d-4c3d-8a34-70e9035a846c-config-volume\") pod \"alertmanager-main-0\" (UID: \"dc81c5cb-439d-4c3d-8a34-70e9035a846c\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 20 08:55:28.682948 master-0 kubenswrapper[27820]: I0320 08:55:28.682919 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/dc81c5cb-439d-4c3d-8a34-70e9035a846c-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"dc81c5cb-439d-4c3d-8a34-70e9035a846c\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 20 08:55:28.683227 master-0 kubenswrapper[27820]: I0320 08:55:28.683190 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/dc81c5cb-439d-4c3d-8a34-70e9035a846c-web-config\") pod \"alertmanager-main-0\" (UID: \"dc81c5cb-439d-4c3d-8a34-70e9035a846c\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 20 08:55:28.683450 master-0 kubenswrapper[27820]: I0320 08:55:28.683415 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/dc81c5cb-439d-4c3d-8a34-70e9035a846c-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"dc81c5cb-439d-4c3d-8a34-70e9035a846c\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 20 08:55:28.683486 master-0 kubenswrapper[27820]: I0320 08:55:28.683456 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/dc81c5cb-439d-4c3d-8a34-70e9035a846c-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"dc81c5cb-439d-4c3d-8a34-70e9035a846c\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 20 08:55:28.685139 master-0 kubenswrapper[27820]: I0320 08:55:28.685099 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/dc81c5cb-439d-4c3d-8a34-70e9035a846c-config-out\") pod \"alertmanager-main-0\" (UID: \"dc81c5cb-439d-4c3d-8a34-70e9035a846c\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 20 08:55:28.685825 master-0 kubenswrapper[27820]: I0320 08:55:28.685757 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/dc81c5cb-439d-4c3d-8a34-70e9035a846c-tls-assets\") pod \"alertmanager-main-0\" (UID: \"dc81c5cb-439d-4c3d-8a34-70e9035a846c\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 20 08:55:28.696912 master-0 kubenswrapper[27820]: I0320 08:55:28.696877 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vs55p\" (UniqueName: \"kubernetes.io/projected/dc81c5cb-439d-4c3d-8a34-70e9035a846c-kube-api-access-vs55p\") pod \"alertmanager-main-0\" (UID: \"dc81c5cb-439d-4c3d-8a34-70e9035a846c\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 20 08:55:28.882536 master-0 kubenswrapper[27820]: I0320 08:55:28.882330 27820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0"
Mar 20 08:55:29.435167 master-0 kubenswrapper[27820]: W0320 08:55:29.435080 27820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddc81c5cb_439d_4c3d_8a34_70e9035a846c.slice/crio-78cd0966305b4d093973f02f7e304148b27b44cf46099554316ea2b3df513799 WatchSource:0}: Error finding container 78cd0966305b4d093973f02f7e304148b27b44cf46099554316ea2b3df513799: Status 404 returned error can't find the container with id 78cd0966305b4d093973f02f7e304148b27b44cf46099554316ea2b3df513799
Mar 20 08:55:29.439497 master-0 kubenswrapper[27820]: I0320 08:55:29.439443 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Mar 20 08:55:30.085148 master-0 kubenswrapper[27820]: I0320 08:55:30.084977 27820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8e6b860-8317-4fbe-9f43-41b0b707bc1b"
path="/var/lib/kubelet/pods/a8e6b860-8317-4fbe-9f43-41b0b707bc1b/volumes" Mar 20 08:55:30.450911 master-0 kubenswrapper[27820]: I0320 08:55:30.450823 27820 generic.go:334] "Generic (PLEG): container finished" podID="dc81c5cb-439d-4c3d-8a34-70e9035a846c" containerID="49830d9cdd29022ae7356dd41fdf7518bb6abb0b4abbac0a0f58cfd703356060" exitCode=0 Mar 20 08:55:30.450911 master-0 kubenswrapper[27820]: I0320 08:55:30.450896 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"dc81c5cb-439d-4c3d-8a34-70e9035a846c","Type":"ContainerDied","Data":"49830d9cdd29022ae7356dd41fdf7518bb6abb0b4abbac0a0f58cfd703356060"} Mar 20 08:55:30.451232 master-0 kubenswrapper[27820]: I0320 08:55:30.450938 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"dc81c5cb-439d-4c3d-8a34-70e9035a846c","Type":"ContainerStarted","Data":"78cd0966305b4d093973f02f7e304148b27b44cf46099554316ea2b3df513799"} Mar 20 08:55:31.428093 master-0 kubenswrapper[27820]: I0320 08:55:31.428002 27820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-6f98bb7c67-q7pqm" podUID="7fae9393-1ca8-4304-92ff-78f8f2d85288" containerName="console" containerID="cri-o://380ce3a458a1b1aadc2317b6b842c3e3fb75748e08cb253b219fd4a94df2e173" gracePeriod=15 Mar 20 08:55:31.473456 master-0 kubenswrapper[27820]: I0320 08:55:31.473355 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"dc81c5cb-439d-4c3d-8a34-70e9035a846c","Type":"ContainerStarted","Data":"380c5d87ebc499a27dfd994b5b296956530831733e9e0d72e6060d7042513b57"} Mar 20 08:55:31.473456 master-0 kubenswrapper[27820]: I0320 08:55:31.473415 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" 
event={"ID":"dc81c5cb-439d-4c3d-8a34-70e9035a846c","Type":"ContainerStarted","Data":"8efe6e2acd4b81c470420414c9cf78dac63c180fc0d3bdc78b5aabdb9a8adbbc"} Mar 20 08:55:31.473456 master-0 kubenswrapper[27820]: I0320 08:55:31.473428 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"dc81c5cb-439d-4c3d-8a34-70e9035a846c","Type":"ContainerStarted","Data":"1a241ffc339a34e9a80faedc16e5ba42edf37a078facf13378b4c74eb2a796e9"} Mar 20 08:55:31.474086 master-0 kubenswrapper[27820]: I0320 08:55:31.473504 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"dc81c5cb-439d-4c3d-8a34-70e9035a846c","Type":"ContainerStarted","Data":"977fc102701b1ec680b036065a9eaad33b86662988444e1266af82ed4abc7fc6"} Mar 20 08:55:31.474086 master-0 kubenswrapper[27820]: I0320 08:55:31.473519 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"dc81c5cb-439d-4c3d-8a34-70e9035a846c","Type":"ContainerStarted","Data":"805e9b603cfd13fcd8f3f4cd6764e949c1b3d76eb3c8f717a4cabb5d9e409c53"} Mar 20 08:55:31.474086 master-0 kubenswrapper[27820]: I0320 08:55:31.473529 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"dc81c5cb-439d-4c3d-8a34-70e9035a846c","Type":"ContainerStarted","Data":"bdd1467cb42aea45c92af350b854dd4b7ec8d148ef8d8a82594572ccbc7fb679"} Mar 20 08:55:31.529153 master-0 kubenswrapper[27820]: I0320 08:55:31.528925 27820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=3.528876947 podStartE2EDuration="3.528876947s" podCreationTimestamp="2026-03-20 08:55:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:55:31.51829347 +0000 UTC m=+341.613502654" 
watchObservedRunningTime="2026-03-20 08:55:31.528876947 +0000 UTC m=+341.624086121" Mar 20 08:55:31.913528 master-0 kubenswrapper[27820]: I0320 08:55:31.913490 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6f98bb7c67-q7pqm_7fae9393-1ca8-4304-92ff-78f8f2d85288/console/0.log" Mar 20 08:55:31.913768 master-0 kubenswrapper[27820]: I0320 08:55:31.913566 27820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-6f98bb7c67-q7pqm" Mar 20 08:55:31.933914 master-0 kubenswrapper[27820]: I0320 08:55:31.933837 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7fae9393-1ca8-4304-92ff-78f8f2d85288-console-serving-cert\") pod \"7fae9393-1ca8-4304-92ff-78f8f2d85288\" (UID: \"7fae9393-1ca8-4304-92ff-78f8f2d85288\") " Mar 20 08:55:31.933914 master-0 kubenswrapper[27820]: I0320 08:55:31.933925 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7fae9393-1ca8-4304-92ff-78f8f2d85288-trusted-ca-bundle\") pod \"7fae9393-1ca8-4304-92ff-78f8f2d85288\" (UID: \"7fae9393-1ca8-4304-92ff-78f8f2d85288\") " Mar 20 08:55:31.934311 master-0 kubenswrapper[27820]: I0320 08:55:31.933961 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7fae9393-1ca8-4304-92ff-78f8f2d85288-console-oauth-config\") pod \"7fae9393-1ca8-4304-92ff-78f8f2d85288\" (UID: \"7fae9393-1ca8-4304-92ff-78f8f2d85288\") " Mar 20 08:55:31.934311 master-0 kubenswrapper[27820]: I0320 08:55:31.934031 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bbq56\" (UniqueName: \"kubernetes.io/projected/7fae9393-1ca8-4304-92ff-78f8f2d85288-kube-api-access-bbq56\") pod \"7fae9393-1ca8-4304-92ff-78f8f2d85288\" (UID: 
\"7fae9393-1ca8-4304-92ff-78f8f2d85288\") " Mar 20 08:55:31.934311 master-0 kubenswrapper[27820]: I0320 08:55:31.934090 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7fae9393-1ca8-4304-92ff-78f8f2d85288-oauth-serving-cert\") pod \"7fae9393-1ca8-4304-92ff-78f8f2d85288\" (UID: \"7fae9393-1ca8-4304-92ff-78f8f2d85288\") " Mar 20 08:55:31.934311 master-0 kubenswrapper[27820]: I0320 08:55:31.934124 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7fae9393-1ca8-4304-92ff-78f8f2d85288-console-config\") pod \"7fae9393-1ca8-4304-92ff-78f8f2d85288\" (UID: \"7fae9393-1ca8-4304-92ff-78f8f2d85288\") " Mar 20 08:55:31.934311 master-0 kubenswrapper[27820]: I0320 08:55:31.934162 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7fae9393-1ca8-4304-92ff-78f8f2d85288-service-ca\") pod \"7fae9393-1ca8-4304-92ff-78f8f2d85288\" (UID: \"7fae9393-1ca8-4304-92ff-78f8f2d85288\") " Mar 20 08:55:31.935122 master-0 kubenswrapper[27820]: I0320 08:55:31.935081 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fae9393-1ca8-4304-92ff-78f8f2d85288-service-ca" (OuterVolumeSpecName: "service-ca") pod "7fae9393-1ca8-4304-92ff-78f8f2d85288" (UID: "7fae9393-1ca8-4304-92ff-78f8f2d85288"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:55:31.939704 master-0 kubenswrapper[27820]: I0320 08:55:31.939625 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fae9393-1ca8-4304-92ff-78f8f2d85288-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "7fae9393-1ca8-4304-92ff-78f8f2d85288" (UID: "7fae9393-1ca8-4304-92ff-78f8f2d85288"). 
InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:55:31.940188 master-0 kubenswrapper[27820]: I0320 08:55:31.940152 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fae9393-1ca8-4304-92ff-78f8f2d85288-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "7fae9393-1ca8-4304-92ff-78f8f2d85288" (UID: "7fae9393-1ca8-4304-92ff-78f8f2d85288"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:55:31.943860 master-0 kubenswrapper[27820]: I0320 08:55:31.943808 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fae9393-1ca8-4304-92ff-78f8f2d85288-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "7fae9393-1ca8-4304-92ff-78f8f2d85288" (UID: "7fae9393-1ca8-4304-92ff-78f8f2d85288"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:55:31.945014 master-0 kubenswrapper[27820]: I0320 08:55:31.944966 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fae9393-1ca8-4304-92ff-78f8f2d85288-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "7fae9393-1ca8-4304-92ff-78f8f2d85288" (UID: "7fae9393-1ca8-4304-92ff-78f8f2d85288"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:55:31.945440 master-0 kubenswrapper[27820]: I0320 08:55:31.945402 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fae9393-1ca8-4304-92ff-78f8f2d85288-console-config" (OuterVolumeSpecName: "console-config") pod "7fae9393-1ca8-4304-92ff-78f8f2d85288" (UID: "7fae9393-1ca8-4304-92ff-78f8f2d85288"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:55:31.951320 master-0 kubenswrapper[27820]: I0320 08:55:31.951246 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fae9393-1ca8-4304-92ff-78f8f2d85288-kube-api-access-bbq56" (OuterVolumeSpecName: "kube-api-access-bbq56") pod "7fae9393-1ca8-4304-92ff-78f8f2d85288" (UID: "7fae9393-1ca8-4304-92ff-78f8f2d85288"). InnerVolumeSpecName "kube-api-access-bbq56". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:55:32.035779 master-0 kubenswrapper[27820]: I0320 08:55:32.035694 27820 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/7fae9393-1ca8-4304-92ff-78f8f2d85288-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 20 08:55:32.035779 master-0 kubenswrapper[27820]: I0320 08:55:32.035745 27820 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7fae9393-1ca8-4304-92ff-78f8f2d85288-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 20 08:55:32.035779 master-0 kubenswrapper[27820]: I0320 08:55:32.035755 27820 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/7fae9393-1ca8-4304-92ff-78f8f2d85288-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Mar 20 08:55:32.035779 master-0 kubenswrapper[27820]: I0320 08:55:32.035764 27820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bbq56\" (UniqueName: \"kubernetes.io/projected/7fae9393-1ca8-4304-92ff-78f8f2d85288-kube-api-access-bbq56\") on node \"master-0\" DevicePath \"\"" Mar 20 08:55:32.035779 master-0 kubenswrapper[27820]: I0320 08:55:32.035774 27820 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/7fae9393-1ca8-4304-92ff-78f8f2d85288-oauth-serving-cert\") on node \"master-0\" 
DevicePath \"\"" Mar 20 08:55:32.035779 master-0 kubenswrapper[27820]: I0320 08:55:32.035787 27820 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/7fae9393-1ca8-4304-92ff-78f8f2d85288-console-config\") on node \"master-0\" DevicePath \"\"" Mar 20 08:55:32.035779 master-0 kubenswrapper[27820]: I0320 08:55:32.035800 27820 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/7fae9393-1ca8-4304-92ff-78f8f2d85288-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 20 08:55:32.239936 master-0 kubenswrapper[27820]: I0320 08:55:32.239882 27820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Mar 20 08:55:32.240219 master-0 kubenswrapper[27820]: I0320 08:55:32.240192 27820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="c9917efe-9886-4199-b78f-cb3ed320bff7" containerName="prometheus" containerID="cri-o://4174f4881783f2e6f439d3784afddbc483a2f56e5c632fe86b539c94673d3c75" gracePeriod=600 Mar 20 08:55:32.240308 master-0 kubenswrapper[27820]: I0320 08:55:32.240223 27820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="c9917efe-9886-4199-b78f-cb3ed320bff7" containerName="kube-rbac-proxy-web" containerID="cri-o://4fa647f70c741545ab9c6fd11ee6ef71d990fb96ac5efa985b078bf6f67ed15c" gracePeriod=600 Mar 20 08:55:32.240377 master-0 kubenswrapper[27820]: I0320 08:55:32.240238 27820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="c9917efe-9886-4199-b78f-cb3ed320bff7" containerName="kube-rbac-proxy" containerID="cri-o://f496e0a692176dee4f6a3d62bbbd64632de403723365bbbf4805097f09605bb9" gracePeriod=600 Mar 20 08:55:32.240377 master-0 kubenswrapper[27820]: I0320 08:55:32.240321 27820 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="c9917efe-9886-4199-b78f-cb3ed320bff7" containerName="config-reloader" containerID="cri-o://ffa0db3ca8b7a30386de739861dbfea7ad49b219fdc1904880807bc56fec6ea7" gracePeriod=600 Mar 20 08:55:32.240467 master-0 kubenswrapper[27820]: I0320 08:55:32.240371 27820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="c9917efe-9886-4199-b78f-cb3ed320bff7" containerName="thanos-sidecar" containerID="cri-o://29d61d36907cd792218d99015015c61a483a9531d4d6a5432ef0ef7344f492ab" gracePeriod=600 Mar 20 08:55:32.240467 master-0 kubenswrapper[27820]: I0320 08:55:32.240457 27820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="c9917efe-9886-4199-b78f-cb3ed320bff7" containerName="kube-rbac-proxy-thanos" containerID="cri-o://d8dd2fefdf7a08ead086036cbc63e7e1658904b7bad0550a0696d3daae3feca7" gracePeriod=600 Mar 20 08:55:32.514333 master-0 kubenswrapper[27820]: I0320 08:55:32.511436 27820 generic.go:334] "Generic (PLEG): container finished" podID="c9917efe-9886-4199-b78f-cb3ed320bff7" containerID="d8dd2fefdf7a08ead086036cbc63e7e1658904b7bad0550a0696d3daae3feca7" exitCode=0 Mar 20 08:55:32.514333 master-0 kubenswrapper[27820]: I0320 08:55:32.511470 27820 generic.go:334] "Generic (PLEG): container finished" podID="c9917efe-9886-4199-b78f-cb3ed320bff7" containerID="f496e0a692176dee4f6a3d62bbbd64632de403723365bbbf4805097f09605bb9" exitCode=0 Mar 20 08:55:32.514333 master-0 kubenswrapper[27820]: I0320 08:55:32.511478 27820 generic.go:334] "Generic (PLEG): container finished" podID="c9917efe-9886-4199-b78f-cb3ed320bff7" containerID="4fa647f70c741545ab9c6fd11ee6ef71d990fb96ac5efa985b078bf6f67ed15c" exitCode=0 Mar 20 08:55:32.514333 master-0 kubenswrapper[27820]: I0320 08:55:32.511484 27820 generic.go:334] "Generic (PLEG): container finished" 
podID="c9917efe-9886-4199-b78f-cb3ed320bff7" containerID="29d61d36907cd792218d99015015c61a483a9531d4d6a5432ef0ef7344f492ab" exitCode=0 Mar 20 08:55:32.514333 master-0 kubenswrapper[27820]: I0320 08:55:32.511491 27820 generic.go:334] "Generic (PLEG): container finished" podID="c9917efe-9886-4199-b78f-cb3ed320bff7" containerID="ffa0db3ca8b7a30386de739861dbfea7ad49b219fdc1904880807bc56fec6ea7" exitCode=0 Mar 20 08:55:32.514333 master-0 kubenswrapper[27820]: I0320 08:55:32.511500 27820 generic.go:334] "Generic (PLEG): container finished" podID="c9917efe-9886-4199-b78f-cb3ed320bff7" containerID="4174f4881783f2e6f439d3784afddbc483a2f56e5c632fe86b539c94673d3c75" exitCode=0 Mar 20 08:55:32.514333 master-0 kubenswrapper[27820]: I0320 08:55:32.511517 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"c9917efe-9886-4199-b78f-cb3ed320bff7","Type":"ContainerDied","Data":"d8dd2fefdf7a08ead086036cbc63e7e1658904b7bad0550a0696d3daae3feca7"} Mar 20 08:55:32.514333 master-0 kubenswrapper[27820]: I0320 08:55:32.511598 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"c9917efe-9886-4199-b78f-cb3ed320bff7","Type":"ContainerDied","Data":"f496e0a692176dee4f6a3d62bbbd64632de403723365bbbf4805097f09605bb9"} Mar 20 08:55:32.514333 master-0 kubenswrapper[27820]: I0320 08:55:32.511620 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"c9917efe-9886-4199-b78f-cb3ed320bff7","Type":"ContainerDied","Data":"4fa647f70c741545ab9c6fd11ee6ef71d990fb96ac5efa985b078bf6f67ed15c"} Mar 20 08:55:32.514333 master-0 kubenswrapper[27820]: I0320 08:55:32.511639 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"c9917efe-9886-4199-b78f-cb3ed320bff7","Type":"ContainerDied","Data":"29d61d36907cd792218d99015015c61a483a9531d4d6a5432ef0ef7344f492ab"} Mar 20 08:55:32.514333 
master-0 kubenswrapper[27820]: I0320 08:55:32.511660 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"c9917efe-9886-4199-b78f-cb3ed320bff7","Type":"ContainerDied","Data":"ffa0db3ca8b7a30386de739861dbfea7ad49b219fdc1904880807bc56fec6ea7"} Mar 20 08:55:32.514333 master-0 kubenswrapper[27820]: I0320 08:55:32.511678 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"c9917efe-9886-4199-b78f-cb3ed320bff7","Type":"ContainerDied","Data":"4174f4881783f2e6f439d3784afddbc483a2f56e5c632fe86b539c94673d3c75"} Mar 20 08:55:32.514333 master-0 kubenswrapper[27820]: I0320 08:55:32.513207 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-6f98bb7c67-q7pqm_7fae9393-1ca8-4304-92ff-78f8f2d85288/console/0.log" Mar 20 08:55:32.514333 master-0 kubenswrapper[27820]: I0320 08:55:32.513260 27820 generic.go:334] "Generic (PLEG): container finished" podID="7fae9393-1ca8-4304-92ff-78f8f2d85288" containerID="380ce3a458a1b1aadc2317b6b842c3e3fb75748e08cb253b219fd4a94df2e173" exitCode=2 Mar 20 08:55:32.515187 master-0 kubenswrapper[27820]: I0320 08:55:32.514616 27820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-6f98bb7c67-q7pqm" Mar 20 08:55:32.515251 master-0 kubenswrapper[27820]: I0320 08:55:32.515220 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6f98bb7c67-q7pqm" event={"ID":"7fae9393-1ca8-4304-92ff-78f8f2d85288","Type":"ContainerDied","Data":"380ce3a458a1b1aadc2317b6b842c3e3fb75748e08cb253b219fd4a94df2e173"} Mar 20 08:55:32.515310 master-0 kubenswrapper[27820]: I0320 08:55:32.515253 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-6f98bb7c67-q7pqm" event={"ID":"7fae9393-1ca8-4304-92ff-78f8f2d85288","Type":"ContainerDied","Data":"b0d0773eac6ad9fca11ecf90a37a5f0d38779f748fb79fd21a61225fd61b7e3d"} Mar 20 08:55:32.515345 master-0 kubenswrapper[27820]: I0320 08:55:32.515306 27820 scope.go:117] "RemoveContainer" containerID="380ce3a458a1b1aadc2317b6b842c3e3fb75748e08cb253b219fd4a94df2e173" Mar 20 08:55:32.553579 master-0 kubenswrapper[27820]: I0320 08:55:32.553519 27820 scope.go:117] "RemoveContainer" containerID="380ce3a458a1b1aadc2317b6b842c3e3fb75748e08cb253b219fd4a94df2e173" Mar 20 08:55:32.555397 master-0 kubenswrapper[27820]: E0320 08:55:32.555338 27820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"380ce3a458a1b1aadc2317b6b842c3e3fb75748e08cb253b219fd4a94df2e173\": container with ID starting with 380ce3a458a1b1aadc2317b6b842c3e3fb75748e08cb253b219fd4a94df2e173 not found: ID does not exist" containerID="380ce3a458a1b1aadc2317b6b842c3e3fb75748e08cb253b219fd4a94df2e173" Mar 20 08:55:32.555464 master-0 kubenswrapper[27820]: I0320 08:55:32.555415 27820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"380ce3a458a1b1aadc2317b6b842c3e3fb75748e08cb253b219fd4a94df2e173"} err="failed to get container status \"380ce3a458a1b1aadc2317b6b842c3e3fb75748e08cb253b219fd4a94df2e173\": rpc error: code = NotFound desc = could not find 
container \"380ce3a458a1b1aadc2317b6b842c3e3fb75748e08cb253b219fd4a94df2e173\": container with ID starting with 380ce3a458a1b1aadc2317b6b842c3e3fb75748e08cb253b219fd4a94df2e173 not found: ID does not exist" Mar 20 08:55:32.564367 master-0 kubenswrapper[27820]: I0320 08:55:32.564320 27820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-6f98bb7c67-q7pqm"] Mar 20 08:55:32.570720 master-0 kubenswrapper[27820]: I0320 08:55:32.570663 27820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-6f98bb7c67-q7pqm"] Mar 20 08:55:32.714207 master-0 kubenswrapper[27820]: I0320 08:55:32.714158 27820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Mar 20 08:55:32.759110 master-0 kubenswrapper[27820]: I0320 08:55:32.758995 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/c9917efe-9886-4199-b78f-cb3ed320bff7-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"c9917efe-9886-4199-b78f-cb3ed320bff7\" (UID: \"c9917efe-9886-4199-b78f-cb3ed320bff7\") " Mar 20 08:55:32.759110 master-0 kubenswrapper[27820]: I0320 08:55:32.759082 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c9917efe-9886-4199-b78f-cb3ed320bff7-config\") pod \"c9917efe-9886-4199-b78f-cb3ed320bff7\" (UID: \"c9917efe-9886-4199-b78f-cb3ed320bff7\") " Mar 20 08:55:32.759110 master-0 kubenswrapper[27820]: I0320 08:55:32.759115 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/c9917efe-9886-4199-b78f-cb3ed320bff7-secret-grpc-tls\") pod \"c9917efe-9886-4199-b78f-cb3ed320bff7\" (UID: \"c9917efe-9886-4199-b78f-cb3ed320bff7\") " Mar 20 08:55:32.759486 master-0 kubenswrapper[27820]: I0320 08:55:32.759136 27820 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82ww5\" (UniqueName: \"kubernetes.io/projected/c9917efe-9886-4199-b78f-cb3ed320bff7-kube-api-access-82ww5\") pod \"c9917efe-9886-4199-b78f-cb3ed320bff7\" (UID: \"c9917efe-9886-4199-b78f-cb3ed320bff7\") " Mar 20 08:55:32.759486 master-0 kubenswrapper[27820]: I0320 08:55:32.759164 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9917efe-9886-4199-b78f-cb3ed320bff7-prometheus-trusted-ca-bundle\") pod \"c9917efe-9886-4199-b78f-cb3ed320bff7\" (UID: \"c9917efe-9886-4199-b78f-cb3ed320bff7\") " Mar 20 08:55:32.759486 master-0 kubenswrapper[27820]: I0320 08:55:32.759201 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/c9917efe-9886-4199-b78f-cb3ed320bff7-secret-prometheus-k8s-tls\") pod \"c9917efe-9886-4199-b78f-cb3ed320bff7\" (UID: \"c9917efe-9886-4199-b78f-cb3ed320bff7\") " Mar 20 08:55:32.759486 master-0 kubenswrapper[27820]: I0320 08:55:32.759256 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c9917efe-9886-4199-b78f-cb3ed320bff7-tls-assets\") pod \"c9917efe-9886-4199-b78f-cb3ed320bff7\" (UID: \"c9917efe-9886-4199-b78f-cb3ed320bff7\") " Mar 20 08:55:32.759486 master-0 kubenswrapper[27820]: I0320 08:55:32.759294 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/c9917efe-9886-4199-b78f-cb3ed320bff7-secret-kube-rbac-proxy\") pod \"c9917efe-9886-4199-b78f-cb3ed320bff7\" (UID: \"c9917efe-9886-4199-b78f-cb3ed320bff7\") " Mar 20 08:55:32.759486 master-0 kubenswrapper[27820]: I0320 08:55:32.759323 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" 
(UniqueName: \"kubernetes.io/empty-dir/c9917efe-9886-4199-b78f-cb3ed320bff7-config-out\") pod \"c9917efe-9886-4199-b78f-cb3ed320bff7\" (UID: \"c9917efe-9886-4199-b78f-cb3ed320bff7\") "
Mar 20 08:55:32.759486 master-0 kubenswrapper[27820]: I0320 08:55:32.759350 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/c9917efe-9886-4199-b78f-cb3ed320bff7-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"c9917efe-9886-4199-b78f-cb3ed320bff7\" (UID: \"c9917efe-9886-4199-b78f-cb3ed320bff7\") "
Mar 20 08:55:32.759486 master-0 kubenswrapper[27820]: I0320 08:55:32.759378 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/c9917efe-9886-4199-b78f-cb3ed320bff7-prometheus-k8s-db\") pod \"c9917efe-9886-4199-b78f-cb3ed320bff7\" (UID: \"c9917efe-9886-4199-b78f-cb3ed320bff7\") "
Mar 20 08:55:32.759486 master-0 kubenswrapper[27820]: I0320 08:55:32.759400 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c9917efe-9886-4199-b78f-cb3ed320bff7-configmap-metrics-client-ca\") pod \"c9917efe-9886-4199-b78f-cb3ed320bff7\" (UID: \"c9917efe-9886-4199-b78f-cb3ed320bff7\") "
Mar 20 08:55:32.759486 master-0 kubenswrapper[27820]: I0320 08:55:32.759420 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c9917efe-9886-4199-b78f-cb3ed320bff7-thanos-prometheus-http-client-file\") pod \"c9917efe-9886-4199-b78f-cb3ed320bff7\" (UID: \"c9917efe-9886-4199-b78f-cb3ed320bff7\") "
Mar 20 08:55:32.759761 master-0 kubenswrapper[27820]: I0320 08:55:32.759490 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c9917efe-9886-4199-b78f-cb3ed320bff7-web-config\") pod \"c9917efe-9886-4199-b78f-cb3ed320bff7\" (UID: \"c9917efe-9886-4199-b78f-cb3ed320bff7\") "
Mar 20 08:55:32.759761 master-0 kubenswrapper[27820]: I0320 08:55:32.759523 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9917efe-9886-4199-b78f-cb3ed320bff7-configmap-serving-certs-ca-bundle\") pod \"c9917efe-9886-4199-b78f-cb3ed320bff7\" (UID: \"c9917efe-9886-4199-b78f-cb3ed320bff7\") "
Mar 20 08:55:32.759761 master-0 kubenswrapper[27820]: I0320 08:55:32.759540 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c9917efe-9886-4199-b78f-cb3ed320bff7-prometheus-k8s-rulefiles-0\") pod \"c9917efe-9886-4199-b78f-cb3ed320bff7\" (UID: \"c9917efe-9886-4199-b78f-cb3ed320bff7\") "
Mar 20 08:55:32.759761 master-0 kubenswrapper[27820]: I0320 08:55:32.759561 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9917efe-9886-4199-b78f-cb3ed320bff7-configmap-kubelet-serving-ca-bundle\") pod \"c9917efe-9886-4199-b78f-cb3ed320bff7\" (UID: \"c9917efe-9886-4199-b78f-cb3ed320bff7\") "
Mar 20 08:55:32.759761 master-0 kubenswrapper[27820]: I0320 08:55:32.759580 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/c9917efe-9886-4199-b78f-cb3ed320bff7-secret-metrics-client-certs\") pod \"c9917efe-9886-4199-b78f-cb3ed320bff7\" (UID: \"c9917efe-9886-4199-b78f-cb3ed320bff7\") "
Mar 20 08:55:32.760210 master-0 kubenswrapper[27820]: I0320 08:55:32.760046 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9917efe-9886-4199-b78f-cb3ed320bff7-prometheus-trusted-ca-bundle" (OuterVolumeSpecName: "prometheus-trusted-ca-bundle") pod "c9917efe-9886-4199-b78f-cb3ed320bff7" (UID: "c9917efe-9886-4199-b78f-cb3ed320bff7"). InnerVolumeSpecName "prometheus-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 08:55:32.762210 master-0 kubenswrapper[27820]: I0320 08:55:32.762166 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9917efe-9886-4199-b78f-cb3ed320bff7-configmap-metrics-client-ca" (OuterVolumeSpecName: "configmap-metrics-client-ca") pod "c9917efe-9886-4199-b78f-cb3ed320bff7" (UID: "c9917efe-9886-4199-b78f-cb3ed320bff7"). InnerVolumeSpecName "configmap-metrics-client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 08:55:32.763371 master-0 kubenswrapper[27820]: I0320 08:55:32.763318 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9917efe-9886-4199-b78f-cb3ed320bff7-configmap-serving-certs-ca-bundle" (OuterVolumeSpecName: "configmap-serving-certs-ca-bundle") pod "c9917efe-9886-4199-b78f-cb3ed320bff7" (UID: "c9917efe-9886-4199-b78f-cb3ed320bff7"). InnerVolumeSpecName "configmap-serving-certs-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 08:55:32.763468 master-0 kubenswrapper[27820]: I0320 08:55:32.763443 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9917efe-9886-4199-b78f-cb3ed320bff7-config" (OuterVolumeSpecName: "config") pod "c9917efe-9886-4199-b78f-cb3ed320bff7" (UID: "c9917efe-9886-4199-b78f-cb3ed320bff7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:55:32.765447 master-0 kubenswrapper[27820]: I0320 08:55:32.763515 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9917efe-9886-4199-b78f-cb3ed320bff7-prometheus-k8s-db" (OuterVolumeSpecName: "prometheus-k8s-db") pod "c9917efe-9886-4199-b78f-cb3ed320bff7" (UID: "c9917efe-9886-4199-b78f-cb3ed320bff7"). InnerVolumeSpecName "prometheus-k8s-db". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 08:55:32.765447 master-0 kubenswrapper[27820]: I0320 08:55:32.763661 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9917efe-9886-4199-b78f-cb3ed320bff7-secret-metrics-client-certs" (OuterVolumeSpecName: "secret-metrics-client-certs") pod "c9917efe-9886-4199-b78f-cb3ed320bff7" (UID: "c9917efe-9886-4199-b78f-cb3ed320bff7"). InnerVolumeSpecName "secret-metrics-client-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:55:32.765447 master-0 kubenswrapper[27820]: I0320 08:55:32.764072 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9917efe-9886-4199-b78f-cb3ed320bff7-configmap-kubelet-serving-ca-bundle" (OuterVolumeSpecName: "configmap-kubelet-serving-ca-bundle") pod "c9917efe-9886-4199-b78f-cb3ed320bff7" (UID: "c9917efe-9886-4199-b78f-cb3ed320bff7"). InnerVolumeSpecName "configmap-kubelet-serving-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 08:55:32.765602 master-0 kubenswrapper[27820]: I0320 08:55:32.765549 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9917efe-9886-4199-b78f-cb3ed320bff7-secret-prometheus-k8s-tls" (OuterVolumeSpecName: "secret-prometheus-k8s-tls") pod "c9917efe-9886-4199-b78f-cb3ed320bff7" (UID: "c9917efe-9886-4199-b78f-cb3ed320bff7"). InnerVolumeSpecName "secret-prometheus-k8s-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:55:32.770512 master-0 kubenswrapper[27820]: I0320 08:55:32.770470 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9917efe-9886-4199-b78f-cb3ed320bff7-kube-api-access-82ww5" (OuterVolumeSpecName: "kube-api-access-82ww5") pod "c9917efe-9886-4199-b78f-cb3ed320bff7" (UID: "c9917efe-9886-4199-b78f-cb3ed320bff7"). InnerVolumeSpecName "kube-api-access-82ww5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:55:32.774980 master-0 kubenswrapper[27820]: I0320 08:55:32.773619 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9917efe-9886-4199-b78f-cb3ed320bff7-secret-grpc-tls" (OuterVolumeSpecName: "secret-grpc-tls") pod "c9917efe-9886-4199-b78f-cb3ed320bff7" (UID: "c9917efe-9886-4199-b78f-cb3ed320bff7"). InnerVolumeSpecName "secret-grpc-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:55:32.779067 master-0 kubenswrapper[27820]: I0320 08:55:32.776447 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9917efe-9886-4199-b78f-cb3ed320bff7-prometheus-k8s-rulefiles-0" (OuterVolumeSpecName: "prometheus-k8s-rulefiles-0") pod "c9917efe-9886-4199-b78f-cb3ed320bff7" (UID: "c9917efe-9886-4199-b78f-cb3ed320bff7"). InnerVolumeSpecName "prometheus-k8s-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 08:55:32.781791 master-0 kubenswrapper[27820]: I0320 08:55:32.779905 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c9917efe-9886-4199-b78f-cb3ed320bff7-config-out" (OuterVolumeSpecName: "config-out") pod "c9917efe-9886-4199-b78f-cb3ed320bff7" (UID: "c9917efe-9886-4199-b78f-cb3ed320bff7"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 08:55:32.781791 master-0 kubenswrapper[27820]: I0320 08:55:32.780064 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9917efe-9886-4199-b78f-cb3ed320bff7-secret-kube-rbac-proxy" (OuterVolumeSpecName: "secret-kube-rbac-proxy") pod "c9917efe-9886-4199-b78f-cb3ed320bff7" (UID: "c9917efe-9886-4199-b78f-cb3ed320bff7"). InnerVolumeSpecName "secret-kube-rbac-proxy". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:55:32.781791 master-0 kubenswrapper[27820]: I0320 08:55:32.780104 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9917efe-9886-4199-b78f-cb3ed320bff7-secret-prometheus-k8s-thanos-sidecar-tls" (OuterVolumeSpecName: "secret-prometheus-k8s-thanos-sidecar-tls") pod "c9917efe-9886-4199-b78f-cb3ed320bff7" (UID: "c9917efe-9886-4199-b78f-cb3ed320bff7"). InnerVolumeSpecName "secret-prometheus-k8s-thanos-sidecar-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:55:32.781791 master-0 kubenswrapper[27820]: I0320 08:55:32.780317 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9917efe-9886-4199-b78f-cb3ed320bff7-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "c9917efe-9886-4199-b78f-cb3ed320bff7" (UID: "c9917efe-9886-4199-b78f-cb3ed320bff7"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:55:32.782870 master-0 kubenswrapper[27820]: I0320 08:55:32.782635 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9917efe-9886-4199-b78f-cb3ed320bff7-secret-prometheus-k8s-kube-rbac-proxy-web" (OuterVolumeSpecName: "secret-prometheus-k8s-kube-rbac-proxy-web") pod "c9917efe-9886-4199-b78f-cb3ed320bff7" (UID: "c9917efe-9886-4199-b78f-cb3ed320bff7"). InnerVolumeSpecName "secret-prometheus-k8s-kube-rbac-proxy-web". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:55:32.782870 master-0 kubenswrapper[27820]: I0320 08:55:32.782663 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9917efe-9886-4199-b78f-cb3ed320bff7-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "c9917efe-9886-4199-b78f-cb3ed320bff7" (UID: "c9917efe-9886-4199-b78f-cb3ed320bff7"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:55:32.816886 master-0 kubenswrapper[27820]: I0320 08:55:32.816822 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9917efe-9886-4199-b78f-cb3ed320bff7-web-config" (OuterVolumeSpecName: "web-config") pod "c9917efe-9886-4199-b78f-cb3ed320bff7" (UID: "c9917efe-9886-4199-b78f-cb3ed320bff7"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:55:32.861390 master-0 kubenswrapper[27820]: I0320 08:55:32.861111 27820 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c9917efe-9886-4199-b78f-cb3ed320bff7-tls-assets\") on node \"master-0\" DevicePath \"\""
Mar 20 08:55:32.861390 master-0 kubenswrapper[27820]: I0320 08:55:32.861151 27820 reconciler_common.go:293] "Volume detached for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/c9917efe-9886-4199-b78f-cb3ed320bff7-secret-kube-rbac-proxy\") on node \"master-0\" DevicePath \"\""
Mar 20 08:55:32.861390 master-0 kubenswrapper[27820]: I0320 08:55:32.861162 27820 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c9917efe-9886-4199-b78f-cb3ed320bff7-config-out\") on node \"master-0\" DevicePath \"\""
Mar 20 08:55:32.861390 master-0 kubenswrapper[27820]: I0320 08:55:32.861174 27820 reconciler_common.go:293] "Volume detached for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/c9917efe-9886-4199-b78f-cb3ed320bff7-secret-prometheus-k8s-kube-rbac-proxy-web\") on node \"master-0\" DevicePath \"\""
Mar 20 08:55:32.861390 master-0 kubenswrapper[27820]: I0320 08:55:32.861183 27820 reconciler_common.go:293] "Volume detached for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/c9917efe-9886-4199-b78f-cb3ed320bff7-prometheus-k8s-db\") on node \"master-0\" DevicePath \"\""
Mar 20 08:55:32.861390 master-0 kubenswrapper[27820]: I0320 08:55:32.861194 27820 reconciler_common.go:293] "Volume detached for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/c9917efe-9886-4199-b78f-cb3ed320bff7-configmap-metrics-client-ca\") on node \"master-0\" DevicePath \"\""
Mar 20 08:55:32.861390 master-0 kubenswrapper[27820]: I0320 08:55:32.861203 27820 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c9917efe-9886-4199-b78f-cb3ed320bff7-thanos-prometheus-http-client-file\") on node \"master-0\" DevicePath \"\""
Mar 20 08:55:32.861390 master-0 kubenswrapper[27820]: I0320 08:55:32.861212 27820 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c9917efe-9886-4199-b78f-cb3ed320bff7-web-config\") on node \"master-0\" DevicePath \"\""
Mar 20 08:55:32.861390 master-0 kubenswrapper[27820]: I0320 08:55:32.861220 27820 reconciler_common.go:293] "Volume detached for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9917efe-9886-4199-b78f-cb3ed320bff7-configmap-serving-certs-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 20 08:55:32.861390 master-0 kubenswrapper[27820]: I0320 08:55:32.861232 27820 reconciler_common.go:293] "Volume detached for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c9917efe-9886-4199-b78f-cb3ed320bff7-prometheus-k8s-rulefiles-0\") on node \"master-0\" DevicePath \"\""
Mar 20 08:55:32.861390 master-0 kubenswrapper[27820]: I0320 08:55:32.861244 27820 reconciler_common.go:293] "Volume detached for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9917efe-9886-4199-b78f-cb3ed320bff7-configmap-kubelet-serving-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 20 08:55:32.861390 master-0 kubenswrapper[27820]: I0320 08:55:32.861256 27820 reconciler_common.go:293] "Volume detached for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/c9917efe-9886-4199-b78f-cb3ed320bff7-secret-metrics-client-certs\") on node \"master-0\" DevicePath \"\""
Mar 20 08:55:32.861390 master-0 kubenswrapper[27820]: I0320 08:55:32.861322 27820 reconciler_common.go:293] "Volume detached for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/c9917efe-9886-4199-b78f-cb3ed320bff7-secret-prometheus-k8s-thanos-sidecar-tls\") on node \"master-0\" DevicePath \"\""
Mar 20 08:55:32.861390 master-0 kubenswrapper[27820]: I0320 08:55:32.861332 27820 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/c9917efe-9886-4199-b78f-cb3ed320bff7-config\") on node \"master-0\" DevicePath \"\""
Mar 20 08:55:32.861390 master-0 kubenswrapper[27820]: I0320 08:55:32.861340 27820 reconciler_common.go:293] "Volume detached for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/c9917efe-9886-4199-b78f-cb3ed320bff7-secret-grpc-tls\") on node \"master-0\" DevicePath \"\""
Mar 20 08:55:32.861390 master-0 kubenswrapper[27820]: I0320 08:55:32.861349 27820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82ww5\" (UniqueName: \"kubernetes.io/projected/c9917efe-9886-4199-b78f-cb3ed320bff7-kube-api-access-82ww5\") on node \"master-0\" DevicePath \"\""
Mar 20 08:55:32.861390 master-0 kubenswrapper[27820]: I0320 08:55:32.861359 27820 reconciler_common.go:293] "Volume detached for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c9917efe-9886-4199-b78f-cb3ed320bff7-prometheus-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 20 08:55:32.861390 master-0 kubenswrapper[27820]: I0320 08:55:32.861367 27820 reconciler_common.go:293] "Volume detached for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/c9917efe-9886-4199-b78f-cb3ed320bff7-secret-prometheus-k8s-tls\") on node \"master-0\" DevicePath \"\""
Mar 20 08:55:33.530908 master-0 kubenswrapper[27820]: I0320 08:55:33.530837 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"c9917efe-9886-4199-b78f-cb3ed320bff7","Type":"ContainerDied","Data":"cd1c889aca1cf909017432c57d82a5881c0033f7ca72ef810433e0f8f9b58008"}
Mar 20 08:55:33.532442 master-0 kubenswrapper[27820]: I0320 08:55:33.530945 27820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0"
Mar 20 08:55:33.532591 master-0 kubenswrapper[27820]: I0320 08:55:33.532428 27820 scope.go:117] "RemoveContainer" containerID="d8dd2fefdf7a08ead086036cbc63e7e1658904b7bad0550a0696d3daae3feca7"
Mar 20 08:55:33.552573 master-0 kubenswrapper[27820]: I0320 08:55:33.552533 27820 scope.go:117] "RemoveContainer" containerID="f496e0a692176dee4f6a3d62bbbd64632de403723365bbbf4805097f09605bb9"
Mar 20 08:55:33.579860 master-0 kubenswrapper[27820]: I0320 08:55:33.579811 27820 scope.go:117] "RemoveContainer" containerID="4fa647f70c741545ab9c6fd11ee6ef71d990fb96ac5efa985b078bf6f67ed15c"
Mar 20 08:55:33.608070 master-0 kubenswrapper[27820]: I0320 08:55:33.607717 27820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"]
Mar 20 08:55:33.609651 master-0 kubenswrapper[27820]: I0320 08:55:33.609472 27820 scope.go:117] "RemoveContainer" containerID="29d61d36907cd792218d99015015c61a483a9531d4d6a5432ef0ef7344f492ab"
Mar 20 08:55:33.612807 master-0 kubenswrapper[27820]: I0320 08:55:33.612762 27820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"]
Mar 20 08:55:33.631583 master-0 kubenswrapper[27820]: I0320 08:55:33.631544 27820 scope.go:117] "RemoveContainer" containerID="ffa0db3ca8b7a30386de739861dbfea7ad49b219fdc1904880807bc56fec6ea7"
Mar 20 08:55:33.636623 master-0 kubenswrapper[27820]: I0320 08:55:33.636562 27820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"]
Mar 20 08:55:33.636968 master-0 kubenswrapper[27820]: E0320 08:55:33.636875 27820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9917efe-9886-4199-b78f-cb3ed320bff7" containerName="config-reloader"
Mar 20 08:55:33.636968 master-0 kubenswrapper[27820]: I0320 08:55:33.636899 27820 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9917efe-9886-4199-b78f-cb3ed320bff7" containerName="config-reloader"
Mar 20 08:55:33.636968 master-0 kubenswrapper[27820]: E0320 08:55:33.636916 27820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9917efe-9886-4199-b78f-cb3ed320bff7" containerName="init-config-reloader"
Mar 20 08:55:33.636968 master-0 kubenswrapper[27820]: I0320 08:55:33.636922 27820 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9917efe-9886-4199-b78f-cb3ed320bff7" containerName="init-config-reloader"
Mar 20 08:55:33.636968 master-0 kubenswrapper[27820]: E0320 08:55:33.636939 27820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9917efe-9886-4199-b78f-cb3ed320bff7" containerName="thanos-sidecar"
Mar 20 08:55:33.636968 master-0 kubenswrapper[27820]: I0320 08:55:33.636948 27820 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9917efe-9886-4199-b78f-cb3ed320bff7" containerName="thanos-sidecar"
Mar 20 08:55:33.636968 master-0 kubenswrapper[27820]: E0320 08:55:33.636968 27820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9917efe-9886-4199-b78f-cb3ed320bff7" containerName="kube-rbac-proxy-thanos"
Mar 20 08:55:33.636968 master-0 kubenswrapper[27820]: I0320 08:55:33.636975 27820 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9917efe-9886-4199-b78f-cb3ed320bff7" containerName="kube-rbac-proxy-thanos"
Mar 20 08:55:33.636968 master-0 kubenswrapper[27820]: E0320 08:55:33.636986 27820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fae9393-1ca8-4304-92ff-78f8f2d85288" containerName="console"
Mar 20 08:55:33.636968 master-0 kubenswrapper[27820]: I0320 08:55:33.636993 27820 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fae9393-1ca8-4304-92ff-78f8f2d85288" containerName="console"
Mar 20 08:55:33.637592 master-0 kubenswrapper[27820]: E0320 08:55:33.637007 27820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9917efe-9886-4199-b78f-cb3ed320bff7" containerName="kube-rbac-proxy-web"
Mar 20 08:55:33.637592 master-0 kubenswrapper[27820]: I0320 08:55:33.637014 27820 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9917efe-9886-4199-b78f-cb3ed320bff7" containerName="kube-rbac-proxy-web"
Mar 20 08:55:33.637592 master-0 kubenswrapper[27820]: E0320 08:55:33.637023 27820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9917efe-9886-4199-b78f-cb3ed320bff7" containerName="prometheus"
Mar 20 08:55:33.637592 master-0 kubenswrapper[27820]: I0320 08:55:33.637029 27820 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9917efe-9886-4199-b78f-cb3ed320bff7" containerName="prometheus"
Mar 20 08:55:33.637592 master-0 kubenswrapper[27820]: E0320 08:55:33.637043 27820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9917efe-9886-4199-b78f-cb3ed320bff7" containerName="kube-rbac-proxy"
Mar 20 08:55:33.637592 master-0 kubenswrapper[27820]: I0320 08:55:33.637049 27820 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9917efe-9886-4199-b78f-cb3ed320bff7" containerName="kube-rbac-proxy"
Mar 20 08:55:33.637592 master-0 kubenswrapper[27820]: I0320 08:55:33.637152 27820 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fae9393-1ca8-4304-92ff-78f8f2d85288" containerName="console"
Mar 20 08:55:33.637592 master-0 kubenswrapper[27820]: I0320 08:55:33.637172 27820 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9917efe-9886-4199-b78f-cb3ed320bff7" containerName="thanos-sidecar"
Mar 20 08:55:33.637592 master-0 kubenswrapper[27820]: I0320 08:55:33.637189 27820 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9917efe-9886-4199-b78f-cb3ed320bff7" containerName="kube-rbac-proxy"
Mar 20 08:55:33.637592 master-0 kubenswrapper[27820]: I0320 08:55:33.637205 27820 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9917efe-9886-4199-b78f-cb3ed320bff7" containerName="kube-rbac-proxy-thanos"
Mar 20 08:55:33.637592 master-0 kubenswrapper[27820]: I0320 08:55:33.637215 27820 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9917efe-9886-4199-b78f-cb3ed320bff7" containerName="config-reloader"
Mar 20 08:55:33.637592 master-0 kubenswrapper[27820]: I0320 08:55:33.637225 27820 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9917efe-9886-4199-b78f-cb3ed320bff7" containerName="kube-rbac-proxy-web"
Mar 20 08:55:33.637592 master-0 kubenswrapper[27820]: I0320 08:55:33.637234 27820 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9917efe-9886-4199-b78f-cb3ed320bff7" containerName="prometheus"
Mar 20 08:55:33.639173 master-0 kubenswrapper[27820]: I0320 08:55:33.639094 27820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0"
Mar 20 08:55:33.642758 master-0 kubenswrapper[27820]: I0320 08:55:33.642433 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy"
Mar 20 08:55:33.642862 master-0 kubenswrapper[27820]: I0320 08:55:33.642817 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s"
Mar 20 08:55:33.692348 master-0 kubenswrapper[27820]: I0320 08:55:33.647893 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-chhohvmqogrio"
Mar 20 08:55:33.692348 master-0 kubenswrapper[27820]: I0320 08:55:33.648071 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls"
Mar 20 08:55:33.692348 master-0 kubenswrapper[27820]: I0320 08:55:33.648412 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web"
Mar 20 08:55:33.692348 master-0 kubenswrapper[27820]: I0320 08:55:33.648654 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config"
Mar 20 08:55:33.692348 master-0 kubenswrapper[27820]: I0320 08:55:33.653436 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle"
Mar 20 08:55:33.692348 master-0 kubenswrapper[27820]: I0320 08:55:33.654184 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0"
Mar 20 08:55:33.692348 master-0 kubenswrapper[27820]: I0320 08:55:33.654283 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file"
Mar 20 08:55:33.692348 master-0 kubenswrapper[27820]: I0320 08:55:33.655272 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"]
Mar 20 08:55:33.692348 master-0 kubenswrapper[27820]: I0320 08:55:33.656449 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls"
Mar 20 08:55:33.692348 master-0 kubenswrapper[27820]: I0320 08:55:33.657423 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0"
Mar 20 08:55:33.692348 master-0 kubenswrapper[27820]: I0320 08:55:33.657449 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle"
Mar 20 08:55:33.692348 master-0 kubenswrapper[27820]: I0320 08:55:33.659095 27820 scope.go:117] "RemoveContainer" containerID="4174f4881783f2e6f439d3784afddbc483a2f56e5c632fe86b539c94673d3c75"
Mar 20 08:55:33.706107 master-0 kubenswrapper[27820]: I0320 08:55:33.706074 27820 scope.go:117] "RemoveContainer" containerID="43e28d2d559547a86904f6babdf3bc2abb1ff2664a471ccbe739a35a3a6ac383"
Mar 20 08:55:33.793735 master-0 kubenswrapper[27820]: I0320 08:55:33.793583 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8c22afd8-ac59-47f2-83da-5efa9eea747a-web-config\") pod \"prometheus-k8s-0\" (UID: \"8c22afd8-ac59-47f2-83da-5efa9eea747a\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 20 08:55:33.793735 master-0 kubenswrapper[27820]: I0320 08:55:33.793652 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/8c22afd8-ac59-47f2-83da-5efa9eea747a-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"8c22afd8-ac59-47f2-83da-5efa9eea747a\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 20 08:55:33.793735 master-0 kubenswrapper[27820]: I0320 08:55:33.793684 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/8c22afd8-ac59-47f2-83da-5efa9eea747a-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"8c22afd8-ac59-47f2-83da-5efa9eea747a\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 20 08:55:33.793735 master-0 kubenswrapper[27820]: I0320 08:55:33.793705 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bj96j\" (UniqueName: \"kubernetes.io/projected/8c22afd8-ac59-47f2-83da-5efa9eea747a-kube-api-access-bj96j\") pod \"prometheus-k8s-0\" (UID: \"8c22afd8-ac59-47f2-83da-5efa9eea747a\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 20 08:55:33.794085 master-0 kubenswrapper[27820]: I0320 08:55:33.793752 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/8c22afd8-ac59-47f2-83da-5efa9eea747a-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"8c22afd8-ac59-47f2-83da-5efa9eea747a\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 20 08:55:33.794085 master-0 kubenswrapper[27820]: I0320 08:55:33.793774 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8c22afd8-ac59-47f2-83da-5efa9eea747a-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"8c22afd8-ac59-47f2-83da-5efa9eea747a\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 20 08:55:33.794085 master-0 kubenswrapper[27820]: I0320 08:55:33.793793 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/8c22afd8-ac59-47f2-83da-5efa9eea747a-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"8c22afd8-ac59-47f2-83da-5efa9eea747a\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 20 08:55:33.794085 master-0 kubenswrapper[27820]: I0320 08:55:33.793838 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c22afd8-ac59-47f2-83da-5efa9eea747a-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"8c22afd8-ac59-47f2-83da-5efa9eea747a\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 20 08:55:33.794085 master-0 kubenswrapper[27820]: I0320 08:55:33.793910 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8c22afd8-ac59-47f2-83da-5efa9eea747a-config-out\") pod \"prometheus-k8s-0\" (UID: \"8c22afd8-ac59-47f2-83da-5efa9eea747a\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 20 08:55:33.794085 master-0 kubenswrapper[27820]: I0320 08:55:33.793929 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/8c22afd8-ac59-47f2-83da-5efa9eea747a-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"8c22afd8-ac59-47f2-83da-5efa9eea747a\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 20 08:55:33.794085 master-0 kubenswrapper[27820]: I0320 08:55:33.794060 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/8c22afd8-ac59-47f2-83da-5efa9eea747a-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"8c22afd8-ac59-47f2-83da-5efa9eea747a\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 20 08:55:33.794085 master-0 kubenswrapper[27820]: I0320 08:55:33.794091 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8c22afd8-ac59-47f2-83da-5efa9eea747a-config\") pod \"prometheus-k8s-0\" (UID: \"8c22afd8-ac59-47f2-83da-5efa9eea747a\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 20 08:55:33.794493 master-0 kubenswrapper[27820]: I0320 08:55:33.794110 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8c22afd8-ac59-47f2-83da-5efa9eea747a-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"8c22afd8-ac59-47f2-83da-5efa9eea747a\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 20 08:55:33.794493 master-0 kubenswrapper[27820]: I0320 08:55:33.794127 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/8c22afd8-ac59-47f2-83da-5efa9eea747a-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"8c22afd8-ac59-47f2-83da-5efa9eea747a\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 20 08:55:33.794493 master-0 kubenswrapper[27820]: I0320 08:55:33.794147 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/8c22afd8-ac59-47f2-83da-5efa9eea747a-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"8c22afd8-ac59-47f2-83da-5efa9eea747a\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 20 08:55:33.794493 master-0 kubenswrapper[27820]: I0320 08:55:33.794165 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c22afd8-ac59-47f2-83da-5efa9eea747a-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"8c22afd8-ac59-47f2-83da-5efa9eea747a\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 20 08:55:33.794493 master-0 kubenswrapper[27820]: I0320 08:55:33.794302 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c22afd8-ac59-47f2-83da-5efa9eea747a-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"8c22afd8-ac59-47f2-83da-5efa9eea747a\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 20 08:55:33.794493 master-0 kubenswrapper[27820]: I0320 08:55:33.794377 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/8c22afd8-ac59-47f2-83da-5efa9eea747a-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"8c22afd8-ac59-47f2-83da-5efa9eea747a\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 20 08:55:33.896312 master-0 kubenswrapper[27820]: I0320 08:55:33.896196 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8c22afd8-ac59-47f2-83da-5efa9eea747a-web-config\") pod \"prometheus-k8s-0\" (UID: \"8c22afd8-ac59-47f2-83da-5efa9eea747a\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 20 08:55:33.896632 master-0 kubenswrapper[27820]: I0320 08:55:33.896343 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/8c22afd8-ac59-47f2-83da-5efa9eea747a-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"8c22afd8-ac59-47f2-83da-5efa9eea747a\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 20 08:55:33.896632 master-0 kubenswrapper[27820]: I0320 08:55:33.896376 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/8c22afd8-ac59-47f2-83da-5efa9eea747a-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"8c22afd8-ac59-47f2-83da-5efa9eea747a\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 20 08:55:33.896765 master-0 kubenswrapper[27820]: I0320 08:55:33.896644 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bj96j\" (UniqueName: \"kubernetes.io/projected/8c22afd8-ac59-47f2-83da-5efa9eea747a-kube-api-access-bj96j\") pod \"prometheus-k8s-0\" (UID: \"8c22afd8-ac59-47f2-83da-5efa9eea747a\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 20 08:55:33.897218 master-0 kubenswrapper[27820]: I0320 08:55:33.897155 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/8c22afd8-ac59-47f2-83da-5efa9eea747a-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"8c22afd8-ac59-47f2-83da-5efa9eea747a\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 20 08:55:33.897218 master-0 kubenswrapper[27820]: I0320 08:55:33.897206 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8c22afd8-ac59-47f2-83da-5efa9eea747a-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"8c22afd8-ac59-47f2-83da-5efa9eea747a\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 20 08:55:33.897429 master-0 kubenswrapper[27820]: I0320 08:55:33.897247 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/8c22afd8-ac59-47f2-83da-5efa9eea747a-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"8c22afd8-ac59-47f2-83da-5efa9eea747a\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 20 08:55:33.897559 master-0 kubenswrapper[27820]: I0320 08:55:33.897510 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c22afd8-ac59-47f2-83da-5efa9eea747a-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"8c22afd8-ac59-47f2-83da-5efa9eea747a\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 20 08:55:33.898228 master-0 kubenswrapper[27820]: I0320 08:55:33.898140 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8c22afd8-ac59-47f2-83da-5efa9eea747a-config-out\") pod \"prometheus-k8s-0\" (UID: \"8c22afd8-ac59-47f2-83da-5efa9eea747a\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 20 08:55:33.898426 master-0 kubenswrapper[27820]: I0320 08:55:33.898375 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/8c22afd8-ac59-47f2-83da-5efa9eea747a-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"8c22afd8-ac59-47f2-83da-5efa9eea747a\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 20 08:55:33.898562 master-0 kubenswrapper[27820]: I0320 08:55:33.898503 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/8c22afd8-ac59-47f2-83da-5efa9eea747a-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"8c22afd8-ac59-47f2-83da-5efa9eea747a\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 20 08:55:33.898884 master-0 kubenswrapper[27820]: I0320 08:55:33.898800 27820
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/8c22afd8-ac59-47f2-83da-5efa9eea747a-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"8c22afd8-ac59-47f2-83da-5efa9eea747a\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 20 08:55:33.899565 master-0 kubenswrapper[27820]: I0320 08:55:33.899046 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c22afd8-ac59-47f2-83da-5efa9eea747a-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"8c22afd8-ac59-47f2-83da-5efa9eea747a\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 20 08:55:33.899565 master-0 kubenswrapper[27820]: I0320 08:55:33.899174 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8c22afd8-ac59-47f2-83da-5efa9eea747a-config\") pod \"prometheus-k8s-0\" (UID: \"8c22afd8-ac59-47f2-83da-5efa9eea747a\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 20 08:55:33.899565 master-0 kubenswrapper[27820]: I0320 08:55:33.899242 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8c22afd8-ac59-47f2-83da-5efa9eea747a-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"8c22afd8-ac59-47f2-83da-5efa9eea747a\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 20 08:55:33.899565 master-0 kubenswrapper[27820]: I0320 08:55:33.899321 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/8c22afd8-ac59-47f2-83da-5efa9eea747a-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"8c22afd8-ac59-47f2-83da-5efa9eea747a\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 20 08:55:33.899565 master-0 kubenswrapper[27820]: I0320 08:55:33.899401 27820 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/8c22afd8-ac59-47f2-83da-5efa9eea747a-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"8c22afd8-ac59-47f2-83da-5efa9eea747a\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 20 08:55:33.899565 master-0 kubenswrapper[27820]: I0320 08:55:33.899450 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c22afd8-ac59-47f2-83da-5efa9eea747a-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"8c22afd8-ac59-47f2-83da-5efa9eea747a\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 20 08:55:33.899565 master-0 kubenswrapper[27820]: I0320 08:55:33.899486 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c22afd8-ac59-47f2-83da-5efa9eea747a-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"8c22afd8-ac59-47f2-83da-5efa9eea747a\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 20 08:55:33.899565 master-0 kubenswrapper[27820]: I0320 08:55:33.899532 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/8c22afd8-ac59-47f2-83da-5efa9eea747a-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"8c22afd8-ac59-47f2-83da-5efa9eea747a\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 20 08:55:33.901230 master-0 kubenswrapper[27820]: I0320 08:55:33.901125 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8c22afd8-ac59-47f2-83da-5efa9eea747a-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"8c22afd8-ac59-47f2-83da-5efa9eea747a\") " 
pod="openshift-monitoring/prometheus-k8s-0" Mar 20 08:55:33.904143 master-0 kubenswrapper[27820]: I0320 08:55:33.902434 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8c22afd8-ac59-47f2-83da-5efa9eea747a-config-out\") pod \"prometheus-k8s-0\" (UID: \"8c22afd8-ac59-47f2-83da-5efa9eea747a\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 20 08:55:33.904143 master-0 kubenswrapper[27820]: I0320 08:55:33.902947 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/8c22afd8-ac59-47f2-83da-5efa9eea747a-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"8c22afd8-ac59-47f2-83da-5efa9eea747a\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 20 08:55:33.904143 master-0 kubenswrapper[27820]: I0320 08:55:33.903935 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8c22afd8-ac59-47f2-83da-5efa9eea747a-web-config\") pod \"prometheus-k8s-0\" (UID: \"8c22afd8-ac59-47f2-83da-5efa9eea747a\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 20 08:55:33.904673 master-0 kubenswrapper[27820]: I0320 08:55:33.904148 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c22afd8-ac59-47f2-83da-5efa9eea747a-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"8c22afd8-ac59-47f2-83da-5efa9eea747a\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 20 08:55:33.906287 master-0 kubenswrapper[27820]: I0320 08:55:33.905718 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/8c22afd8-ac59-47f2-83da-5efa9eea747a-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"8c22afd8-ac59-47f2-83da-5efa9eea747a\") " 
pod="openshift-monitoring/prometheus-k8s-0" Mar 20 08:55:33.906952 master-0 kubenswrapper[27820]: I0320 08:55:33.906308 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c22afd8-ac59-47f2-83da-5efa9eea747a-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"8c22afd8-ac59-47f2-83da-5efa9eea747a\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 20 08:55:33.909426 master-0 kubenswrapper[27820]: I0320 08:55:33.907239 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/8c22afd8-ac59-47f2-83da-5efa9eea747a-config\") pod \"prometheus-k8s-0\" (UID: \"8c22afd8-ac59-47f2-83da-5efa9eea747a\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 20 08:55:33.909426 master-0 kubenswrapper[27820]: I0320 08:55:33.907512 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/8c22afd8-ac59-47f2-83da-5efa9eea747a-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"8c22afd8-ac59-47f2-83da-5efa9eea747a\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 20 08:55:33.909426 master-0 kubenswrapper[27820]: I0320 08:55:33.907533 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/8c22afd8-ac59-47f2-83da-5efa9eea747a-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"8c22afd8-ac59-47f2-83da-5efa9eea747a\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 20 08:55:33.909426 master-0 kubenswrapper[27820]: I0320 08:55:33.908897 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/8c22afd8-ac59-47f2-83da-5efa9eea747a-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: 
\"8c22afd8-ac59-47f2-83da-5efa9eea747a\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 20 08:55:33.909866 master-0 kubenswrapper[27820]: I0320 08:55:33.909634 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8c22afd8-ac59-47f2-83da-5efa9eea747a-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"8c22afd8-ac59-47f2-83da-5efa9eea747a\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 20 08:55:33.914925 master-0 kubenswrapper[27820]: I0320 08:55:33.914862 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/8c22afd8-ac59-47f2-83da-5efa9eea747a-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"8c22afd8-ac59-47f2-83da-5efa9eea747a\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 20 08:55:33.915218 master-0 kubenswrapper[27820]: I0320 08:55:33.915162 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/8c22afd8-ac59-47f2-83da-5efa9eea747a-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"8c22afd8-ac59-47f2-83da-5efa9eea747a\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 20 08:55:33.916105 master-0 kubenswrapper[27820]: I0320 08:55:33.916059 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/8c22afd8-ac59-47f2-83da-5efa9eea747a-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"8c22afd8-ac59-47f2-83da-5efa9eea747a\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 20 08:55:33.929152 master-0 kubenswrapper[27820]: I0320 08:55:33.929081 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bj96j\" (UniqueName: \"kubernetes.io/projected/8c22afd8-ac59-47f2-83da-5efa9eea747a-kube-api-access-bj96j\") pod \"prometheus-k8s-0\" (UID: 
\"8c22afd8-ac59-47f2-83da-5efa9eea747a\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 20 08:55:34.011211 master-0 kubenswrapper[27820]: I0320 08:55:34.011109 27820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Mar 20 08:55:34.090870 master-0 kubenswrapper[27820]: I0320 08:55:34.090718 27820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fae9393-1ca8-4304-92ff-78f8f2d85288" path="/var/lib/kubelet/pods/7fae9393-1ca8-4304-92ff-78f8f2d85288/volumes" Mar 20 08:55:34.092016 master-0 kubenswrapper[27820]: I0320 08:55:34.091967 27820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9917efe-9886-4199-b78f-cb3ed320bff7" path="/var/lib/kubelet/pods/c9917efe-9886-4199-b78f-cb3ed320bff7/volumes" Mar 20 08:55:34.522574 master-0 kubenswrapper[27820]: W0320 08:55:34.522494 27820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c22afd8_ac59_47f2_83da_5efa9eea747a.slice/crio-17939d742dd8e4fbd36beffbb58298e2d6ca0532b37ba0e76d6757c40c7ab729 WatchSource:0}: Error finding container 17939d742dd8e4fbd36beffbb58298e2d6ca0532b37ba0e76d6757c40c7ab729: Status 404 returned error can't find the container with id 17939d742dd8e4fbd36beffbb58298e2d6ca0532b37ba0e76d6757c40c7ab729 Mar 20 08:55:34.527411 master-0 kubenswrapper[27820]: I0320 08:55:34.527370 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Mar 20 08:55:34.550690 master-0 kubenswrapper[27820]: I0320 08:55:34.550622 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"8c22afd8-ac59-47f2-83da-5efa9eea747a","Type":"ContainerStarted","Data":"17939d742dd8e4fbd36beffbb58298e2d6ca0532b37ba0e76d6757c40c7ab729"} Mar 20 08:55:35.561190 master-0 kubenswrapper[27820]: I0320 08:55:35.561123 27820 generic.go:334] "Generic (PLEG): container 
finished" podID="8c22afd8-ac59-47f2-83da-5efa9eea747a" containerID="fc4e5f0669d5e80f3919a6111ab9c1200b67ea8d986f991181cc482066212d61" exitCode=0 Mar 20 08:55:35.561889 master-0 kubenswrapper[27820]: I0320 08:55:35.561186 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"8c22afd8-ac59-47f2-83da-5efa9eea747a","Type":"ContainerDied","Data":"fc4e5f0669d5e80f3919a6111ab9c1200b67ea8d986f991181cc482066212d61"} Mar 20 08:55:36.574170 master-0 kubenswrapper[27820]: I0320 08:55:36.574122 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"8c22afd8-ac59-47f2-83da-5efa9eea747a","Type":"ContainerStarted","Data":"20a0bdb784e9ecad30cb412d4507b70e97a8c088d96ce761ea65ad27279cebac"} Mar 20 08:55:36.574170 master-0 kubenswrapper[27820]: I0320 08:55:36.574168 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"8c22afd8-ac59-47f2-83da-5efa9eea747a","Type":"ContainerStarted","Data":"65e1e066ba4c94c5ff66c87e8b7182d2db47bcbcce02edf5da4c73610225c855"} Mar 20 08:55:36.574725 master-0 kubenswrapper[27820]: I0320 08:55:36.574178 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"8c22afd8-ac59-47f2-83da-5efa9eea747a","Type":"ContainerStarted","Data":"2991cee644f9cce1ff4d270e8e7145ff3b5850be3da06057030accca7c4e036f"} Mar 20 08:55:36.574725 master-0 kubenswrapper[27820]: I0320 08:55:36.574191 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"8c22afd8-ac59-47f2-83da-5efa9eea747a","Type":"ContainerStarted","Data":"1146f9d228097754ee05c20194a0dbb681d8189f69f5bf4243d6346c8a8a9b5e"} Mar 20 08:55:36.574725 master-0 kubenswrapper[27820]: I0320 08:55:36.574199 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" 
event={"ID":"8c22afd8-ac59-47f2-83da-5efa9eea747a","Type":"ContainerStarted","Data":"480406e2377ab0fa720480afe4bdb6ea741de223030a791e1bb33c9a1e329972"} Mar 20 08:55:37.584774 master-0 kubenswrapper[27820]: I0320 08:55:37.584717 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"8c22afd8-ac59-47f2-83da-5efa9eea747a","Type":"ContainerStarted","Data":"2d4552adf51bc5228a7db608daee49fe804da9d06969928c7e377d5f7bdf83c9"} Mar 20 08:55:37.644229 master-0 kubenswrapper[27820]: I0320 08:55:37.644127 27820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=4.644106423 podStartE2EDuration="4.644106423s" podCreationTimestamp="2026-03-20 08:55:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:55:37.638367817 +0000 UTC m=+347.733576971" watchObservedRunningTime="2026-03-20 08:55:37.644106423 +0000 UTC m=+347.739315577" Mar 20 08:55:39.011505 master-0 kubenswrapper[27820]: I0320 08:55:39.011373 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Mar 20 08:56:03.018810 master-0 kubenswrapper[27820]: I0320 08:56:03.018738 27820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-89bcb965d-7zclw"] Mar 20 08:56:03.022101 master-0 kubenswrapper[27820]: I0320 08:56:03.022052 27820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-89bcb965d-7zclw" Mar 20 08:56:03.040327 master-0 kubenswrapper[27820]: I0320 08:56:03.040250 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-89bcb965d-7zclw"] Mar 20 08:56:03.090517 master-0 kubenswrapper[27820]: I0320 08:56:03.090452 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a7f08d87-ea77-45cc-a09d-c5a26f7c9f20-service-ca\") pod \"console-89bcb965d-7zclw\" (UID: \"a7f08d87-ea77-45cc-a09d-c5a26f7c9f20\") " pod="openshift-console/console-89bcb965d-7zclw" Mar 20 08:56:03.090770 master-0 kubenswrapper[27820]: I0320 08:56:03.090535 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a7f08d87-ea77-45cc-a09d-c5a26f7c9f20-trusted-ca-bundle\") pod \"console-89bcb965d-7zclw\" (UID: \"a7f08d87-ea77-45cc-a09d-c5a26f7c9f20\") " pod="openshift-console/console-89bcb965d-7zclw" Mar 20 08:56:03.090770 master-0 kubenswrapper[27820]: I0320 08:56:03.090678 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a7f08d87-ea77-45cc-a09d-c5a26f7c9f20-console-config\") pod \"console-89bcb965d-7zclw\" (UID: \"a7f08d87-ea77-45cc-a09d-c5a26f7c9f20\") " pod="openshift-console/console-89bcb965d-7zclw" Mar 20 08:56:03.090851 master-0 kubenswrapper[27820]: I0320 08:56:03.090816 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a7f08d87-ea77-45cc-a09d-c5a26f7c9f20-console-oauth-config\") pod \"console-89bcb965d-7zclw\" (UID: \"a7f08d87-ea77-45cc-a09d-c5a26f7c9f20\") " pod="openshift-console/console-89bcb965d-7zclw" Mar 20 08:56:03.091037 master-0 kubenswrapper[27820]: 
I0320 08:56:03.090892 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zck22\" (UniqueName: \"kubernetes.io/projected/a7f08d87-ea77-45cc-a09d-c5a26f7c9f20-kube-api-access-zck22\") pod \"console-89bcb965d-7zclw\" (UID: \"a7f08d87-ea77-45cc-a09d-c5a26f7c9f20\") " pod="openshift-console/console-89bcb965d-7zclw" Mar 20 08:56:03.091037 master-0 kubenswrapper[27820]: I0320 08:56:03.091024 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a7f08d87-ea77-45cc-a09d-c5a26f7c9f20-oauth-serving-cert\") pod \"console-89bcb965d-7zclw\" (UID: \"a7f08d87-ea77-45cc-a09d-c5a26f7c9f20\") " pod="openshift-console/console-89bcb965d-7zclw" Mar 20 08:56:03.091121 master-0 kubenswrapper[27820]: I0320 08:56:03.091075 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a7f08d87-ea77-45cc-a09d-c5a26f7c9f20-console-serving-cert\") pod \"console-89bcb965d-7zclw\" (UID: \"a7f08d87-ea77-45cc-a09d-c5a26f7c9f20\") " pod="openshift-console/console-89bcb965d-7zclw" Mar 20 08:56:03.192825 master-0 kubenswrapper[27820]: I0320 08:56:03.192750 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a7f08d87-ea77-45cc-a09d-c5a26f7c9f20-service-ca\") pod \"console-89bcb965d-7zclw\" (UID: \"a7f08d87-ea77-45cc-a09d-c5a26f7c9f20\") " pod="openshift-console/console-89bcb965d-7zclw" Mar 20 08:56:03.193078 master-0 kubenswrapper[27820]: I0320 08:56:03.192850 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a7f08d87-ea77-45cc-a09d-c5a26f7c9f20-trusted-ca-bundle\") pod \"console-89bcb965d-7zclw\" (UID: \"a7f08d87-ea77-45cc-a09d-c5a26f7c9f20\") " 
pod="openshift-console/console-89bcb965d-7zclw" Mar 20 08:56:03.193078 master-0 kubenswrapper[27820]: I0320 08:56:03.192905 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a7f08d87-ea77-45cc-a09d-c5a26f7c9f20-console-config\") pod \"console-89bcb965d-7zclw\" (UID: \"a7f08d87-ea77-45cc-a09d-c5a26f7c9f20\") " pod="openshift-console/console-89bcb965d-7zclw" Mar 20 08:56:03.193078 master-0 kubenswrapper[27820]: I0320 08:56:03.192951 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a7f08d87-ea77-45cc-a09d-c5a26f7c9f20-console-oauth-config\") pod \"console-89bcb965d-7zclw\" (UID: \"a7f08d87-ea77-45cc-a09d-c5a26f7c9f20\") " pod="openshift-console/console-89bcb965d-7zclw" Mar 20 08:56:03.193078 master-0 kubenswrapper[27820]: I0320 08:56:03.192986 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zck22\" (UniqueName: \"kubernetes.io/projected/a7f08d87-ea77-45cc-a09d-c5a26f7c9f20-kube-api-access-zck22\") pod \"console-89bcb965d-7zclw\" (UID: \"a7f08d87-ea77-45cc-a09d-c5a26f7c9f20\") " pod="openshift-console/console-89bcb965d-7zclw" Mar 20 08:56:03.193078 master-0 kubenswrapper[27820]: I0320 08:56:03.193049 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a7f08d87-ea77-45cc-a09d-c5a26f7c9f20-oauth-serving-cert\") pod \"console-89bcb965d-7zclw\" (UID: \"a7f08d87-ea77-45cc-a09d-c5a26f7c9f20\") " pod="openshift-console/console-89bcb965d-7zclw" Mar 20 08:56:03.193078 master-0 kubenswrapper[27820]: I0320 08:56:03.193078 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a7f08d87-ea77-45cc-a09d-c5a26f7c9f20-console-serving-cert\") pod \"console-89bcb965d-7zclw\" 
(UID: \"a7f08d87-ea77-45cc-a09d-c5a26f7c9f20\") " pod="openshift-console/console-89bcb965d-7zclw" Mar 20 08:56:03.194250 master-0 kubenswrapper[27820]: I0320 08:56:03.194207 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a7f08d87-ea77-45cc-a09d-c5a26f7c9f20-service-ca\") pod \"console-89bcb965d-7zclw\" (UID: \"a7f08d87-ea77-45cc-a09d-c5a26f7c9f20\") " pod="openshift-console/console-89bcb965d-7zclw" Mar 20 08:56:03.194337 master-0 kubenswrapper[27820]: I0320 08:56:03.194317 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a7f08d87-ea77-45cc-a09d-c5a26f7c9f20-console-config\") pod \"console-89bcb965d-7zclw\" (UID: \"a7f08d87-ea77-45cc-a09d-c5a26f7c9f20\") " pod="openshift-console/console-89bcb965d-7zclw" Mar 20 08:56:03.195369 master-0 kubenswrapper[27820]: I0320 08:56:03.195339 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a7f08d87-ea77-45cc-a09d-c5a26f7c9f20-oauth-serving-cert\") pod \"console-89bcb965d-7zclw\" (UID: \"a7f08d87-ea77-45cc-a09d-c5a26f7c9f20\") " pod="openshift-console/console-89bcb965d-7zclw" Mar 20 08:56:03.195441 master-0 kubenswrapper[27820]: I0320 08:56:03.195387 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a7f08d87-ea77-45cc-a09d-c5a26f7c9f20-trusted-ca-bundle\") pod \"console-89bcb965d-7zclw\" (UID: \"a7f08d87-ea77-45cc-a09d-c5a26f7c9f20\") " pod="openshift-console/console-89bcb965d-7zclw" Mar 20 08:56:03.196291 master-0 kubenswrapper[27820]: I0320 08:56:03.196235 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a7f08d87-ea77-45cc-a09d-c5a26f7c9f20-console-oauth-config\") pod \"console-89bcb965d-7zclw\" (UID: 
\"a7f08d87-ea77-45cc-a09d-c5a26f7c9f20\") " pod="openshift-console/console-89bcb965d-7zclw" Mar 20 08:56:03.197570 master-0 kubenswrapper[27820]: I0320 08:56:03.197530 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a7f08d87-ea77-45cc-a09d-c5a26f7c9f20-console-serving-cert\") pod \"console-89bcb965d-7zclw\" (UID: \"a7f08d87-ea77-45cc-a09d-c5a26f7c9f20\") " pod="openshift-console/console-89bcb965d-7zclw" Mar 20 08:56:03.211368 master-0 kubenswrapper[27820]: I0320 08:56:03.211314 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zck22\" (UniqueName: \"kubernetes.io/projected/a7f08d87-ea77-45cc-a09d-c5a26f7c9f20-kube-api-access-zck22\") pod \"console-89bcb965d-7zclw\" (UID: \"a7f08d87-ea77-45cc-a09d-c5a26f7c9f20\") " pod="openshift-console/console-89bcb965d-7zclw" Mar 20 08:56:03.338942 master-0 kubenswrapper[27820]: I0320 08:56:03.338745 27820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-89bcb965d-7zclw" Mar 20 08:56:03.788106 master-0 kubenswrapper[27820]: I0320 08:56:03.788024 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-89bcb965d-7zclw"] Mar 20 08:56:03.794809 master-0 kubenswrapper[27820]: I0320 08:56:03.794760 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-89bcb965d-7zclw" event={"ID":"a7f08d87-ea77-45cc-a09d-c5a26f7c9f20","Type":"ContainerStarted","Data":"c2e65e3c29f8e4e51ead0af442c2d921aadabdecfbc2a40546f8588a6884831f"} Mar 20 08:56:03.910526 master-0 kubenswrapper[27820]: I0320 08:56:03.910431 27820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"] Mar 20 08:56:03.912090 master-0 kubenswrapper[27820]: I0320 08:56:03.912041 27820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0" Mar 20 08:56:03.913669 master-0 kubenswrapper[27820]: I0320 08:56:03.913427 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-r4xv4" Mar 20 08:56:03.914002 master-0 kubenswrapper[27820]: I0320 08:56:03.913958 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Mar 20 08:56:03.920107 master-0 kubenswrapper[27820]: I0320 08:56:03.920064 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"] Mar 20 08:56:04.011470 master-0 kubenswrapper[27820]: I0320 08:56:04.011360 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7c15c64-0760-4f92-93f4-294b46732974-kube-api-access\") pod \"installer-4-master-0\" (UID: \"e7c15c64-0760-4f92-93f4-294b46732974\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 20 08:56:04.011470 master-0 kubenswrapper[27820]: I0320 08:56:04.011453 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e7c15c64-0760-4f92-93f4-294b46732974-var-lock\") pod \"installer-4-master-0\" (UID: \"e7c15c64-0760-4f92-93f4-294b46732974\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 20 08:56:04.011739 master-0 kubenswrapper[27820]: I0320 08:56:04.011579 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e7c15c64-0760-4f92-93f4-294b46732974-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"e7c15c64-0760-4f92-93f4-294b46732974\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 20 08:56:04.113499 master-0 
kubenswrapper[27820]: I0320 08:56:04.113392 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7c15c64-0760-4f92-93f4-294b46732974-kube-api-access\") pod \"installer-4-master-0\" (UID: \"e7c15c64-0760-4f92-93f4-294b46732974\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 20 08:56:04.113499 master-0 kubenswrapper[27820]: I0320 08:56:04.113500 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e7c15c64-0760-4f92-93f4-294b46732974-var-lock\") pod \"installer-4-master-0\" (UID: \"e7c15c64-0760-4f92-93f4-294b46732974\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 20 08:56:04.114034 master-0 kubenswrapper[27820]: I0320 08:56:04.113642 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e7c15c64-0760-4f92-93f4-294b46732974-var-lock\") pod \"installer-4-master-0\" (UID: \"e7c15c64-0760-4f92-93f4-294b46732974\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 20 08:56:04.114034 master-0 kubenswrapper[27820]: I0320 08:56:04.113776 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e7c15c64-0760-4f92-93f4-294b46732974-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"e7c15c64-0760-4f92-93f4-294b46732974\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 20 08:56:04.114034 master-0 kubenswrapper[27820]: I0320 08:56:04.113963 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e7c15c64-0760-4f92-93f4-294b46732974-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"e7c15c64-0760-4f92-93f4-294b46732974\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 20 08:56:04.137219 
master-0 kubenswrapper[27820]: I0320 08:56:04.137145 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7c15c64-0760-4f92-93f4-294b46732974-kube-api-access\") pod \"installer-4-master-0\" (UID: \"e7c15c64-0760-4f92-93f4-294b46732974\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 20 08:56:04.250005 master-0 kubenswrapper[27820]: I0320 08:56:04.249885 27820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0" Mar 20 08:56:04.661830 master-0 kubenswrapper[27820]: I0320 08:56:04.661749 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"] Mar 20 08:56:04.662546 master-0 kubenswrapper[27820]: W0320 08:56:04.662463 27820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pode7c15c64_0760_4f92_93f4_294b46732974.slice/crio-418a236b721a2bdab8178fd1526979bf0b35e534fcfd56dffbd6c34b31165aaf WatchSource:0}: Error finding container 418a236b721a2bdab8178fd1526979bf0b35e534fcfd56dffbd6c34b31165aaf: Status 404 returned error can't find the container with id 418a236b721a2bdab8178fd1526979bf0b35e534fcfd56dffbd6c34b31165aaf Mar 20 08:56:04.804314 master-0 kubenswrapper[27820]: I0320 08:56:04.804226 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-89bcb965d-7zclw" event={"ID":"a7f08d87-ea77-45cc-a09d-c5a26f7c9f20","Type":"ContainerStarted","Data":"749c3d811d83c146bebdef1ff108e3cb1617a24030c78e12b09488a78025ff44"} Mar 20 08:56:04.808800 master-0 kubenswrapper[27820]: I0320 08:56:04.808734 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"e7c15c64-0760-4f92-93f4-294b46732974","Type":"ContainerStarted","Data":"418a236b721a2bdab8178fd1526979bf0b35e534fcfd56dffbd6c34b31165aaf"} Mar 20 
08:56:04.838322 master-0 kubenswrapper[27820]: I0320 08:56:04.838211 27820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-89bcb965d-7zclw" podStartSLOduration=2.838162372 podStartE2EDuration="2.838162372s" podCreationTimestamp="2026-03-20 08:56:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:56:04.834296717 +0000 UTC m=+374.929505901" watchObservedRunningTime="2026-03-20 08:56:04.838162372 +0000 UTC m=+374.933371536" Mar 20 08:56:05.822390 master-0 kubenswrapper[27820]: I0320 08:56:05.822312 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"e7c15c64-0760-4f92-93f4-294b46732974","Type":"ContainerStarted","Data":"a76c048bee74bf48d9d110914d4cec07579a82574e0056914300ba7132bf55c6"} Mar 20 08:56:13.339310 master-0 kubenswrapper[27820]: I0320 08:56:13.339253 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-89bcb965d-7zclw" Mar 20 08:56:13.340003 master-0 kubenswrapper[27820]: I0320 08:56:13.339990 27820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-89bcb965d-7zclw" Mar 20 08:56:13.344444 master-0 kubenswrapper[27820]: I0320 08:56:13.344395 27820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-89bcb965d-7zclw" Mar 20 08:56:13.432376 master-0 kubenswrapper[27820]: I0320 08:56:13.432279 27820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-4-master-0" podStartSLOduration=10.43224356 podStartE2EDuration="10.43224356s" podCreationTimestamp="2026-03-20 08:56:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 
08:56:05.853958016 +0000 UTC m=+375.949167240" watchObservedRunningTime="2026-03-20 08:56:13.43224356 +0000 UTC m=+383.527452704" Mar 20 08:56:13.891341 master-0 kubenswrapper[27820]: I0320 08:56:13.890867 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-89bcb965d-7zclw" Mar 20 08:56:14.220673 master-0 kubenswrapper[27820]: I0320 08:56:14.220536 27820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7467fcc69-2tx6g"] Mar 20 08:56:15.244958 master-0 kubenswrapper[27820]: I0320 08:56:15.244908 27820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4bn9z4"] Mar 20 08:56:15.246338 master-0 kubenswrapper[27820]: I0320 08:56:15.246309 27820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4bn9z4" Mar 20 08:56:15.248241 master-0 kubenswrapper[27820]: I0320 08:56:15.248216 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-6t7cj" Mar 20 08:56:15.273857 master-0 kubenswrapper[27820]: I0320 08:56:15.273689 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4bn9z4"] Mar 20 08:56:15.324862 master-0 kubenswrapper[27820]: I0320 08:56:15.324695 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d72g8\" (UniqueName: \"kubernetes.io/projected/a27dbe21-a3c0-4e68-a9fb-1b8007d3ae9a-kube-api-access-d72g8\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4bn9z4\" (UID: \"a27dbe21-a3c0-4e68-a9fb-1b8007d3ae9a\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4bn9z4" Mar 20 08:56:15.325086 master-0 kubenswrapper[27820]: I0320 08:56:15.324931 27820 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a27dbe21-a3c0-4e68-a9fb-1b8007d3ae9a-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4bn9z4\" (UID: \"a27dbe21-a3c0-4e68-a9fb-1b8007d3ae9a\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4bn9z4" Mar 20 08:56:15.325086 master-0 kubenswrapper[27820]: I0320 08:56:15.324985 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a27dbe21-a3c0-4e68-a9fb-1b8007d3ae9a-util\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4bn9z4\" (UID: \"a27dbe21-a3c0-4e68-a9fb-1b8007d3ae9a\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4bn9z4" Mar 20 08:56:15.426512 master-0 kubenswrapper[27820]: I0320 08:56:15.426452 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a27dbe21-a3c0-4e68-a9fb-1b8007d3ae9a-util\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4bn9z4\" (UID: \"a27dbe21-a3c0-4e68-a9fb-1b8007d3ae9a\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4bn9z4" Mar 20 08:56:15.426757 master-0 kubenswrapper[27820]: I0320 08:56:15.426552 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d72g8\" (UniqueName: \"kubernetes.io/projected/a27dbe21-a3c0-4e68-a9fb-1b8007d3ae9a-kube-api-access-d72g8\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4bn9z4\" (UID: \"a27dbe21-a3c0-4e68-a9fb-1b8007d3ae9a\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4bn9z4" Mar 20 08:56:15.426757 master-0 kubenswrapper[27820]: I0320 08:56:15.426617 27820 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a27dbe21-a3c0-4e68-a9fb-1b8007d3ae9a-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4bn9z4\" (UID: \"a27dbe21-a3c0-4e68-a9fb-1b8007d3ae9a\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4bn9z4" Mar 20 08:56:15.427373 master-0 kubenswrapper[27820]: I0320 08:56:15.427352 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a27dbe21-a3c0-4e68-a9fb-1b8007d3ae9a-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4bn9z4\" (UID: \"a27dbe21-a3c0-4e68-a9fb-1b8007d3ae9a\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4bn9z4" Mar 20 08:56:15.427768 master-0 kubenswrapper[27820]: I0320 08:56:15.427698 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a27dbe21-a3c0-4e68-a9fb-1b8007d3ae9a-util\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4bn9z4\" (UID: \"a27dbe21-a3c0-4e68-a9fb-1b8007d3ae9a\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4bn9z4" Mar 20 08:56:15.443807 master-0 kubenswrapper[27820]: I0320 08:56:15.443754 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d72g8\" (UniqueName: \"kubernetes.io/projected/a27dbe21-a3c0-4e68-a9fb-1b8007d3ae9a-kube-api-access-d72g8\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4bn9z4\" (UID: \"a27dbe21-a3c0-4e68-a9fb-1b8007d3ae9a\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4bn9z4" Mar 20 08:56:15.572686 master-0 kubenswrapper[27820]: I0320 08:56:15.572568 27820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4bn9z4" Mar 20 08:56:16.088710 master-0 kubenswrapper[27820]: W0320 08:56:16.088679 27820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda27dbe21_a3c0_4e68_a9fb_1b8007d3ae9a.slice/crio-67fcc01a81e336f0ca16f4f3be8d1064820539dffa0f2d255a3fe25e6ccfa5a1 WatchSource:0}: Error finding container 67fcc01a81e336f0ca16f4f3be8d1064820539dffa0f2d255a3fe25e6ccfa5a1: Status 404 returned error can't find the container with id 67fcc01a81e336f0ca16f4f3be8d1064820539dffa0f2d255a3fe25e6ccfa5a1 Mar 20 08:56:16.097799 master-0 kubenswrapper[27820]: I0320 08:56:16.097552 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4bn9z4"] Mar 20 08:56:16.920297 master-0 kubenswrapper[27820]: I0320 08:56:16.920225 27820 generic.go:334] "Generic (PLEG): container finished" podID="a27dbe21-a3c0-4e68-a9fb-1b8007d3ae9a" containerID="5730b43d3b5c67564e12c0031d438e0256d57a70afcd95fc8741fa1b7f09c73d" exitCode=0 Mar 20 08:56:16.920862 master-0 kubenswrapper[27820]: I0320 08:56:16.920303 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4bn9z4" event={"ID":"a27dbe21-a3c0-4e68-a9fb-1b8007d3ae9a","Type":"ContainerDied","Data":"5730b43d3b5c67564e12c0031d438e0256d57a70afcd95fc8741fa1b7f09c73d"} Mar 20 08:56:16.920862 master-0 kubenswrapper[27820]: I0320 08:56:16.920336 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4bn9z4" event={"ID":"a27dbe21-a3c0-4e68-a9fb-1b8007d3ae9a","Type":"ContainerStarted","Data":"67fcc01a81e336f0ca16f4f3be8d1064820539dffa0f2d255a3fe25e6ccfa5a1"} Mar 20 08:56:16.922253 master-0 kubenswrapper[27820]: I0320 08:56:16.922161 27820 
provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 20 08:56:18.937788 master-0 kubenswrapper[27820]: I0320 08:56:18.937636 27820 generic.go:334] "Generic (PLEG): container finished" podID="a27dbe21-a3c0-4e68-a9fb-1b8007d3ae9a" containerID="4eec29d6b7a5306e86b0e9a3e7e2728a77d965e8535d5f3dc4ccdb0b842aea96" exitCode=0 Mar 20 08:56:18.937788 master-0 kubenswrapper[27820]: I0320 08:56:18.937685 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4bn9z4" event={"ID":"a27dbe21-a3c0-4e68-a9fb-1b8007d3ae9a","Type":"ContainerDied","Data":"4eec29d6b7a5306e86b0e9a3e7e2728a77d965e8535d5f3dc4ccdb0b842aea96"} Mar 20 08:56:19.948475 master-0 kubenswrapper[27820]: I0320 08:56:19.948407 27820 generic.go:334] "Generic (PLEG): container finished" podID="a27dbe21-a3c0-4e68-a9fb-1b8007d3ae9a" containerID="e4fe032ec73fafd3232ef042840a7d91bf084d677dc5d39e999cff7f905090ff" exitCode=0 Mar 20 08:56:19.948987 master-0 kubenswrapper[27820]: I0320 08:56:19.948468 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4bn9z4" event={"ID":"a27dbe21-a3c0-4e68-a9fb-1b8007d3ae9a","Type":"ContainerDied","Data":"e4fe032ec73fafd3232ef042840a7d91bf084d677dc5d39e999cff7f905090ff"} Mar 20 08:56:21.261686 master-0 kubenswrapper[27820]: I0320 08:56:21.261627 27820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4bn9z4" Mar 20 08:56:21.355488 master-0 kubenswrapper[27820]: I0320 08:56:21.355415 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a27dbe21-a3c0-4e68-a9fb-1b8007d3ae9a-util\") pod \"a27dbe21-a3c0-4e68-a9fb-1b8007d3ae9a\" (UID: \"a27dbe21-a3c0-4e68-a9fb-1b8007d3ae9a\") " Mar 20 08:56:21.355488 master-0 kubenswrapper[27820]: I0320 08:56:21.355482 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a27dbe21-a3c0-4e68-a9fb-1b8007d3ae9a-bundle\") pod \"a27dbe21-a3c0-4e68-a9fb-1b8007d3ae9a\" (UID: \"a27dbe21-a3c0-4e68-a9fb-1b8007d3ae9a\") " Mar 20 08:56:21.355771 master-0 kubenswrapper[27820]: I0320 08:56:21.355611 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d72g8\" (UniqueName: \"kubernetes.io/projected/a27dbe21-a3c0-4e68-a9fb-1b8007d3ae9a-kube-api-access-d72g8\") pod \"a27dbe21-a3c0-4e68-a9fb-1b8007d3ae9a\" (UID: \"a27dbe21-a3c0-4e68-a9fb-1b8007d3ae9a\") " Mar 20 08:56:21.358223 master-0 kubenswrapper[27820]: I0320 08:56:21.358165 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a27dbe21-a3c0-4e68-a9fb-1b8007d3ae9a-bundle" (OuterVolumeSpecName: "bundle") pod "a27dbe21-a3c0-4e68-a9fb-1b8007d3ae9a" (UID: "a27dbe21-a3c0-4e68-a9fb-1b8007d3ae9a"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:56:21.360664 master-0 kubenswrapper[27820]: I0320 08:56:21.360625 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a27dbe21-a3c0-4e68-a9fb-1b8007d3ae9a-kube-api-access-d72g8" (OuterVolumeSpecName: "kube-api-access-d72g8") pod "a27dbe21-a3c0-4e68-a9fb-1b8007d3ae9a" (UID: "a27dbe21-a3c0-4e68-a9fb-1b8007d3ae9a"). InnerVolumeSpecName "kube-api-access-d72g8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:56:21.389663 master-0 kubenswrapper[27820]: I0320 08:56:21.389590 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a27dbe21-a3c0-4e68-a9fb-1b8007d3ae9a-util" (OuterVolumeSpecName: "util") pod "a27dbe21-a3c0-4e68-a9fb-1b8007d3ae9a" (UID: "a27dbe21-a3c0-4e68-a9fb-1b8007d3ae9a"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:56:21.457756 master-0 kubenswrapper[27820]: I0320 08:56:21.457705 27820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d72g8\" (UniqueName: \"kubernetes.io/projected/a27dbe21-a3c0-4e68-a9fb-1b8007d3ae9a-kube-api-access-d72g8\") on node \"master-0\" DevicePath \"\"" Mar 20 08:56:21.457756 master-0 kubenswrapper[27820]: I0320 08:56:21.457746 27820 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a27dbe21-a3c0-4e68-a9fb-1b8007d3ae9a-util\") on node \"master-0\" DevicePath \"\"" Mar 20 08:56:21.457756 master-0 kubenswrapper[27820]: I0320 08:56:21.457755 27820 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a27dbe21-a3c0-4e68-a9fb-1b8007d3ae9a-bundle\") on node \"master-0\" DevicePath \"\"" Mar 20 08:56:21.968919 master-0 kubenswrapper[27820]: I0320 08:56:21.968843 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4bn9z4" event={"ID":"a27dbe21-a3c0-4e68-a9fb-1b8007d3ae9a","Type":"ContainerDied","Data":"67fcc01a81e336f0ca16f4f3be8d1064820539dffa0f2d255a3fe25e6ccfa5a1"} Mar 20 08:56:21.969386 master-0 kubenswrapper[27820]: I0320 08:56:21.969350 27820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67fcc01a81e336f0ca16f4f3be8d1064820539dffa0f2d255a3fe25e6ccfa5a1" Mar 20 08:56:21.969596 master-0 kubenswrapper[27820]: I0320 08:56:21.969094 27820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d4bn9z4" Mar 20 08:56:28.395828 master-0 kubenswrapper[27820]: I0320 08:56:28.395760 27820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-storage/lvms-operator-7d8cc545d-7wshw"] Mar 20 08:56:28.396526 master-0 kubenswrapper[27820]: E0320 08:56:28.396129 27820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a27dbe21-a3c0-4e68-a9fb-1b8007d3ae9a" containerName="extract" Mar 20 08:56:28.396526 master-0 kubenswrapper[27820]: I0320 08:56:28.396148 27820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a27dbe21-a3c0-4e68-a9fb-1b8007d3ae9a" containerName="extract" Mar 20 08:56:28.396526 master-0 kubenswrapper[27820]: E0320 08:56:28.396182 27820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a27dbe21-a3c0-4e68-a9fb-1b8007d3ae9a" containerName="util" Mar 20 08:56:28.396526 master-0 kubenswrapper[27820]: I0320 08:56:28.396191 27820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a27dbe21-a3c0-4e68-a9fb-1b8007d3ae9a" containerName="util" Mar 20 08:56:28.396526 master-0 kubenswrapper[27820]: E0320 08:56:28.396209 27820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a27dbe21-a3c0-4e68-a9fb-1b8007d3ae9a" containerName="pull" Mar 20 08:56:28.396526 master-0 kubenswrapper[27820]: I0320 
08:56:28.396219 27820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a27dbe21-a3c0-4e68-a9fb-1b8007d3ae9a" containerName="pull" Mar 20 08:56:28.396526 master-0 kubenswrapper[27820]: I0320 08:56:28.396396 27820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a27dbe21-a3c0-4e68-a9fb-1b8007d3ae9a" containerName="extract" Mar 20 08:56:28.397004 master-0 kubenswrapper[27820]: I0320 08:56:28.396974 27820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-storage/lvms-operator-7d8cc545d-7wshw" Mar 20 08:56:28.398816 master-0 kubenswrapper[27820]: I0320 08:56:28.398769 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-webhook-server-cert" Mar 20 08:56:28.399091 master-0 kubenswrapper[27820]: I0320 08:56:28.399065 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-service-cert" Mar 20 08:56:28.399230 master-0 kubenswrapper[27820]: I0320 08:56:28.399209 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-metrics-cert" Mar 20 08:56:28.399675 master-0 kubenswrapper[27820]: I0320 08:56:28.399651 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-storage"/"kube-root-ca.crt" Mar 20 08:56:28.401526 master-0 kubenswrapper[27820]: I0320 08:56:28.401489 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-storage"/"openshift-service-ca.crt" Mar 20 08:56:28.409887 master-0 kubenswrapper[27820]: I0320 08:56:28.409824 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/lvms-operator-7d8cc545d-7wshw"] Mar 20 08:56:28.483182 master-0 kubenswrapper[27820]: I0320 08:56:28.483097 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28bkw\" (UniqueName: 
\"kubernetes.io/projected/82dee58e-70ab-4181-a0de-fc61333727d9-kube-api-access-28bkw\") pod \"lvms-operator-7d8cc545d-7wshw\" (UID: \"82dee58e-70ab-4181-a0de-fc61333727d9\") " pod="openshift-storage/lvms-operator-7d8cc545d-7wshw" Mar 20 08:56:28.483182 master-0 kubenswrapper[27820]: I0320 08:56:28.483166 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/82dee58e-70ab-4181-a0de-fc61333727d9-apiservice-cert\") pod \"lvms-operator-7d8cc545d-7wshw\" (UID: \"82dee58e-70ab-4181-a0de-fc61333727d9\") " pod="openshift-storage/lvms-operator-7d8cc545d-7wshw" Mar 20 08:56:28.483511 master-0 kubenswrapper[27820]: I0320 08:56:28.483211 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/82dee58e-70ab-4181-a0de-fc61333727d9-metrics-cert\") pod \"lvms-operator-7d8cc545d-7wshw\" (UID: \"82dee58e-70ab-4181-a0de-fc61333727d9\") " pod="openshift-storage/lvms-operator-7d8cc545d-7wshw" Mar 20 08:56:28.483511 master-0 kubenswrapper[27820]: I0320 08:56:28.483297 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/82dee58e-70ab-4181-a0de-fc61333727d9-socket-dir\") pod \"lvms-operator-7d8cc545d-7wshw\" (UID: \"82dee58e-70ab-4181-a0de-fc61333727d9\") " pod="openshift-storage/lvms-operator-7d8cc545d-7wshw" Mar 20 08:56:28.483511 master-0 kubenswrapper[27820]: I0320 08:56:28.483339 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/82dee58e-70ab-4181-a0de-fc61333727d9-webhook-cert\") pod \"lvms-operator-7d8cc545d-7wshw\" (UID: \"82dee58e-70ab-4181-a0de-fc61333727d9\") " pod="openshift-storage/lvms-operator-7d8cc545d-7wshw" Mar 20 08:56:28.584450 master-0 kubenswrapper[27820]: I0320 
08:56:28.584367 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28bkw\" (UniqueName: \"kubernetes.io/projected/82dee58e-70ab-4181-a0de-fc61333727d9-kube-api-access-28bkw\") pod \"lvms-operator-7d8cc545d-7wshw\" (UID: \"82dee58e-70ab-4181-a0de-fc61333727d9\") " pod="openshift-storage/lvms-operator-7d8cc545d-7wshw" Mar 20 08:56:28.584450 master-0 kubenswrapper[27820]: I0320 08:56:28.584430 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/82dee58e-70ab-4181-a0de-fc61333727d9-apiservice-cert\") pod \"lvms-operator-7d8cc545d-7wshw\" (UID: \"82dee58e-70ab-4181-a0de-fc61333727d9\") " pod="openshift-storage/lvms-operator-7d8cc545d-7wshw" Mar 20 08:56:28.584450 master-0 kubenswrapper[27820]: I0320 08:56:28.584462 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/82dee58e-70ab-4181-a0de-fc61333727d9-metrics-cert\") pod \"lvms-operator-7d8cc545d-7wshw\" (UID: \"82dee58e-70ab-4181-a0de-fc61333727d9\") " pod="openshift-storage/lvms-operator-7d8cc545d-7wshw" Mar 20 08:56:28.584986 master-0 kubenswrapper[27820]: I0320 08:56:28.584954 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/82dee58e-70ab-4181-a0de-fc61333727d9-socket-dir\") pod \"lvms-operator-7d8cc545d-7wshw\" (UID: \"82dee58e-70ab-4181-a0de-fc61333727d9\") " pod="openshift-storage/lvms-operator-7d8cc545d-7wshw" Mar 20 08:56:28.585123 master-0 kubenswrapper[27820]: I0320 08:56:28.585058 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/82dee58e-70ab-4181-a0de-fc61333727d9-webhook-cert\") pod \"lvms-operator-7d8cc545d-7wshw\" (UID: \"82dee58e-70ab-4181-a0de-fc61333727d9\") " pod="openshift-storage/lvms-operator-7d8cc545d-7wshw" Mar 
20 08:56:28.585524 master-0 kubenswrapper[27820]: I0320 08:56:28.585487 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/82dee58e-70ab-4181-a0de-fc61333727d9-socket-dir\") pod \"lvms-operator-7d8cc545d-7wshw\" (UID: \"82dee58e-70ab-4181-a0de-fc61333727d9\") " pod="openshift-storage/lvms-operator-7d8cc545d-7wshw" Mar 20 08:56:28.588066 master-0 kubenswrapper[27820]: I0320 08:56:28.588008 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/82dee58e-70ab-4181-a0de-fc61333727d9-apiservice-cert\") pod \"lvms-operator-7d8cc545d-7wshw\" (UID: \"82dee58e-70ab-4181-a0de-fc61333727d9\") " pod="openshift-storage/lvms-operator-7d8cc545d-7wshw" Mar 20 08:56:28.588225 master-0 kubenswrapper[27820]: I0320 08:56:28.588183 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/82dee58e-70ab-4181-a0de-fc61333727d9-metrics-cert\") pod \"lvms-operator-7d8cc545d-7wshw\" (UID: \"82dee58e-70ab-4181-a0de-fc61333727d9\") " pod="openshift-storage/lvms-operator-7d8cc545d-7wshw" Mar 20 08:56:28.593704 master-0 kubenswrapper[27820]: I0320 08:56:28.592435 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/82dee58e-70ab-4181-a0de-fc61333727d9-webhook-cert\") pod \"lvms-operator-7d8cc545d-7wshw\" (UID: \"82dee58e-70ab-4181-a0de-fc61333727d9\") " pod="openshift-storage/lvms-operator-7d8cc545d-7wshw" Mar 20 08:56:28.610599 master-0 kubenswrapper[27820]: I0320 08:56:28.610545 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28bkw\" (UniqueName: \"kubernetes.io/projected/82dee58e-70ab-4181-a0de-fc61333727d9-kube-api-access-28bkw\") pod \"lvms-operator-7d8cc545d-7wshw\" (UID: \"82dee58e-70ab-4181-a0de-fc61333727d9\") " 
pod="openshift-storage/lvms-operator-7d8cc545d-7wshw" Mar 20 08:56:28.714561 master-0 kubenswrapper[27820]: I0320 08:56:28.714357 27820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-storage/lvms-operator-7d8cc545d-7wshw" Mar 20 08:56:29.143635 master-0 kubenswrapper[27820]: I0320 08:56:29.143577 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/lvms-operator-7d8cc545d-7wshw"] Mar 20 08:56:30.028428 master-0 kubenswrapper[27820]: I0320 08:56:30.028362 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/lvms-operator-7d8cc545d-7wshw" event={"ID":"82dee58e-70ab-4181-a0de-fc61333727d9","Type":"ContainerStarted","Data":"82f15871e140f017b04443cbb7f01cf9288bbcd964e0ac866f8eca126481d172"} Mar 20 08:56:34.012069 master-0 kubenswrapper[27820]: I0320 08:56:34.011991 27820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Mar 20 08:56:34.048873 master-0 kubenswrapper[27820]: I0320 08:56:34.048812 27820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Mar 20 08:56:34.129206 master-0 kubenswrapper[27820]: I0320 08:56:34.129139 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Mar 20 08:56:35.105644 master-0 kubenswrapper[27820]: I0320 08:56:35.105606 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/lvms-operator-7d8cc545d-7wshw" event={"ID":"82dee58e-70ab-4181-a0de-fc61333727d9","Type":"ContainerStarted","Data":"ffaf6f7c0fde1f7fb7ad92a1982605990bef9422eed5e0e2f7a52d844fa9d30c"} Mar 20 08:56:35.106191 master-0 kubenswrapper[27820]: I0320 08:56:35.106171 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-storage/lvms-operator-7d8cc545d-7wshw" Mar 20 08:56:35.108803 master-0 kubenswrapper[27820]: I0320 08:56:35.108770 27820 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-storage/lvms-operator-7d8cc545d-7wshw" Mar 20 08:56:35.576466 master-0 kubenswrapper[27820]: I0320 08:56:35.576318 27820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-storage/lvms-operator-7d8cc545d-7wshw" podStartSLOduration=2.448161838 podStartE2EDuration="7.576291725s" podCreationTimestamp="2026-03-20 08:56:28 +0000 UTC" firstStartedPulling="2026-03-20 08:56:29.149503581 +0000 UTC m=+399.244712725" lastFinishedPulling="2026-03-20 08:56:34.277633468 +0000 UTC m=+404.372842612" observedRunningTime="2026-03-20 08:56:35.568345279 +0000 UTC m=+405.663554513" watchObservedRunningTime="2026-03-20 08:56:35.576291725 +0000 UTC m=+405.671500869" Mar 20 08:56:37.911576 master-0 kubenswrapper[27820]: I0320 08:56:37.911509 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/75cef5aa-93e6-4b8b-9ab1-06809e85883a-kube-api-access\") pod \"installer-3-retry-1-master-0\" (UID: \"75cef5aa-93e6-4b8b-9ab1-06809e85883a\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Mar 20 08:56:37.916309 master-0 kubenswrapper[27820]: I0320 08:56:37.916236 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/75cef5aa-93e6-4b8b-9ab1-06809e85883a-kube-api-access\") pod \"installer-3-retry-1-master-0\" (UID: \"75cef5aa-93e6-4b8b-9ab1-06809e85883a\") " pod="openshift-kube-controller-manager/installer-3-retry-1-master-0" Mar 20 08:56:37.986827 master-0 kubenswrapper[27820]: I0320 08:56:37.986757 27820 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 20 08:56:37.987316 master-0 kubenswrapper[27820]: I0320 08:56:37.987180 27820 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="36f4a012744c6465102d09cc67ac63e6" containerName="cluster-policy-controller" containerID="cri-o://f32e2c8224cd03fad7bde129086e67d6d1c4925ed398cbb871ceb3d8e790c7dc" gracePeriod=30 Mar 20 08:56:37.987476 master-0 kubenswrapper[27820]: I0320 08:56:37.987301 27820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="36f4a012744c6465102d09cc67ac63e6" containerName="kube-controller-manager" containerID="cri-o://2c7410ac571b5d8f687ca8237762ffb0d8f08a0cac5f528ef47648af5031378d" gracePeriod=30 Mar 20 08:56:37.987476 master-0 kubenswrapper[27820]: I0320 08:56:37.987427 27820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="36f4a012744c6465102d09cc67ac63e6" containerName="kube-controller-manager-recovery-controller" containerID="cri-o://84a924d537370693d461e98a1a2e4f9eb016680a1dfc186ea4daea13912e27ad" gracePeriod=30 Mar 20 08:56:37.987665 master-0 kubenswrapper[27820]: I0320 08:56:37.987500 27820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="36f4a012744c6465102d09cc67ac63e6" containerName="kube-controller-manager-cert-syncer" containerID="cri-o://14296de365c2f4bb39d5e8cbebd68adb12075d499115c76e96d178fea32c5923" gracePeriod=30 Mar 20 08:56:37.989166 master-0 kubenswrapper[27820]: I0320 08:56:37.988124 27820 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 20 08:56:37.989166 master-0 kubenswrapper[27820]: E0320 08:56:37.988714 27820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36f4a012744c6465102d09cc67ac63e6" containerName="kube-controller-manager" Mar 20 08:56:37.989166 master-0 kubenswrapper[27820]: 
I0320 08:56:37.988749 27820 state_mem.go:107] "Deleted CPUSet assignment" podUID="36f4a012744c6465102d09cc67ac63e6" containerName="kube-controller-manager" Mar 20 08:56:37.989166 master-0 kubenswrapper[27820]: E0320 08:56:37.988813 27820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36f4a012744c6465102d09cc67ac63e6" containerName="kube-controller-manager" Mar 20 08:56:37.989166 master-0 kubenswrapper[27820]: I0320 08:56:37.988833 27820 state_mem.go:107] "Deleted CPUSet assignment" podUID="36f4a012744c6465102d09cc67ac63e6" containerName="kube-controller-manager" Mar 20 08:56:37.989166 master-0 kubenswrapper[27820]: E0320 08:56:37.988860 27820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36f4a012744c6465102d09cc67ac63e6" containerName="kube-controller-manager-recovery-controller" Mar 20 08:56:37.989166 master-0 kubenswrapper[27820]: I0320 08:56:37.988879 27820 state_mem.go:107] "Deleted CPUSet assignment" podUID="36f4a012744c6465102d09cc67ac63e6" containerName="kube-controller-manager-recovery-controller" Mar 20 08:56:37.989166 master-0 kubenswrapper[27820]: E0320 08:56:37.988901 27820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36f4a012744c6465102d09cc67ac63e6" containerName="kube-controller-manager-cert-syncer" Mar 20 08:56:37.989166 master-0 kubenswrapper[27820]: I0320 08:56:37.988921 27820 state_mem.go:107] "Deleted CPUSet assignment" podUID="36f4a012744c6465102d09cc67ac63e6" containerName="kube-controller-manager-cert-syncer" Mar 20 08:56:37.989166 master-0 kubenswrapper[27820]: E0320 08:56:37.988969 27820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36f4a012744c6465102d09cc67ac63e6" containerName="cluster-policy-controller" Mar 20 08:56:37.989166 master-0 kubenswrapper[27820]: I0320 08:56:37.988988 27820 state_mem.go:107] "Deleted CPUSet assignment" podUID="36f4a012744c6465102d09cc67ac63e6" containerName="cluster-policy-controller" Mar 20 08:56:37.990187 master-0 
kubenswrapper[27820]: I0320 08:56:37.989337 27820 memory_manager.go:354] "RemoveStaleState removing state" podUID="36f4a012744c6465102d09cc67ac63e6" containerName="kube-controller-manager-cert-syncer" Mar 20 08:56:37.990187 master-0 kubenswrapper[27820]: I0320 08:56:37.989378 27820 memory_manager.go:354] "RemoveStaleState removing state" podUID="36f4a012744c6465102d09cc67ac63e6" containerName="kube-controller-manager" Mar 20 08:56:37.990187 master-0 kubenswrapper[27820]: I0320 08:56:37.989411 27820 memory_manager.go:354] "RemoveStaleState removing state" podUID="36f4a012744c6465102d09cc67ac63e6" containerName="kube-controller-manager" Mar 20 08:56:37.990187 master-0 kubenswrapper[27820]: I0320 08:56:37.989453 27820 memory_manager.go:354] "RemoveStaleState removing state" podUID="36f4a012744c6465102d09cc67ac63e6" containerName="cluster-policy-controller" Mar 20 08:56:37.990187 master-0 kubenswrapper[27820]: I0320 08:56:37.989524 27820 memory_manager.go:354] "RemoveStaleState removing state" podUID="36f4a012744c6465102d09cc67ac63e6" containerName="kube-controller-manager-recovery-controller" Mar 20 08:56:37.990187 master-0 kubenswrapper[27820]: E0320 08:56:37.989844 27820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36f4a012744c6465102d09cc67ac63e6" containerName="kube-controller-manager" Mar 20 08:56:37.990187 master-0 kubenswrapper[27820]: I0320 08:56:37.989866 27820 state_mem.go:107] "Deleted CPUSet assignment" podUID="36f4a012744c6465102d09cc67ac63e6" containerName="kube-controller-manager" Mar 20 08:56:37.992389 master-0 kubenswrapper[27820]: I0320 08:56:37.990291 27820 memory_manager.go:354] "RemoveStaleState removing state" podUID="36f4a012744c6465102d09cc67ac63e6" containerName="kube-controller-manager" Mar 20 08:56:38.013166 master-0 kubenswrapper[27820]: I0320 08:56:38.013092 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/75cef5aa-93e6-4b8b-9ab1-06809e85883a-kube-api-access\") pod \"75cef5aa-93e6-4b8b-9ab1-06809e85883a\" (UID: \"75cef5aa-93e6-4b8b-9ab1-06809e85883a\") " Mar 20 08:56:38.013707 master-0 kubenswrapper[27820]: I0320 08:56:38.013652 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/81441e014342eafdc07cc934660f5a5b-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"81441e014342eafdc07cc934660f5a5b\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 20 08:56:38.014100 master-0 kubenswrapper[27820]: I0320 08:56:38.014033 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/81441e014342eafdc07cc934660f5a5b-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"81441e014342eafdc07cc934660f5a5b\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 20 08:56:38.028912 master-0 kubenswrapper[27820]: I0320 08:56:38.028791 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75cef5aa-93e6-4b8b-9ab1-06809e85883a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "75cef5aa-93e6-4b8b-9ab1-06809e85883a" (UID: "75cef5aa-93e6-4b8b-9ab1-06809e85883a"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:56:38.116216 master-0 kubenswrapper[27820]: I0320 08:56:38.116166 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/81441e014342eafdc07cc934660f5a5b-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"81441e014342eafdc07cc934660f5a5b\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 20 08:56:38.116616 master-0 kubenswrapper[27820]: I0320 08:56:38.116354 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/81441e014342eafdc07cc934660f5a5b-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"81441e014342eafdc07cc934660f5a5b\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 20 08:56:38.116616 master-0 kubenswrapper[27820]: I0320 08:56:38.116588 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/81441e014342eafdc07cc934660f5a5b-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"81441e014342eafdc07cc934660f5a5b\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 20 08:56:38.116820 master-0 kubenswrapper[27820]: I0320 08:56:38.116796 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/81441e014342eafdc07cc934660f5a5b-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"81441e014342eafdc07cc934660f5a5b\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 20 08:56:38.117556 master-0 kubenswrapper[27820]: I0320 08:56:38.117527 27820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/75cef5aa-93e6-4b8b-9ab1-06809e85883a-kube-api-access\") on node \"master-0\" DevicePath 
\"\"" Mar 20 08:56:38.129761 master-0 kubenswrapper[27820]: I0320 08:56:38.129723 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_36f4a012744c6465102d09cc67ac63e6/kube-controller-manager/1.log" Mar 20 08:56:38.130719 master-0 kubenswrapper[27820]: I0320 08:56:38.130697 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_36f4a012744c6465102d09cc67ac63e6/kube-controller-manager-cert-syncer/0.log" Mar 20 08:56:38.131208 master-0 kubenswrapper[27820]: I0320 08:56:38.131173 27820 generic.go:334] "Generic (PLEG): container finished" podID="36f4a012744c6465102d09cc67ac63e6" containerID="2c7410ac571b5d8f687ca8237762ffb0d8f08a0cac5f528ef47648af5031378d" exitCode=0 Mar 20 08:56:38.131208 master-0 kubenswrapper[27820]: I0320 08:56:38.131204 27820 generic.go:334] "Generic (PLEG): container finished" podID="36f4a012744c6465102d09cc67ac63e6" containerID="14296de365c2f4bb39d5e8cbebd68adb12075d499115c76e96d178fea32c5923" exitCode=2 Mar 20 08:56:38.131359 master-0 kubenswrapper[27820]: I0320 08:56:38.131236 27820 scope.go:117] "RemoveContainer" containerID="bf07c7b6cfd6872c0d4b0145eea2b021b2c13119eb8e210c245ed1f17613dab1" Mar 20 08:56:38.586977 master-0 kubenswrapper[27820]: I0320 08:56:38.586835 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_36f4a012744c6465102d09cc67ac63e6/kube-controller-manager-cert-syncer/0.log" Mar 20 08:56:38.590312 master-0 kubenswrapper[27820]: I0320 08:56:38.588089 27820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 20 08:56:38.598134 master-0 kubenswrapper[27820]: I0320 08:56:38.598044 27820 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="36f4a012744c6465102d09cc67ac63e6" podUID="81441e014342eafdc07cc934660f5a5b" Mar 20 08:56:38.624724 master-0 kubenswrapper[27820]: I0320 08:56:38.624575 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/36f4a012744c6465102d09cc67ac63e6-resource-dir\") pod \"36f4a012744c6465102d09cc67ac63e6\" (UID: \"36f4a012744c6465102d09cc67ac63e6\") " Mar 20 08:56:38.624724 master-0 kubenswrapper[27820]: I0320 08:56:38.624651 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/36f4a012744c6465102d09cc67ac63e6-cert-dir\") pod \"36f4a012744c6465102d09cc67ac63e6\" (UID: \"36f4a012744c6465102d09cc67ac63e6\") " Mar 20 08:56:38.625008 master-0 kubenswrapper[27820]: I0320 08:56:38.624739 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36f4a012744c6465102d09cc67ac63e6-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "36f4a012744c6465102d09cc67ac63e6" (UID: "36f4a012744c6465102d09cc67ac63e6"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:56:38.625008 master-0 kubenswrapper[27820]: I0320 08:56:38.624893 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/36f4a012744c6465102d09cc67ac63e6-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "36f4a012744c6465102d09cc67ac63e6" (UID: "36f4a012744c6465102d09cc67ac63e6"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:56:38.625291 master-0 kubenswrapper[27820]: I0320 08:56:38.625239 27820 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/36f4a012744c6465102d09cc67ac63e6-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 20 08:56:38.625291 master-0 kubenswrapper[27820]: I0320 08:56:38.625277 27820 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/36f4a012744c6465102d09cc67ac63e6-cert-dir\") on node \"master-0\" DevicePath \"\"" Mar 20 08:56:39.144399 master-0 kubenswrapper[27820]: I0320 08:56:39.144348 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_36f4a012744c6465102d09cc67ac63e6/kube-controller-manager-cert-syncer/0.log" Mar 20 08:56:39.145048 master-0 kubenswrapper[27820]: I0320 08:56:39.144932 27820 generic.go:334] "Generic (PLEG): container finished" podID="36f4a012744c6465102d09cc67ac63e6" containerID="84a924d537370693d461e98a1a2e4f9eb016680a1dfc186ea4daea13912e27ad" exitCode=0 Mar 20 08:56:39.145048 master-0 kubenswrapper[27820]: I0320 08:56:39.144955 27820 generic.go:334] "Generic (PLEG): container finished" podID="36f4a012744c6465102d09cc67ac63e6" containerID="f32e2c8224cd03fad7bde129086e67d6d1c4925ed398cbb871ceb3d8e790c7dc" exitCode=0 Mar 20 08:56:39.145048 master-0 kubenswrapper[27820]: I0320 08:56:39.145006 27820 scope.go:117] "RemoveContainer" containerID="2c7410ac571b5d8f687ca8237762ffb0d8f08a0cac5f528ef47648af5031378d" Mar 20 08:56:39.145243 master-0 kubenswrapper[27820]: I0320 08:56:39.145102 27820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 20 08:56:39.150038 master-0 kubenswrapper[27820]: I0320 08:56:39.150013 27820 generic.go:334] "Generic (PLEG): container finished" podID="e7c15c64-0760-4f92-93f4-294b46732974" containerID="a76c048bee74bf48d9d110914d4cec07579a82574e0056914300ba7132bf55c6" exitCode=0 Mar 20 08:56:39.150131 master-0 kubenswrapper[27820]: I0320 08:56:39.150045 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"e7c15c64-0760-4f92-93f4-294b46732974","Type":"ContainerDied","Data":"a76c048bee74bf48d9d110914d4cec07579a82574e0056914300ba7132bf55c6"} Mar 20 08:56:39.168565 master-0 kubenswrapper[27820]: I0320 08:56:39.168523 27820 scope.go:117] "RemoveContainer" containerID="84a924d537370693d461e98a1a2e4f9eb016680a1dfc186ea4daea13912e27ad" Mar 20 08:56:39.183923 master-0 kubenswrapper[27820]: I0320 08:56:39.183852 27820 scope.go:117] "RemoveContainer" containerID="14296de365c2f4bb39d5e8cbebd68adb12075d499115c76e96d178fea32c5923" Mar 20 08:56:39.274649 master-0 kubenswrapper[27820]: I0320 08:56:39.274513 27820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-7467fcc69-2tx6g" podUID="56f17a00-2a28-4406-84ed-40a2a5eecd15" containerName="console" containerID="cri-o://3405d298d7f2398e075bfa245927865764216be9d163c0ef4a5d6230b5504b28" gracePeriod=15 Mar 20 08:56:39.286431 master-0 kubenswrapper[27820]: I0320 08:56:39.286397 27820 scope.go:117] "RemoveContainer" containerID="f32e2c8224cd03fad7bde129086e67d6d1c4925ed398cbb871ceb3d8e790c7dc" Mar 20 08:56:39.303352 master-0 kubenswrapper[27820]: I0320 08:56:39.303321 27820 scope.go:117] "RemoveContainer" containerID="2c7410ac571b5d8f687ca8237762ffb0d8f08a0cac5f528ef47648af5031378d" Mar 20 08:56:39.303882 master-0 kubenswrapper[27820]: E0320 08:56:39.303861 27820 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"2c7410ac571b5d8f687ca8237762ffb0d8f08a0cac5f528ef47648af5031378d\": container with ID starting with 2c7410ac571b5d8f687ca8237762ffb0d8f08a0cac5f528ef47648af5031378d not found: ID does not exist" containerID="2c7410ac571b5d8f687ca8237762ffb0d8f08a0cac5f528ef47648af5031378d" Mar 20 08:56:39.303954 master-0 kubenswrapper[27820]: I0320 08:56:39.303892 27820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c7410ac571b5d8f687ca8237762ffb0d8f08a0cac5f528ef47648af5031378d"} err="failed to get container status \"2c7410ac571b5d8f687ca8237762ffb0d8f08a0cac5f528ef47648af5031378d\": rpc error: code = NotFound desc = could not find container \"2c7410ac571b5d8f687ca8237762ffb0d8f08a0cac5f528ef47648af5031378d\": container with ID starting with 2c7410ac571b5d8f687ca8237762ffb0d8f08a0cac5f528ef47648af5031378d not found: ID does not exist" Mar 20 08:56:39.303954 master-0 kubenswrapper[27820]: I0320 08:56:39.303915 27820 scope.go:117] "RemoveContainer" containerID="84a924d537370693d461e98a1a2e4f9eb016680a1dfc186ea4daea13912e27ad" Mar 20 08:56:39.304379 master-0 kubenswrapper[27820]: E0320 08:56:39.304360 27820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84a924d537370693d461e98a1a2e4f9eb016680a1dfc186ea4daea13912e27ad\": container with ID starting with 84a924d537370693d461e98a1a2e4f9eb016680a1dfc186ea4daea13912e27ad not found: ID does not exist" containerID="84a924d537370693d461e98a1a2e4f9eb016680a1dfc186ea4daea13912e27ad" Mar 20 08:56:39.304448 master-0 kubenswrapper[27820]: I0320 08:56:39.304380 27820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84a924d537370693d461e98a1a2e4f9eb016680a1dfc186ea4daea13912e27ad"} err="failed to get container status \"84a924d537370693d461e98a1a2e4f9eb016680a1dfc186ea4daea13912e27ad\": rpc error: code = NotFound 
desc = could not find container \"84a924d537370693d461e98a1a2e4f9eb016680a1dfc186ea4daea13912e27ad\": container with ID starting with 84a924d537370693d461e98a1a2e4f9eb016680a1dfc186ea4daea13912e27ad not found: ID does not exist" Mar 20 08:56:39.304448 master-0 kubenswrapper[27820]: I0320 08:56:39.304391 27820 scope.go:117] "RemoveContainer" containerID="14296de365c2f4bb39d5e8cbebd68adb12075d499115c76e96d178fea32c5923" Mar 20 08:56:39.304696 master-0 kubenswrapper[27820]: E0320 08:56:39.304662 27820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14296de365c2f4bb39d5e8cbebd68adb12075d499115c76e96d178fea32c5923\": container with ID starting with 14296de365c2f4bb39d5e8cbebd68adb12075d499115c76e96d178fea32c5923 not found: ID does not exist" containerID="14296de365c2f4bb39d5e8cbebd68adb12075d499115c76e96d178fea32c5923" Mar 20 08:56:39.304745 master-0 kubenswrapper[27820]: I0320 08:56:39.304691 27820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14296de365c2f4bb39d5e8cbebd68adb12075d499115c76e96d178fea32c5923"} err="failed to get container status \"14296de365c2f4bb39d5e8cbebd68adb12075d499115c76e96d178fea32c5923\": rpc error: code = NotFound desc = could not find container \"14296de365c2f4bb39d5e8cbebd68adb12075d499115c76e96d178fea32c5923\": container with ID starting with 14296de365c2f4bb39d5e8cbebd68adb12075d499115c76e96d178fea32c5923 not found: ID does not exist" Mar 20 08:56:39.304745 master-0 kubenswrapper[27820]: I0320 08:56:39.304714 27820 scope.go:117] "RemoveContainer" containerID="f32e2c8224cd03fad7bde129086e67d6d1c4925ed398cbb871ceb3d8e790c7dc" Mar 20 08:56:39.305100 master-0 kubenswrapper[27820]: E0320 08:56:39.305083 27820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f32e2c8224cd03fad7bde129086e67d6d1c4925ed398cbb871ceb3d8e790c7dc\": container with ID 
starting with f32e2c8224cd03fad7bde129086e67d6d1c4925ed398cbb871ceb3d8e790c7dc not found: ID does not exist" containerID="f32e2c8224cd03fad7bde129086e67d6d1c4925ed398cbb871ceb3d8e790c7dc" Mar 20 08:56:39.305232 master-0 kubenswrapper[27820]: I0320 08:56:39.305213 27820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f32e2c8224cd03fad7bde129086e67d6d1c4925ed398cbb871ceb3d8e790c7dc"} err="failed to get container status \"f32e2c8224cd03fad7bde129086e67d6d1c4925ed398cbb871ceb3d8e790c7dc\": rpc error: code = NotFound desc = could not find container \"f32e2c8224cd03fad7bde129086e67d6d1c4925ed398cbb871ceb3d8e790c7dc\": container with ID starting with f32e2c8224cd03fad7bde129086e67d6d1c4925ed398cbb871ceb3d8e790c7dc not found: ID does not exist" Mar 20 08:56:39.305324 master-0 kubenswrapper[27820]: I0320 08:56:39.305313 27820 scope.go:117] "RemoveContainer" containerID="2c7410ac571b5d8f687ca8237762ffb0d8f08a0cac5f528ef47648af5031378d" Mar 20 08:56:39.305650 master-0 kubenswrapper[27820]: I0320 08:56:39.305633 27820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c7410ac571b5d8f687ca8237762ffb0d8f08a0cac5f528ef47648af5031378d"} err="failed to get container status \"2c7410ac571b5d8f687ca8237762ffb0d8f08a0cac5f528ef47648af5031378d\": rpc error: code = NotFound desc = could not find container \"2c7410ac571b5d8f687ca8237762ffb0d8f08a0cac5f528ef47648af5031378d\": container with ID starting with 2c7410ac571b5d8f687ca8237762ffb0d8f08a0cac5f528ef47648af5031378d not found: ID does not exist" Mar 20 08:56:39.305752 master-0 kubenswrapper[27820]: I0320 08:56:39.305739 27820 scope.go:117] "RemoveContainer" containerID="84a924d537370693d461e98a1a2e4f9eb016680a1dfc186ea4daea13912e27ad" Mar 20 08:56:39.306091 master-0 kubenswrapper[27820]: I0320 08:56:39.306070 27820 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"84a924d537370693d461e98a1a2e4f9eb016680a1dfc186ea4daea13912e27ad"} err="failed to get container status \"84a924d537370693d461e98a1a2e4f9eb016680a1dfc186ea4daea13912e27ad\": rpc error: code = NotFound desc = could not find container \"84a924d537370693d461e98a1a2e4f9eb016680a1dfc186ea4daea13912e27ad\": container with ID starting with 84a924d537370693d461e98a1a2e4f9eb016680a1dfc186ea4daea13912e27ad not found: ID does not exist" Mar 20 08:56:39.306197 master-0 kubenswrapper[27820]: I0320 08:56:39.306184 27820 scope.go:117] "RemoveContainer" containerID="14296de365c2f4bb39d5e8cbebd68adb12075d499115c76e96d178fea32c5923" Mar 20 08:56:39.306605 master-0 kubenswrapper[27820]: I0320 08:56:39.306584 27820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14296de365c2f4bb39d5e8cbebd68adb12075d499115c76e96d178fea32c5923"} err="failed to get container status \"14296de365c2f4bb39d5e8cbebd68adb12075d499115c76e96d178fea32c5923\": rpc error: code = NotFound desc = could not find container \"14296de365c2f4bb39d5e8cbebd68adb12075d499115c76e96d178fea32c5923\": container with ID starting with 14296de365c2f4bb39d5e8cbebd68adb12075d499115c76e96d178fea32c5923 not found: ID does not exist" Mar 20 08:56:39.306725 master-0 kubenswrapper[27820]: I0320 08:56:39.306708 27820 scope.go:117] "RemoveContainer" containerID="f32e2c8224cd03fad7bde129086e67d6d1c4925ed398cbb871ceb3d8e790c7dc" Mar 20 08:56:39.307253 master-0 kubenswrapper[27820]: I0320 08:56:39.307231 27820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f32e2c8224cd03fad7bde129086e67d6d1c4925ed398cbb871ceb3d8e790c7dc"} err="failed to get container status \"f32e2c8224cd03fad7bde129086e67d6d1c4925ed398cbb871ceb3d8e790c7dc\": rpc error: code = NotFound desc = could not find container \"f32e2c8224cd03fad7bde129086e67d6d1c4925ed398cbb871ceb3d8e790c7dc\": container with ID starting with 
f32e2c8224cd03fad7bde129086e67d6d1c4925ed398cbb871ceb3d8e790c7dc not found: ID does not exist" Mar 20 08:56:39.465671 master-0 kubenswrapper[27820]: I0320 08:56:39.465576 27820 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="36f4a012744c6465102d09cc67ac63e6" podUID="81441e014342eafdc07cc934660f5a5b" Mar 20 08:56:39.880809 master-0 kubenswrapper[27820]: I0320 08:56:39.880666 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7467fcc69-2tx6g_56f17a00-2a28-4406-84ed-40a2a5eecd15/console/0.log" Mar 20 08:56:39.880809 master-0 kubenswrapper[27820]: I0320 08:56:39.880737 27820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7467fcc69-2tx6g" Mar 20 08:56:39.985903 master-0 kubenswrapper[27820]: I0320 08:56:39.984945 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/56f17a00-2a28-4406-84ed-40a2a5eecd15-service-ca\") pod \"56f17a00-2a28-4406-84ed-40a2a5eecd15\" (UID: \"56f17a00-2a28-4406-84ed-40a2a5eecd15\") " Mar 20 08:56:39.986230 master-0 kubenswrapper[27820]: I0320 08:56:39.985775 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56f17a00-2a28-4406-84ed-40a2a5eecd15-service-ca" (OuterVolumeSpecName: "service-ca") pod "56f17a00-2a28-4406-84ed-40a2a5eecd15" (UID: "56f17a00-2a28-4406-84ed-40a2a5eecd15"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:56:39.986230 master-0 kubenswrapper[27820]: I0320 08:56:39.986052 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/56f17a00-2a28-4406-84ed-40a2a5eecd15-console-config\") pod \"56f17a00-2a28-4406-84ed-40a2a5eecd15\" (UID: \"56f17a00-2a28-4406-84ed-40a2a5eecd15\") " Mar 20 08:56:39.986230 master-0 kubenswrapper[27820]: I0320 08:56:39.986160 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rm64s\" (UniqueName: \"kubernetes.io/projected/56f17a00-2a28-4406-84ed-40a2a5eecd15-kube-api-access-rm64s\") pod \"56f17a00-2a28-4406-84ed-40a2a5eecd15\" (UID: \"56f17a00-2a28-4406-84ed-40a2a5eecd15\") " Mar 20 08:56:39.986814 master-0 kubenswrapper[27820]: I0320 08:56:39.986746 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56f17a00-2a28-4406-84ed-40a2a5eecd15-console-config" (OuterVolumeSpecName: "console-config") pod "56f17a00-2a28-4406-84ed-40a2a5eecd15" (UID: "56f17a00-2a28-4406-84ed-40a2a5eecd15"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:56:39.987201 master-0 kubenswrapper[27820]: I0320 08:56:39.987133 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/56f17a00-2a28-4406-84ed-40a2a5eecd15-oauth-serving-cert\") pod \"56f17a00-2a28-4406-84ed-40a2a5eecd15\" (UID: \"56f17a00-2a28-4406-84ed-40a2a5eecd15\") " Mar 20 08:56:39.987544 master-0 kubenswrapper[27820]: I0320 08:56:39.987415 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56f17a00-2a28-4406-84ed-40a2a5eecd15-trusted-ca-bundle\") pod \"56f17a00-2a28-4406-84ed-40a2a5eecd15\" (UID: \"56f17a00-2a28-4406-84ed-40a2a5eecd15\") " Mar 20 08:56:39.987777 master-0 kubenswrapper[27820]: I0320 08:56:39.987728 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/56f17a00-2a28-4406-84ed-40a2a5eecd15-console-oauth-config\") pod \"56f17a00-2a28-4406-84ed-40a2a5eecd15\" (UID: \"56f17a00-2a28-4406-84ed-40a2a5eecd15\") " Mar 20 08:56:39.987894 master-0 kubenswrapper[27820]: I0320 08:56:39.987805 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/56f17a00-2a28-4406-84ed-40a2a5eecd15-console-serving-cert\") pod \"56f17a00-2a28-4406-84ed-40a2a5eecd15\" (UID: \"56f17a00-2a28-4406-84ed-40a2a5eecd15\") " Mar 20 08:56:39.988386 master-0 kubenswrapper[27820]: I0320 08:56:39.988240 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56f17a00-2a28-4406-84ed-40a2a5eecd15-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "56f17a00-2a28-4406-84ed-40a2a5eecd15" (UID: "56f17a00-2a28-4406-84ed-40a2a5eecd15"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:56:39.988386 master-0 kubenswrapper[27820]: I0320 08:56:39.988298 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56f17a00-2a28-4406-84ed-40a2a5eecd15-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "56f17a00-2a28-4406-84ed-40a2a5eecd15" (UID: "56f17a00-2a28-4406-84ed-40a2a5eecd15"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:56:39.988750 master-0 kubenswrapper[27820]: I0320 08:56:39.988687 27820 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56f17a00-2a28-4406-84ed-40a2a5eecd15-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 20 08:56:39.988869 master-0 kubenswrapper[27820]: I0320 08:56:39.988755 27820 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/56f17a00-2a28-4406-84ed-40a2a5eecd15-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 20 08:56:39.988869 master-0 kubenswrapper[27820]: I0320 08:56:39.988783 27820 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/56f17a00-2a28-4406-84ed-40a2a5eecd15-console-config\") on node \"master-0\" DevicePath \"\"" Mar 20 08:56:39.988869 master-0 kubenswrapper[27820]: I0320 08:56:39.988810 27820 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/56f17a00-2a28-4406-84ed-40a2a5eecd15-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 20 08:56:39.989374 master-0 kubenswrapper[27820]: I0320 08:56:39.989255 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56f17a00-2a28-4406-84ed-40a2a5eecd15-kube-api-access-rm64s" (OuterVolumeSpecName: "kube-api-access-rm64s") pod 
"56f17a00-2a28-4406-84ed-40a2a5eecd15" (UID: "56f17a00-2a28-4406-84ed-40a2a5eecd15"). InnerVolumeSpecName "kube-api-access-rm64s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:56:39.991598 master-0 kubenswrapper[27820]: I0320 08:56:39.991545 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56f17a00-2a28-4406-84ed-40a2a5eecd15-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "56f17a00-2a28-4406-84ed-40a2a5eecd15" (UID: "56f17a00-2a28-4406-84ed-40a2a5eecd15"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:56:39.992132 master-0 kubenswrapper[27820]: I0320 08:56:39.992054 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56f17a00-2a28-4406-84ed-40a2a5eecd15-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "56f17a00-2a28-4406-84ed-40a2a5eecd15" (UID: "56f17a00-2a28-4406-84ed-40a2a5eecd15"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:56:40.086624 master-0 kubenswrapper[27820]: I0320 08:56:40.086554 27820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36f4a012744c6465102d09cc67ac63e6" path="/var/lib/kubelet/pods/36f4a012744c6465102d09cc67ac63e6/volumes"
Mar 20 08:56:40.090734 master-0 kubenswrapper[27820]: I0320 08:56:40.090686 27820 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/56f17a00-2a28-4406-84ed-40a2a5eecd15-console-oauth-config\") on node \"master-0\" DevicePath \"\""
Mar 20 08:56:40.090734 master-0 kubenswrapper[27820]: I0320 08:56:40.090721 27820 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/56f17a00-2a28-4406-84ed-40a2a5eecd15-console-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 20 08:56:40.090734 master-0 kubenswrapper[27820]: I0320 08:56:40.090731 27820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rm64s\" (UniqueName: \"kubernetes.io/projected/56f17a00-2a28-4406-84ed-40a2a5eecd15-kube-api-access-rm64s\") on node \"master-0\" DevicePath \"\""
Mar 20 08:56:40.163326 master-0 kubenswrapper[27820]: I0320 08:56:40.161683 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7467fcc69-2tx6g_56f17a00-2a28-4406-84ed-40a2a5eecd15/console/0.log"
Mar 20 08:56:40.163326 master-0 kubenswrapper[27820]: I0320 08:56:40.161783 27820 generic.go:334] "Generic (PLEG): container finished" podID="56f17a00-2a28-4406-84ed-40a2a5eecd15" containerID="3405d298d7f2398e075bfa245927865764216be9d163c0ef4a5d6230b5504b28" exitCode=2
Mar 20 08:56:40.163326 master-0 kubenswrapper[27820]: I0320 08:56:40.161867 27820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7467fcc69-2tx6g"
Mar 20 08:56:40.163326 master-0 kubenswrapper[27820]: I0320 08:56:40.161908 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7467fcc69-2tx6g" event={"ID":"56f17a00-2a28-4406-84ed-40a2a5eecd15","Type":"ContainerDied","Data":"3405d298d7f2398e075bfa245927865764216be9d163c0ef4a5d6230b5504b28"}
Mar 20 08:56:40.163326 master-0 kubenswrapper[27820]: I0320 08:56:40.161951 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7467fcc69-2tx6g" event={"ID":"56f17a00-2a28-4406-84ed-40a2a5eecd15","Type":"ContainerDied","Data":"b192616324d89671cc76043aa3c91511b9eaea6bd421ac5d8924f945e7a0fbed"}
Mar 20 08:56:40.163326 master-0 kubenswrapper[27820]: I0320 08:56:40.161981 27820 scope.go:117] "RemoveContainer" containerID="3405d298d7f2398e075bfa245927865764216be9d163c0ef4a5d6230b5504b28"
Mar 20 08:56:40.181347 master-0 kubenswrapper[27820]: I0320 08:56:40.181288 27820 scope.go:117] "RemoveContainer" containerID="3405d298d7f2398e075bfa245927865764216be9d163c0ef4a5d6230b5504b28"
Mar 20 08:56:40.181797 master-0 kubenswrapper[27820]: E0320 08:56:40.181752 27820 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3405d298d7f2398e075bfa245927865764216be9d163c0ef4a5d6230b5504b28\": container with ID starting with 3405d298d7f2398e075bfa245927865764216be9d163c0ef4a5d6230b5504b28 not found: ID does not exist" containerID="3405d298d7f2398e075bfa245927865764216be9d163c0ef4a5d6230b5504b28"
Mar 20 08:56:40.181868 master-0 kubenswrapper[27820]: I0320 08:56:40.181791 27820 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3405d298d7f2398e075bfa245927865764216be9d163c0ef4a5d6230b5504b28"} err="failed to get container status \"3405d298d7f2398e075bfa245927865764216be9d163c0ef4a5d6230b5504b28\": rpc error: code = NotFound desc = could not find container \"3405d298d7f2398e075bfa245927865764216be9d163c0ef4a5d6230b5504b28\": container with ID starting with 3405d298d7f2398e075bfa245927865764216be9d163c0ef4a5d6230b5504b28 not found: ID does not exist"
Mar 20 08:56:40.524229 master-0 kubenswrapper[27820]: I0320 08:56:40.524139 27820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 20 08:56:40.599647 master-0 kubenswrapper[27820]: I0320 08:56:40.599574 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e7c15c64-0760-4f92-93f4-294b46732974-kubelet-dir\") pod \"e7c15c64-0760-4f92-93f4-294b46732974\" (UID: \"e7c15c64-0760-4f92-93f4-294b46732974\") "
Mar 20 08:56:40.599871 master-0 kubenswrapper[27820]: I0320 08:56:40.599744 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e7c15c64-0760-4f92-93f4-294b46732974-var-lock\") pod \"e7c15c64-0760-4f92-93f4-294b46732974\" (UID: \"e7c15c64-0760-4f92-93f4-294b46732974\") "
Mar 20 08:56:40.599871 master-0 kubenswrapper[27820]: I0320 08:56:40.599784 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7c15c64-0760-4f92-93f4-294b46732974-kube-api-access\") pod \"e7c15c64-0760-4f92-93f4-294b46732974\" (UID: \"e7c15c64-0760-4f92-93f4-294b46732974\") "
Mar 20 08:56:40.599871 master-0 kubenswrapper[27820]: I0320 08:56:40.599823 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e7c15c64-0760-4f92-93f4-294b46732974-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "e7c15c64-0760-4f92-93f4-294b46732974" (UID: "e7c15c64-0760-4f92-93f4-294b46732974"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 20 08:56:40.600015 master-0 kubenswrapper[27820]: I0320 08:56:40.599889 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e7c15c64-0760-4f92-93f4-294b46732974-var-lock" (OuterVolumeSpecName: "var-lock") pod "e7c15c64-0760-4f92-93f4-294b46732974" (UID: "e7c15c64-0760-4f92-93f4-294b46732974"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 20 08:56:40.600224 master-0 kubenswrapper[27820]: I0320 08:56:40.600188 27820 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e7c15c64-0760-4f92-93f4-294b46732974-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 20 08:56:40.600224 master-0 kubenswrapper[27820]: I0320 08:56:40.600211 27820 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/e7c15c64-0760-4f92-93f4-294b46732974-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 20 08:56:40.603846 master-0 kubenswrapper[27820]: I0320 08:56:40.603804 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7c15c64-0760-4f92-93f4-294b46732974-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7c15c64-0760-4f92-93f4-294b46732974" (UID: "e7c15c64-0760-4f92-93f4-294b46732974"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:56:40.702483 master-0 kubenswrapper[27820]: I0320 08:56:40.702408 27820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7c15c64-0760-4f92-93f4-294b46732974-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 20 08:56:41.172867 master-0 kubenswrapper[27820]: I0320 08:56:41.172746 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"e7c15c64-0760-4f92-93f4-294b46732974","Type":"ContainerDied","Data":"418a236b721a2bdab8178fd1526979bf0b35e534fcfd56dffbd6c34b31165aaf"}
Mar 20 08:56:41.172867 master-0 kubenswrapper[27820]: I0320 08:56:41.172853 27820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="418a236b721a2bdab8178fd1526979bf0b35e534fcfd56dffbd6c34b31165aaf"
Mar 20 08:56:41.173624 master-0 kubenswrapper[27820]: I0320 08:56:41.172785 27820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0"
Mar 20 08:56:41.327183 master-0 kubenswrapper[27820]: I0320 08:56:41.327111 27820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7467fcc69-2tx6g"]
Mar 20 08:56:41.433987 master-0 kubenswrapper[27820]: I0320 08:56:41.433790 27820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-7467fcc69-2tx6g"]
Mar 20 08:56:42.091020 master-0 kubenswrapper[27820]: I0320 08:56:42.090912 27820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56f17a00-2a28-4406-84ed-40a2a5eecd15" path="/var/lib/kubelet/pods/56f17a00-2a28-4406-84ed-40a2a5eecd15/volumes"
Mar 20 08:56:43.263206 master-0 kubenswrapper[27820]: I0320 08:56:43.263090 27820 generic.go:334] "Generic (PLEG): container finished" podID="04466971-127b-403e-af45-dad97b6e0c87" containerID="b842607819c12e2c961d4115971433a287618e424b9b5e836fdeed85d90e9244" exitCode=0
Mar 20 08:56:43.263206 master-0 kubenswrapper[27820]: I0320 08:56:43.263200 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-55d84d7794-56n4c" event={"ID":"04466971-127b-403e-af45-dad97b6e0c87","Type":"ContainerDied","Data":"b842607819c12e2c961d4115971433a287618e424b9b5e836fdeed85d90e9244"}
Mar 20 08:56:43.586646 master-0 kubenswrapper[27820]: I0320 08:56:43.586344 27820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-55d84d7794-56n4c"
Mar 20 08:56:43.657980 master-0 kubenswrapper[27820]: I0320 08:56:43.657913 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/04466971-127b-403e-af45-dad97b6e0c87-secret-metrics-server-tls\") pod \"04466971-127b-403e-af45-dad97b6e0c87\" (UID: \"04466971-127b-403e-af45-dad97b6e0c87\") "
Mar 20 08:56:43.665606 master-0 kubenswrapper[27820]: I0320 08:56:43.665535 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04466971-127b-403e-af45-dad97b6e0c87-secret-metrics-server-tls" (OuterVolumeSpecName: "secret-metrics-server-tls") pod "04466971-127b-403e-af45-dad97b6e0c87" (UID: "04466971-127b-403e-af45-dad97b6e0c87"). InnerVolumeSpecName "secret-metrics-server-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:56:43.758969 master-0 kubenswrapper[27820]: I0320 08:56:43.758921 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/04466971-127b-403e-af45-dad97b6e0c87-metrics-server-audit-profiles\") pod \"04466971-127b-403e-af45-dad97b6e0c87\" (UID: \"04466971-127b-403e-af45-dad97b6e0c87\") "
Mar 20 08:56:43.759338 master-0 kubenswrapper[27820]: I0320 08:56:43.759315 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04466971-127b-403e-af45-dad97b6e0c87-configmap-kubelet-serving-ca-bundle\") pod \"04466971-127b-403e-af45-dad97b6e0c87\" (UID: \"04466971-127b-403e-af45-dad97b6e0c87\") "
Mar 20 08:56:43.759512 master-0 kubenswrapper[27820]: I0320 08:56:43.759494 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/04466971-127b-403e-af45-dad97b6e0c87-audit-log\") pod \"04466971-127b-403e-af45-dad97b6e0c87\" (UID: \"04466971-127b-403e-af45-dad97b6e0c87\") "
Mar 20 08:56:43.759676 master-0 kubenswrapper[27820]: I0320 08:56:43.759655 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wkh2f\" (UniqueName: \"kubernetes.io/projected/04466971-127b-403e-af45-dad97b6e0c87-kube-api-access-wkh2f\") pod \"04466971-127b-403e-af45-dad97b6e0c87\" (UID: \"04466971-127b-403e-af45-dad97b6e0c87\") "
Mar 20 08:56:43.759796 master-0 kubenswrapper[27820]: I0320 08:56:43.759779 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04466971-127b-403e-af45-dad97b6e0c87-client-ca-bundle\") pod \"04466971-127b-403e-af45-dad97b6e0c87\" (UID: \"04466971-127b-403e-af45-dad97b6e0c87\") "
Mar 20 08:56:43.759909 master-0 kubenswrapper[27820]: I0320 08:56:43.759570 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04466971-127b-403e-af45-dad97b6e0c87-metrics-server-audit-profiles" (OuterVolumeSpecName: "metrics-server-audit-profiles") pod "04466971-127b-403e-af45-dad97b6e0c87" (UID: "04466971-127b-403e-af45-dad97b6e0c87"). InnerVolumeSpecName "metrics-server-audit-profiles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 08:56:43.759993 master-0 kubenswrapper[27820]: I0320 08:56:43.759728 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04466971-127b-403e-af45-dad97b6e0c87-configmap-kubelet-serving-ca-bundle" (OuterVolumeSpecName: "configmap-kubelet-serving-ca-bundle") pod "04466971-127b-403e-af45-dad97b6e0c87" (UID: "04466971-127b-403e-af45-dad97b6e0c87"). InnerVolumeSpecName "configmap-kubelet-serving-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 08:56:43.759993 master-0 kubenswrapper[27820]: I0320 08:56:43.759856 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04466971-127b-403e-af45-dad97b6e0c87-audit-log" (OuterVolumeSpecName: "audit-log") pod "04466971-127b-403e-af45-dad97b6e0c87" (UID: "04466971-127b-403e-af45-dad97b6e0c87"). InnerVolumeSpecName "audit-log". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 08:56:43.760222 master-0 kubenswrapper[27820]: I0320 08:56:43.760194 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/04466971-127b-403e-af45-dad97b6e0c87-secret-metrics-client-certs\") pod \"04466971-127b-403e-af45-dad97b6e0c87\" (UID: \"04466971-127b-403e-af45-dad97b6e0c87\") "
Mar 20 08:56:43.760751 master-0 kubenswrapper[27820]: I0320 08:56:43.760721 27820 reconciler_common.go:293] "Volume detached for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/04466971-127b-403e-af45-dad97b6e0c87-audit-log\") on node \"master-0\" DevicePath \"\""
Mar 20 08:56:43.760902 master-0 kubenswrapper[27820]: I0320 08:56:43.760877 27820 reconciler_common.go:293] "Volume detached for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/04466971-127b-403e-af45-dad97b6e0c87-secret-metrics-server-tls\") on node \"master-0\" DevicePath \"\""
Mar 20 08:56:43.761036 master-0 kubenswrapper[27820]: I0320 08:56:43.761015 27820 reconciler_common.go:293] "Volume detached for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/04466971-127b-403e-af45-dad97b6e0c87-metrics-server-audit-profiles\") on node \"master-0\" DevicePath \"\""
Mar 20 08:56:43.761165 master-0 kubenswrapper[27820]: I0320 08:56:43.761143 27820 reconciler_common.go:293] "Volume detached for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04466971-127b-403e-af45-dad97b6e0c87-configmap-kubelet-serving-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 20 08:56:43.763041 master-0 kubenswrapper[27820]: I0320 08:56:43.762992 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04466971-127b-403e-af45-dad97b6e0c87-kube-api-access-wkh2f" (OuterVolumeSpecName: "kube-api-access-wkh2f") pod "04466971-127b-403e-af45-dad97b6e0c87" (UID: "04466971-127b-403e-af45-dad97b6e0c87"). InnerVolumeSpecName "kube-api-access-wkh2f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:56:43.763309 master-0 kubenswrapper[27820]: I0320 08:56:43.763256 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04466971-127b-403e-af45-dad97b6e0c87-secret-metrics-client-certs" (OuterVolumeSpecName: "secret-metrics-client-certs") pod "04466971-127b-403e-af45-dad97b6e0c87" (UID: "04466971-127b-403e-af45-dad97b6e0c87"). InnerVolumeSpecName "secret-metrics-client-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:56:43.764035 master-0 kubenswrapper[27820]: I0320 08:56:43.763998 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04466971-127b-403e-af45-dad97b6e0c87-client-ca-bundle" (OuterVolumeSpecName: "client-ca-bundle") pod "04466971-127b-403e-af45-dad97b6e0c87" (UID: "04466971-127b-403e-af45-dad97b6e0c87"). InnerVolumeSpecName "client-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:56:43.863169 master-0 kubenswrapper[27820]: I0320 08:56:43.863085 27820 reconciler_common.go:293] "Volume detached for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/04466971-127b-403e-af45-dad97b6e0c87-secret-metrics-client-certs\") on node \"master-0\" DevicePath \"\""
Mar 20 08:56:43.863169 master-0 kubenswrapper[27820]: I0320 08:56:43.863172 27820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wkh2f\" (UniqueName: \"kubernetes.io/projected/04466971-127b-403e-af45-dad97b6e0c87-kube-api-access-wkh2f\") on node \"master-0\" DevicePath \"\""
Mar 20 08:56:43.863530 master-0 kubenswrapper[27820]: I0320 08:56:43.863190 27820 reconciler_common.go:293] "Volume detached for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04466971-127b-403e-af45-dad97b6e0c87-client-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 20 08:56:44.273764 master-0 kubenswrapper[27820]: I0320 08:56:44.273671 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-55d84d7794-56n4c" event={"ID":"04466971-127b-403e-af45-dad97b6e0c87","Type":"ContainerDied","Data":"46c99f0233d1af208b38b52f2ff5b680b12b4851bb3db1577a37ab4de1879e97"}
Mar 20 08:56:44.273764 master-0 kubenswrapper[27820]: I0320 08:56:44.273743 27820 scope.go:117] "RemoveContainer" containerID="b842607819c12e2c961d4115971433a287618e424b9b5e836fdeed85d90e9244"
Mar 20 08:56:44.274472 master-0 kubenswrapper[27820]: I0320 08:56:44.273858 27820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-55d84d7794-56n4c"
Mar 20 08:56:44.304907 master-0 kubenswrapper[27820]: I0320 08:56:44.304835 27820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/metrics-server-55d84d7794-56n4c"]
Mar 20 08:56:44.310474 master-0 kubenswrapper[27820]: I0320 08:56:44.310395 27820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-monitoring/metrics-server-55d84d7794-56n4c"]
Mar 20 08:56:46.083561 master-0 kubenswrapper[27820]: I0320 08:56:46.083473 27820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04466971-127b-403e-af45-dad97b6e0c87" path="/var/lib/kubelet/pods/04466971-127b-403e-af45-dad97b6e0c87/volumes"
Mar 20 08:56:52.075043 master-0 kubenswrapper[27820]: I0320 08:56:52.074849 27820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 20 08:56:52.105782 master-0 kubenswrapper[27820]: I0320 08:56:52.105724 27820 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="ca47a902-86be-42ce-84c8-13c979619ef0"
Mar 20 08:56:52.105782 master-0 kubenswrapper[27820]: I0320 08:56:52.105768 27820 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="ca47a902-86be-42ce-84c8-13c979619ef0"
Mar 20 08:56:53.497974 master-0 kubenswrapper[27820]: I0320 08:56:53.497893 27820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Mar 20 08:56:53.752588 master-0 kubenswrapper[27820]: I0320 08:56:53.752414 27820 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 20 08:56:53.761727 master-0 kubenswrapper[27820]: I0320 08:56:53.761621 27820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Mar 20 08:56:53.937369 master-0 kubenswrapper[27820]: I0320 08:56:53.937154 27820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 20 08:56:53.941372 master-0 kubenswrapper[27820]: I0320 08:56:53.941279 27820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Mar 20 08:56:54.378821 master-0 kubenswrapper[27820]: I0320 08:56:54.378756 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"81441e014342eafdc07cc934660f5a5b","Type":"ContainerStarted","Data":"901d2689af92f6ab00eb04194d9378f00e9e9425067a7fad8b6f538faff05206"}
Mar 20 08:56:54.378821 master-0 kubenswrapper[27820]: I0320 08:56:54.378822 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"81441e014342eafdc07cc934660f5a5b","Type":"ContainerStarted","Data":"51ea0e28f38be94f8fb90947652d3567372be904b97840d7e0a22d6eb603425e"}
Mar 20 08:56:55.392552 master-0 kubenswrapper[27820]: I0320 08:56:55.389546 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"81441e014342eafdc07cc934660f5a5b","Type":"ContainerStarted","Data":"b5f2cc6795ad90b063c4051318b678d00f8d7e94bfaef94bd9486835d2d5b3de"}
Mar 20 08:56:55.392552 master-0 kubenswrapper[27820]: I0320 08:56:55.389595 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"81441e014342eafdc07cc934660f5a5b","Type":"ContainerStarted","Data":"40a7e473503d3802a9e08e822d1caac8e10c07a3b3a5f081fe5ff1c2d66a82f9"}
Mar 20 08:56:56.397854 master-0 kubenswrapper[27820]: I0320 08:56:56.397794 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"81441e014342eafdc07cc934660f5a5b","Type":"ContainerStarted","Data":"70f9448c9b8b9610c39df70159144e313978dcb5943bc51734e1cb65ec299c22"}
Mar 20 08:56:56.554082 master-0 kubenswrapper[27820]: I0320 08:56:56.553979 27820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=3.553956775 podStartE2EDuration="3.553956775s" podCreationTimestamp="2026-03-20 08:56:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:56:56.544047756 +0000 UTC m=+426.639256910" watchObservedRunningTime="2026-03-20 08:56:56.553956775 +0000 UTC m=+426.649165939"
Mar 20 08:57:03.937474 master-0 kubenswrapper[27820]: I0320 08:57:03.937387 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 20 08:57:03.937474 master-0 kubenswrapper[27820]: I0320 08:57:03.937475 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 20 08:57:03.938628 master-0 kubenswrapper[27820]: I0320 08:57:03.937499 27820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 20 08:57:03.938628 master-0 kubenswrapper[27820]: I0320 08:57:03.937517 27820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 20 08:57:03.945019 master-0 kubenswrapper[27820]: I0320 08:57:03.944963 27820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 20 08:57:03.947406 master-0 kubenswrapper[27820]: I0320 08:57:03.947069 27820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 20 08:57:04.466730 master-0 kubenswrapper[27820]: I0320 08:57:04.466678 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 20 08:57:04.466975 master-0 kubenswrapper[27820]: I0320 08:57:04.466923 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 20 08:57:14.649285 master-0 kubenswrapper[27820]: I0320 08:57:14.648903 27820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1tprb2"]
Mar 20 08:57:14.649285 master-0 kubenswrapper[27820]: E0320 08:57:14.649187 27820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7c15c64-0760-4f92-93f4-294b46732974" containerName="installer"
Mar 20 08:57:14.649285 master-0 kubenswrapper[27820]: I0320 08:57:14.649198 27820 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7c15c64-0760-4f92-93f4-294b46732974" containerName="installer"
Mar 20 08:57:14.649285 master-0 kubenswrapper[27820]: E0320 08:57:14.649219 27820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56f17a00-2a28-4406-84ed-40a2a5eecd15" containerName="console"
Mar 20 08:57:14.649285 master-0 kubenswrapper[27820]: I0320 08:57:14.649225 27820 state_mem.go:107] "Deleted CPUSet assignment" podUID="56f17a00-2a28-4406-84ed-40a2a5eecd15" containerName="console"
Mar 20 08:57:14.649285 master-0 kubenswrapper[27820]: E0320 08:57:14.649248 27820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04466971-127b-403e-af45-dad97b6e0c87" containerName="metrics-server"
Mar 20 08:57:14.649285 master-0 kubenswrapper[27820]: I0320 08:57:14.649254 27820 state_mem.go:107] "Deleted CPUSet assignment" podUID="04466971-127b-403e-af45-dad97b6e0c87" containerName="metrics-server"
Mar 20 08:57:14.649990 master-0 kubenswrapper[27820]: I0320 08:57:14.649384 27820 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7c15c64-0760-4f92-93f4-294b46732974" containerName="installer"
Mar 20 08:57:14.649990 master-0 kubenswrapper[27820]: I0320 08:57:14.649416 27820 memory_manager.go:354] "RemoveStaleState removing state" podUID="04466971-127b-403e-af45-dad97b6e0c87" containerName="metrics-server"
Mar 20 08:57:14.649990 master-0 kubenswrapper[27820]: I0320 08:57:14.649430 27820 memory_manager.go:354] "RemoveStaleState removing state" podUID="56f17a00-2a28-4406-84ed-40a2a5eecd15" containerName="console"
Mar 20 08:57:14.653287 master-0 kubenswrapper[27820]: I0320 08:57:14.651537 27820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1tprb2"
Mar 20 08:57:14.661288 master-0 kubenswrapper[27820]: I0320 08:57:14.656644 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-6t7cj"
Mar 20 08:57:14.669560 master-0 kubenswrapper[27820]: I0320 08:57:14.665547 27820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726s5wfx"]
Mar 20 08:57:14.669560 master-0 kubenswrapper[27820]: I0320 08:57:14.666925 27820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726s5wfx"
Mar 20 08:57:14.690250 master-0 kubenswrapper[27820]: I0320 08:57:14.675944 27820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8745rb6g"]
Mar 20 08:57:14.690250 master-0 kubenswrapper[27820]: I0320 08:57:14.677474 27820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8745rb6g"
Mar 20 08:57:14.690250 master-0 kubenswrapper[27820]: I0320 08:57:14.679774 27820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gzkc5"]
Mar 20 08:57:14.690250 master-0 kubenswrapper[27820]: I0320 08:57:14.681482 27820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gzkc5"
Mar 20 08:57:14.690250 master-0 kubenswrapper[27820]: I0320 08:57:14.688759 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8745rb6g"]
Mar 20 08:57:14.701444 master-0 kubenswrapper[27820]: I0320 08:57:14.699247 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gzkc5"]
Mar 20 08:57:14.718101 master-0 kubenswrapper[27820]: I0320 08:57:14.717608 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2f6v7\" (UniqueName: \"kubernetes.io/projected/06e0c10c-0597-4853-81b3-e99c4b1866fd-kube-api-access-2f6v7\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726s5wfx\" (UID: \"06e0c10c-0597-4853-81b3-e99c4b1866fd\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726s5wfx"
Mar 20 08:57:14.718101 master-0 kubenswrapper[27820]: I0320 08:57:14.717664 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfx5j\" (UniqueName: \"kubernetes.io/projected/b213eb4c-943f-4879-8fe5-9b1e2e2d6a0f-kube-api-access-lfx5j\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8745rb6g\" (UID: \"b213eb4c-943f-4879-8fe5-9b1e2e2d6a0f\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8745rb6g"
Mar 20 08:57:14.718101 master-0 kubenswrapper[27820]: I0320 08:57:14.717696 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cdbca713-b71e-4336-b696-2cff1689bd10-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1tprb2\" (UID: \"cdbca713-b71e-4336-b696-2cff1689bd10\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1tprb2"
Mar 20 08:57:14.718101 master-0 kubenswrapper[27820]: I0320 08:57:14.717729 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b213eb4c-943f-4879-8fe5-9b1e2e2d6a0f-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8745rb6g\" (UID: \"b213eb4c-943f-4879-8fe5-9b1e2e2d6a0f\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8745rb6g"
Mar 20 08:57:14.718101 master-0 kubenswrapper[27820]: I0320 08:57:14.717814 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/50af45f7-5df7-41a1-a8be-eec1d938b94b-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gzkc5\" (UID: \"50af45f7-5df7-41a1-a8be-eec1d938b94b\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gzkc5"
Mar 20 08:57:14.718101 master-0 kubenswrapper[27820]: I0320 08:57:14.717876 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cdbca713-b71e-4336-b696-2cff1689bd10-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1tprb2\" (UID: \"cdbca713-b71e-4336-b696-2cff1689bd10\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1tprb2"
Mar 20 08:57:14.718101 master-0 kubenswrapper[27820]: I0320 08:57:14.717903 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txjjd\" (UniqueName: \"kubernetes.io/projected/50af45f7-5df7-41a1-a8be-eec1d938b94b-kube-api-access-txjjd\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gzkc5\" (UID: \"50af45f7-5df7-41a1-a8be-eec1d938b94b\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gzkc5"
Mar 20 08:57:14.718101 master-0 kubenswrapper[27820]: I0320 08:57:14.717926 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/50af45f7-5df7-41a1-a8be-eec1d938b94b-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gzkc5\" (UID: \"50af45f7-5df7-41a1-a8be-eec1d938b94b\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gzkc5"
Mar 20 08:57:14.718101 master-0 kubenswrapper[27820]: I0320 08:57:14.717955 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/06e0c10c-0597-4853-81b3-e99c4b1866fd-util\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726s5wfx\" (UID: \"06e0c10c-0597-4853-81b3-e99c4b1866fd\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726s5wfx"
Mar 20 08:57:14.718101 master-0 kubenswrapper[27820]: I0320 08:57:14.717976 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nbr4\" (UniqueName: \"kubernetes.io/projected/cdbca713-b71e-4336-b696-2cff1689bd10-kube-api-access-8nbr4\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1tprb2\" (UID: \"cdbca713-b71e-4336-b696-2cff1689bd10\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1tprb2"
Mar 20 08:57:14.718101 master-0 kubenswrapper[27820]: I0320 08:57:14.717998 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/06e0c10c-0597-4853-81b3-e99c4b1866fd-bundle\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726s5wfx\" (UID: \"06e0c10c-0597-4853-81b3-e99c4b1866fd\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726s5wfx"
Mar 20 08:57:14.718101 master-0 kubenswrapper[27820]: I0320 08:57:14.718048 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b213eb4c-943f-4879-8fe5-9b1e2e2d6a0f-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8745rb6g\" (UID: \"b213eb4c-943f-4879-8fe5-9b1e2e2d6a0f\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8745rb6g"
Mar 20 08:57:14.744329 master-0 kubenswrapper[27820]: I0320 08:57:14.742398 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1tprb2"]
Mar 20 08:57:14.815661 master-0 kubenswrapper[27820]: I0320 08:57:14.813385 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726s5wfx"]
Mar 20 08:57:14.822443 master-0 kubenswrapper[27820]: I0320 08:57:14.822246 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/06e0c10c-0597-4853-81b3-e99c4b1866fd-util\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726s5wfx\" (UID: \"06e0c10c-0597-4853-81b3-e99c4b1866fd\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726s5wfx"
Mar 20 08:57:14.822443 master-0 kubenswrapper[27820]: I0320 08:57:14.822313 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8nbr4\" (UniqueName: \"kubernetes.io/projected/cdbca713-b71e-4336-b696-2cff1689bd10-kube-api-access-8nbr4\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1tprb2\" (UID: \"cdbca713-b71e-4336-b696-2cff1689bd10\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1tprb2"
Mar 20 08:57:14.822443 master-0 kubenswrapper[27820]: I0320 08:57:14.822339 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/06e0c10c-0597-4853-81b3-e99c4b1866fd-bundle\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726s5wfx\" (UID: \"06e0c10c-0597-4853-81b3-e99c4b1866fd\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726s5wfx"
Mar 20 08:57:14.823670 master-0 kubenswrapper[27820]: I0320 08:57:14.823585 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b213eb4c-943f-4879-8fe5-9b1e2e2d6a0f-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8745rb6g\" (UID: \"b213eb4c-943f-4879-8fe5-9b1e2e2d6a0f\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8745rb6g"
Mar 20
08:57:14.823670 master-0 kubenswrapper[27820]: I0320 08:57:14.823689 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2f6v7\" (UniqueName: \"kubernetes.io/projected/06e0c10c-0597-4853-81b3-e99c4b1866fd-kube-api-access-2f6v7\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726s5wfx\" (UID: \"06e0c10c-0597-4853-81b3-e99c4b1866fd\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726s5wfx" Mar 20 08:57:14.824025 master-0 kubenswrapper[27820]: I0320 08:57:14.823716 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfx5j\" (UniqueName: \"kubernetes.io/projected/b213eb4c-943f-4879-8fe5-9b1e2e2d6a0f-kube-api-access-lfx5j\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8745rb6g\" (UID: \"b213eb4c-943f-4879-8fe5-9b1e2e2d6a0f\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8745rb6g" Mar 20 08:57:14.824025 master-0 kubenswrapper[27820]: I0320 08:57:14.823737 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cdbca713-b71e-4336-b696-2cff1689bd10-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1tprb2\" (UID: \"cdbca713-b71e-4336-b696-2cff1689bd10\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1tprb2" Mar 20 08:57:14.824025 master-0 kubenswrapper[27820]: I0320 08:57:14.823770 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b213eb4c-943f-4879-8fe5-9b1e2e2d6a0f-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8745rb6g\" (UID: \"b213eb4c-943f-4879-8fe5-9b1e2e2d6a0f\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8745rb6g" Mar 20 08:57:14.824025 master-0 kubenswrapper[27820]: I0320 
08:57:14.823821 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/50af45f7-5df7-41a1-a8be-eec1d938b94b-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gzkc5\" (UID: \"50af45f7-5df7-41a1-a8be-eec1d938b94b\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gzkc5" Mar 20 08:57:14.824025 master-0 kubenswrapper[27820]: I0320 08:57:14.823883 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cdbca713-b71e-4336-b696-2cff1689bd10-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1tprb2\" (UID: \"cdbca713-b71e-4336-b696-2cff1689bd10\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1tprb2" Mar 20 08:57:14.824025 master-0 kubenswrapper[27820]: I0320 08:57:14.823893 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/06e0c10c-0597-4853-81b3-e99c4b1866fd-bundle\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726s5wfx\" (UID: \"06e0c10c-0597-4853-81b3-e99c4b1866fd\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726s5wfx" Mar 20 08:57:14.824025 master-0 kubenswrapper[27820]: I0320 08:57:14.823907 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txjjd\" (UniqueName: \"kubernetes.io/projected/50af45f7-5df7-41a1-a8be-eec1d938b94b-kube-api-access-txjjd\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gzkc5\" (UID: \"50af45f7-5df7-41a1-a8be-eec1d938b94b\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gzkc5" Mar 20 08:57:14.824025 master-0 kubenswrapper[27820]: I0320 08:57:14.823960 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" 
(UniqueName: \"kubernetes.io/empty-dir/50af45f7-5df7-41a1-a8be-eec1d938b94b-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gzkc5\" (UID: \"50af45f7-5df7-41a1-a8be-eec1d938b94b\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gzkc5" Mar 20 08:57:14.825007 master-0 kubenswrapper[27820]: I0320 08:57:14.823595 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/06e0c10c-0597-4853-81b3-e99c4b1866fd-util\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726s5wfx\" (UID: \"06e0c10c-0597-4853-81b3-e99c4b1866fd\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726s5wfx" Mar 20 08:57:14.825007 master-0 kubenswrapper[27820]: I0320 08:57:14.824099 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b213eb4c-943f-4879-8fe5-9b1e2e2d6a0f-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8745rb6g\" (UID: \"b213eb4c-943f-4879-8fe5-9b1e2e2d6a0f\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8745rb6g" Mar 20 08:57:14.845357 master-0 kubenswrapper[27820]: I0320 08:57:14.838958 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b213eb4c-943f-4879-8fe5-9b1e2e2d6a0f-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8745rb6g\" (UID: \"b213eb4c-943f-4879-8fe5-9b1e2e2d6a0f\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8745rb6g" Mar 20 08:57:14.845357 master-0 kubenswrapper[27820]: I0320 08:57:14.839367 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/50af45f7-5df7-41a1-a8be-eec1d938b94b-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gzkc5\" (UID: 
\"50af45f7-5df7-41a1-a8be-eec1d938b94b\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gzkc5" Mar 20 08:57:14.845357 master-0 kubenswrapper[27820]: I0320 08:57:14.839677 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cdbca713-b71e-4336-b696-2cff1689bd10-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1tprb2\" (UID: \"cdbca713-b71e-4336-b696-2cff1689bd10\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1tprb2" Mar 20 08:57:14.845357 master-0 kubenswrapper[27820]: I0320 08:57:14.839829 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cdbca713-b71e-4336-b696-2cff1689bd10-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1tprb2\" (UID: \"cdbca713-b71e-4336-b696-2cff1689bd10\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1tprb2" Mar 20 08:57:14.845357 master-0 kubenswrapper[27820]: I0320 08:57:14.840121 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/50af45f7-5df7-41a1-a8be-eec1d938b94b-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gzkc5\" (UID: \"50af45f7-5df7-41a1-a8be-eec1d938b94b\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gzkc5" Mar 20 08:57:14.850059 master-0 kubenswrapper[27820]: I0320 08:57:14.850015 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txjjd\" (UniqueName: \"kubernetes.io/projected/50af45f7-5df7-41a1-a8be-eec1d938b94b-kube-api-access-txjjd\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gzkc5\" (UID: \"50af45f7-5df7-41a1-a8be-eec1d938b94b\") " 
pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gzkc5" Mar 20 08:57:14.850179 master-0 kubenswrapper[27820]: I0320 08:57:14.850056 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfx5j\" (UniqueName: \"kubernetes.io/projected/b213eb4c-943f-4879-8fe5-9b1e2e2d6a0f-kube-api-access-lfx5j\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8745rb6g\" (UID: \"b213eb4c-943f-4879-8fe5-9b1e2e2d6a0f\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8745rb6g" Mar 20 08:57:14.853895 master-0 kubenswrapper[27820]: I0320 08:57:14.853821 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nbr4\" (UniqueName: \"kubernetes.io/projected/cdbca713-b71e-4336-b696-2cff1689bd10-kube-api-access-8nbr4\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1tprb2\" (UID: \"cdbca713-b71e-4336-b696-2cff1689bd10\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1tprb2" Mar 20 08:57:14.878088 master-0 kubenswrapper[27820]: I0320 08:57:14.878016 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2f6v7\" (UniqueName: \"kubernetes.io/projected/06e0c10c-0597-4853-81b3-e99c4b1866fd-kube-api-access-2f6v7\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726s5wfx\" (UID: \"06e0c10c-0597-4853-81b3-e99c4b1866fd\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726s5wfx" Mar 20 08:57:14.981584 master-0 kubenswrapper[27820]: I0320 08:57:14.981443 27820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1tprb2" Mar 20 08:57:14.998195 master-0 kubenswrapper[27820]: I0320 08:57:14.997377 27820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726s5wfx" Mar 20 08:57:15.051281 master-0 kubenswrapper[27820]: I0320 08:57:15.051219 27820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8745rb6g" Mar 20 08:57:15.125134 master-0 kubenswrapper[27820]: I0320 08:57:15.124715 27820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gzkc5" Mar 20 08:57:15.431905 master-0 kubenswrapper[27820]: I0320 08:57:15.431840 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1tprb2"] Mar 20 08:57:15.522926 master-0 kubenswrapper[27820]: I0320 08:57:15.521273 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8745rb6g"] Mar 20 08:57:15.529170 master-0 kubenswrapper[27820]: I0320 08:57:15.529119 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726s5wfx"] Mar 20 08:57:15.560671 master-0 kubenswrapper[27820]: I0320 08:57:15.559368 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8745rb6g" event={"ID":"b213eb4c-943f-4879-8fe5-9b1e2e2d6a0f","Type":"ContainerStarted","Data":"1e6eb5f053a375021d06ae44bdba2aeb5cabf8e55957489b91a551873fc701a5"} Mar 20 08:57:15.565474 master-0 kubenswrapper[27820]: I0320 08:57:15.565422 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726s5wfx" 
event={"ID":"06e0c10c-0597-4853-81b3-e99c4b1866fd","Type":"ContainerStarted","Data":"af32aaa6f4395e0d3eaa3915cda1094d66811bb763a7d5cba900136451926ab5"} Mar 20 08:57:15.569192 master-0 kubenswrapper[27820]: I0320 08:57:15.568094 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1tprb2" event={"ID":"cdbca713-b71e-4336-b696-2cff1689bd10","Type":"ContainerStarted","Data":"b458af10b32b85f22bf3978cfe2a5463564f5feedea68dda43cedab13ed6706e"} Mar 20 08:57:15.685586 master-0 kubenswrapper[27820]: I0320 08:57:15.685416 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gzkc5"] Mar 20 08:57:16.576481 master-0 kubenswrapper[27820]: I0320 08:57:16.576290 27820 generic.go:334] "Generic (PLEG): container finished" podID="b213eb4c-943f-4879-8fe5-9b1e2e2d6a0f" containerID="a59af5b5d66687044785b19ff90d2f2de3725679f00eb0128437b800d2d97f80" exitCode=0 Mar 20 08:57:16.576481 master-0 kubenswrapper[27820]: I0320 08:57:16.576367 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8745rb6g" event={"ID":"b213eb4c-943f-4879-8fe5-9b1e2e2d6a0f","Type":"ContainerDied","Data":"a59af5b5d66687044785b19ff90d2f2de3725679f00eb0128437b800d2d97f80"} Mar 20 08:57:16.579380 master-0 kubenswrapper[27820]: I0320 08:57:16.579329 27820 generic.go:334] "Generic (PLEG): container finished" podID="06e0c10c-0597-4853-81b3-e99c4b1866fd" containerID="4dd3aa3e29ba4d3bd02b58a2617729bb7ca61a7a8d4341b20f0c6abf3e59369b" exitCode=0 Mar 20 08:57:16.579597 master-0 kubenswrapper[27820]: I0320 08:57:16.579392 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726s5wfx" 
event={"ID":"06e0c10c-0597-4853-81b3-e99c4b1866fd","Type":"ContainerDied","Data":"4dd3aa3e29ba4d3bd02b58a2617729bb7ca61a7a8d4341b20f0c6abf3e59369b"} Mar 20 08:57:16.583810 master-0 kubenswrapper[27820]: I0320 08:57:16.583741 27820 generic.go:334] "Generic (PLEG): container finished" podID="cdbca713-b71e-4336-b696-2cff1689bd10" containerID="6e5de345e6eefb9116f3055bd2e996f56f54f99cc1bf52cd5bc163be13229d02" exitCode=0 Mar 20 08:57:16.584143 master-0 kubenswrapper[27820]: I0320 08:57:16.583820 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1tprb2" event={"ID":"cdbca713-b71e-4336-b696-2cff1689bd10","Type":"ContainerDied","Data":"6e5de345e6eefb9116f3055bd2e996f56f54f99cc1bf52cd5bc163be13229d02"} Mar 20 08:57:16.587166 master-0 kubenswrapper[27820]: I0320 08:57:16.587121 27820 generic.go:334] "Generic (PLEG): container finished" podID="50af45f7-5df7-41a1-a8be-eec1d938b94b" containerID="7bb1a65c8d3670def97938552d974ef337aa0ca70aa5387bb9ddb79f01fd4b63" exitCode=0 Mar 20 08:57:16.587337 master-0 kubenswrapper[27820]: I0320 08:57:16.587185 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gzkc5" event={"ID":"50af45f7-5df7-41a1-a8be-eec1d938b94b","Type":"ContainerDied","Data":"7bb1a65c8d3670def97938552d974ef337aa0ca70aa5387bb9ddb79f01fd4b63"} Mar 20 08:57:16.587337 master-0 kubenswrapper[27820]: I0320 08:57:16.587214 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gzkc5" event={"ID":"50af45f7-5df7-41a1-a8be-eec1d938b94b","Type":"ContainerStarted","Data":"381f9800c0a9998298101c50cf682535f89d8b2aa6aa5e3895e12404883c7713"} Mar 20 08:57:18.603295 master-0 kubenswrapper[27820]: I0320 08:57:18.603214 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726s5wfx" event={"ID":"06e0c10c-0597-4853-81b3-e99c4b1866fd","Type":"ContainerStarted","Data":"89adfb9b5c79341982500bf1f0dc57817ab9f57b4dba4d05c98fd0fe27ec38a5"} Mar 20 08:57:18.605259 master-0 kubenswrapper[27820]: I0320 08:57:18.605223 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1tprb2" event={"ID":"cdbca713-b71e-4336-b696-2cff1689bd10","Type":"ContainerStarted","Data":"523ebfa855b9e62b5e9898e8ec3f565b3d651b3586d4d0944c69cbcbdcf2b7a7"} Mar 20 08:57:19.617061 master-0 kubenswrapper[27820]: I0320 08:57:19.616984 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8745rb6g" event={"ID":"b213eb4c-943f-4879-8fe5-9b1e2e2d6a0f","Type":"ContainerStarted","Data":"00525a48c040854ce3deb9db5a0399c1c8a401eecef6f99011e4b6de19b92116"} Mar 20 08:57:19.619779 master-0 kubenswrapper[27820]: I0320 08:57:19.619431 27820 generic.go:334] "Generic (PLEG): container finished" podID="06e0c10c-0597-4853-81b3-e99c4b1866fd" containerID="89adfb9b5c79341982500bf1f0dc57817ab9f57b4dba4d05c98fd0fe27ec38a5" exitCode=0 Mar 20 08:57:19.619779 master-0 kubenswrapper[27820]: I0320 08:57:19.619473 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726s5wfx" event={"ID":"06e0c10c-0597-4853-81b3-e99c4b1866fd","Type":"ContainerDied","Data":"89adfb9b5c79341982500bf1f0dc57817ab9f57b4dba4d05c98fd0fe27ec38a5"} Mar 20 08:57:19.626509 master-0 kubenswrapper[27820]: I0320 08:57:19.624169 27820 generic.go:334] "Generic (PLEG): container finished" podID="cdbca713-b71e-4336-b696-2cff1689bd10" containerID="523ebfa855b9e62b5e9898e8ec3f565b3d651b3586d4d0944c69cbcbdcf2b7a7" exitCode=0 Mar 20 08:57:19.626509 master-0 kubenswrapper[27820]: I0320 08:57:19.624280 27820 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1tprb2" event={"ID":"cdbca713-b71e-4336-b696-2cff1689bd10","Type":"ContainerDied","Data":"523ebfa855b9e62b5e9898e8ec3f565b3d651b3586d4d0944c69cbcbdcf2b7a7"} Mar 20 08:57:19.628146 master-0 kubenswrapper[27820]: I0320 08:57:19.628094 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gzkc5" event={"ID":"50af45f7-5df7-41a1-a8be-eec1d938b94b","Type":"ContainerStarted","Data":"5984addabbba7a1c7ffa727c6d466ebad5768a9f003d0828515d7ab02d8a0e7a"} Mar 20 08:57:20.643279 master-0 kubenswrapper[27820]: I0320 08:57:20.643205 27820 generic.go:334] "Generic (PLEG): container finished" podID="cdbca713-b71e-4336-b696-2cff1689bd10" containerID="9dff62c537a468fd0286475cddbb166b655bc8c76d3bf3cefe311956c4a6ab33" exitCode=0 Mar 20 08:57:20.643892 master-0 kubenswrapper[27820]: I0320 08:57:20.643322 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1tprb2" event={"ID":"cdbca713-b71e-4336-b696-2cff1689bd10","Type":"ContainerDied","Data":"9dff62c537a468fd0286475cddbb166b655bc8c76d3bf3cefe311956c4a6ab33"} Mar 20 08:57:20.646389 master-0 kubenswrapper[27820]: I0320 08:57:20.646344 27820 generic.go:334] "Generic (PLEG): container finished" podID="50af45f7-5df7-41a1-a8be-eec1d938b94b" containerID="5984addabbba7a1c7ffa727c6d466ebad5768a9f003d0828515d7ab02d8a0e7a" exitCode=0 Mar 20 08:57:20.646491 master-0 kubenswrapper[27820]: I0320 08:57:20.646468 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gzkc5" event={"ID":"50af45f7-5df7-41a1-a8be-eec1d938b94b","Type":"ContainerDied","Data":"5984addabbba7a1c7ffa727c6d466ebad5768a9f003d0828515d7ab02d8a0e7a"} Mar 20 08:57:20.655426 master-0 
kubenswrapper[27820]: I0320 08:57:20.655360 27820 generic.go:334] "Generic (PLEG): container finished" podID="b213eb4c-943f-4879-8fe5-9b1e2e2d6a0f" containerID="00525a48c040854ce3deb9db5a0399c1c8a401eecef6f99011e4b6de19b92116" exitCode=0 Mar 20 08:57:20.655561 master-0 kubenswrapper[27820]: I0320 08:57:20.655462 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8745rb6g" event={"ID":"b213eb4c-943f-4879-8fe5-9b1e2e2d6a0f","Type":"ContainerDied","Data":"00525a48c040854ce3deb9db5a0399c1c8a401eecef6f99011e4b6de19b92116"} Mar 20 08:57:20.660220 master-0 kubenswrapper[27820]: I0320 08:57:20.660169 27820 generic.go:334] "Generic (PLEG): container finished" podID="06e0c10c-0597-4853-81b3-e99c4b1866fd" containerID="881cb0be7658903b85b1af779349f02d0fd2fe64d6d16de405abad8239b1a776" exitCode=0 Mar 20 08:57:20.660220 master-0 kubenswrapper[27820]: I0320 08:57:20.660217 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726s5wfx" event={"ID":"06e0c10c-0597-4853-81b3-e99c4b1866fd","Type":"ContainerDied","Data":"881cb0be7658903b85b1af779349f02d0fd2fe64d6d16de405abad8239b1a776"} Mar 20 08:57:21.676308 master-0 kubenswrapper[27820]: I0320 08:57:21.676170 27820 generic.go:334] "Generic (PLEG): container finished" podID="b213eb4c-943f-4879-8fe5-9b1e2e2d6a0f" containerID="c16531b214bc60da488a4d531833203715cba9c5794ccb0ac0bf4c409dd8683a" exitCode=0 Mar 20 08:57:21.676308 master-0 kubenswrapper[27820]: I0320 08:57:21.676308 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8745rb6g" event={"ID":"b213eb4c-943f-4879-8fe5-9b1e2e2d6a0f","Type":"ContainerDied","Data":"c16531b214bc60da488a4d531833203715cba9c5794ccb0ac0bf4c409dd8683a"} Mar 20 08:57:21.680622 master-0 kubenswrapper[27820]: I0320 08:57:21.680558 27820 generic.go:334] 
"Generic (PLEG): container finished" podID="50af45f7-5df7-41a1-a8be-eec1d938b94b" containerID="ff9102039420c4f082577fa7ef15c600d09b38b10acf50ed1e39f03bfd601f25" exitCode=0 Mar 20 08:57:21.680803 master-0 kubenswrapper[27820]: I0320 08:57:21.680632 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gzkc5" event={"ID":"50af45f7-5df7-41a1-a8be-eec1d938b94b","Type":"ContainerDied","Data":"ff9102039420c4f082577fa7ef15c600d09b38b10acf50ed1e39f03bfd601f25"} Mar 20 08:57:22.085086 master-0 kubenswrapper[27820]: I0320 08:57:22.085015 27820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726s5wfx" Mar 20 08:57:22.088596 master-0 kubenswrapper[27820]: I0320 08:57:22.088513 27820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1tprb2" Mar 20 08:57:22.169521 master-0 kubenswrapper[27820]: I0320 08:57:22.169415 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cdbca713-b71e-4336-b696-2cff1689bd10-bundle\") pod \"cdbca713-b71e-4336-b696-2cff1689bd10\" (UID: \"cdbca713-b71e-4336-b696-2cff1689bd10\") " Mar 20 08:57:22.169865 master-0 kubenswrapper[27820]: I0320 08:57:22.169583 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2f6v7\" (UniqueName: \"kubernetes.io/projected/06e0c10c-0597-4853-81b3-e99c4b1866fd-kube-api-access-2f6v7\") pod \"06e0c10c-0597-4853-81b3-e99c4b1866fd\" (UID: \"06e0c10c-0597-4853-81b3-e99c4b1866fd\") " Mar 20 08:57:22.169865 master-0 kubenswrapper[27820]: I0320 08:57:22.169691 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/cdbca713-b71e-4336-b696-2cff1689bd10-util\") pod \"cdbca713-b71e-4336-b696-2cff1689bd10\" (UID: \"cdbca713-b71e-4336-b696-2cff1689bd10\") " Mar 20 08:57:22.169865 master-0 kubenswrapper[27820]: I0320 08:57:22.169738 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nbr4\" (UniqueName: \"kubernetes.io/projected/cdbca713-b71e-4336-b696-2cff1689bd10-kube-api-access-8nbr4\") pod \"cdbca713-b71e-4336-b696-2cff1689bd10\" (UID: \"cdbca713-b71e-4336-b696-2cff1689bd10\") " Mar 20 08:57:22.169865 master-0 kubenswrapper[27820]: I0320 08:57:22.169766 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/06e0c10c-0597-4853-81b3-e99c4b1866fd-bundle\") pod \"06e0c10c-0597-4853-81b3-e99c4b1866fd\" (UID: \"06e0c10c-0597-4853-81b3-e99c4b1866fd\") " Mar 20 08:57:22.169865 master-0 kubenswrapper[27820]: I0320 08:57:22.169800 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/06e0c10c-0597-4853-81b3-e99c4b1866fd-util\") pod \"06e0c10c-0597-4853-81b3-e99c4b1866fd\" (UID: \"06e0c10c-0597-4853-81b3-e99c4b1866fd\") " Mar 20 08:57:22.171902 master-0 kubenswrapper[27820]: I0320 08:57:22.171706 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cdbca713-b71e-4336-b696-2cff1689bd10-bundle" (OuterVolumeSpecName: "bundle") pod "cdbca713-b71e-4336-b696-2cff1689bd10" (UID: "cdbca713-b71e-4336-b696-2cff1689bd10"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:57:22.173684 master-0 kubenswrapper[27820]: I0320 08:57:22.173635 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdbca713-b71e-4336-b696-2cff1689bd10-kube-api-access-8nbr4" (OuterVolumeSpecName: "kube-api-access-8nbr4") pod "cdbca713-b71e-4336-b696-2cff1689bd10" (UID: "cdbca713-b71e-4336-b696-2cff1689bd10"). InnerVolumeSpecName "kube-api-access-8nbr4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:57:22.177370 master-0 kubenswrapper[27820]: I0320 08:57:22.177283 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06e0c10c-0597-4853-81b3-e99c4b1866fd-kube-api-access-2f6v7" (OuterVolumeSpecName: "kube-api-access-2f6v7") pod "06e0c10c-0597-4853-81b3-e99c4b1866fd" (UID: "06e0c10c-0597-4853-81b3-e99c4b1866fd"). InnerVolumeSpecName "kube-api-access-2f6v7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:57:22.179489 master-0 kubenswrapper[27820]: I0320 08:57:22.179421 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06e0c10c-0597-4853-81b3-e99c4b1866fd-bundle" (OuterVolumeSpecName: "bundle") pod "06e0c10c-0597-4853-81b3-e99c4b1866fd" (UID: "06e0c10c-0597-4853-81b3-e99c4b1866fd"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:57:22.186160 master-0 kubenswrapper[27820]: I0320 08:57:22.186102 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/06e0c10c-0597-4853-81b3-e99c4b1866fd-util" (OuterVolumeSpecName: "util") pod "06e0c10c-0597-4853-81b3-e99c4b1866fd" (UID: "06e0c10c-0597-4853-81b3-e99c4b1866fd"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:57:22.191111 master-0 kubenswrapper[27820]: I0320 08:57:22.191072 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cdbca713-b71e-4336-b696-2cff1689bd10-util" (OuterVolumeSpecName: "util") pod "cdbca713-b71e-4336-b696-2cff1689bd10" (UID: "cdbca713-b71e-4336-b696-2cff1689bd10"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:57:22.272226 master-0 kubenswrapper[27820]: I0320 08:57:22.272082 27820 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/cdbca713-b71e-4336-b696-2cff1689bd10-util\") on node \"master-0\" DevicePath \"\"" Mar 20 08:57:22.272226 master-0 kubenswrapper[27820]: I0320 08:57:22.272135 27820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8nbr4\" (UniqueName: \"kubernetes.io/projected/cdbca713-b71e-4336-b696-2cff1689bd10-kube-api-access-8nbr4\") on node \"master-0\" DevicePath \"\"" Mar 20 08:57:22.272226 master-0 kubenswrapper[27820]: I0320 08:57:22.272149 27820 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/06e0c10c-0597-4853-81b3-e99c4b1866fd-bundle\") on node \"master-0\" DevicePath \"\"" Mar 20 08:57:22.272226 master-0 kubenswrapper[27820]: I0320 08:57:22.272161 27820 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/06e0c10c-0597-4853-81b3-e99c4b1866fd-util\") on node \"master-0\" DevicePath \"\"" Mar 20 08:57:22.272226 master-0 kubenswrapper[27820]: I0320 08:57:22.272173 27820 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/cdbca713-b71e-4336-b696-2cff1689bd10-bundle\") on node \"master-0\" DevicePath \"\"" Mar 20 08:57:22.272226 master-0 kubenswrapper[27820]: I0320 08:57:22.272185 27820 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-2f6v7\" (UniqueName: \"kubernetes.io/projected/06e0c10c-0597-4853-81b3-e99c4b1866fd-kube-api-access-2f6v7\") on node \"master-0\" DevicePath \"\"" Mar 20 08:57:22.692476 master-0 kubenswrapper[27820]: I0320 08:57:22.692384 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1tprb2" event={"ID":"cdbca713-b71e-4336-b696-2cff1689bd10","Type":"ContainerDied","Data":"b458af10b32b85f22bf3978cfe2a5463564f5feedea68dda43cedab13ed6706e"} Mar 20 08:57:22.692476 master-0 kubenswrapper[27820]: I0320 08:57:22.692468 27820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b458af10b32b85f22bf3978cfe2a5463564f5feedea68dda43cedab13ed6706e" Mar 20 08:57:22.692476 master-0 kubenswrapper[27820]: I0320 08:57:22.692398 27820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1tprb2" Mar 20 08:57:22.698401 master-0 kubenswrapper[27820]: I0320 08:57:22.698338 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726s5wfx" event={"ID":"06e0c10c-0597-4853-81b3-e99c4b1866fd","Type":"ContainerDied","Data":"af32aaa6f4395e0d3eaa3915cda1094d66811bb763a7d5cba900136451926ab5"} Mar 20 08:57:22.698603 master-0 kubenswrapper[27820]: I0320 08:57:22.698409 27820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af32aaa6f4395e0d3eaa3915cda1094d66811bb763a7d5cba900136451926ab5" Mar 20 08:57:22.698603 master-0 kubenswrapper[27820]: I0320 08:57:22.698427 27820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726s5wfx" Mar 20 08:57:23.047347 master-0 kubenswrapper[27820]: I0320 08:57:23.047315 27820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8745rb6g" Mar 20 08:57:23.085961 master-0 kubenswrapper[27820]: I0320 08:57:23.085915 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b213eb4c-943f-4879-8fe5-9b1e2e2d6a0f-util\") pod \"b213eb4c-943f-4879-8fe5-9b1e2e2d6a0f\" (UID: \"b213eb4c-943f-4879-8fe5-9b1e2e2d6a0f\") " Mar 20 08:57:23.086420 master-0 kubenswrapper[27820]: I0320 08:57:23.086099 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b213eb4c-943f-4879-8fe5-9b1e2e2d6a0f-bundle\") pod \"b213eb4c-943f-4879-8fe5-9b1e2e2d6a0f\" (UID: \"b213eb4c-943f-4879-8fe5-9b1e2e2d6a0f\") " Mar 20 08:57:23.086420 master-0 kubenswrapper[27820]: I0320 08:57:23.086144 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfx5j\" (UniqueName: \"kubernetes.io/projected/b213eb4c-943f-4879-8fe5-9b1e2e2d6a0f-kube-api-access-lfx5j\") pod \"b213eb4c-943f-4879-8fe5-9b1e2e2d6a0f\" (UID: \"b213eb4c-943f-4879-8fe5-9b1e2e2d6a0f\") " Mar 20 08:57:23.088291 master-0 kubenswrapper[27820]: I0320 08:57:23.086828 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b213eb4c-943f-4879-8fe5-9b1e2e2d6a0f-bundle" (OuterVolumeSpecName: "bundle") pod "b213eb4c-943f-4879-8fe5-9b1e2e2d6a0f" (UID: "b213eb4c-943f-4879-8fe5-9b1e2e2d6a0f"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:57:23.089197 master-0 kubenswrapper[27820]: I0320 08:57:23.089152 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b213eb4c-943f-4879-8fe5-9b1e2e2d6a0f-kube-api-access-lfx5j" (OuterVolumeSpecName: "kube-api-access-lfx5j") pod "b213eb4c-943f-4879-8fe5-9b1e2e2d6a0f" (UID: "b213eb4c-943f-4879-8fe5-9b1e2e2d6a0f"). InnerVolumeSpecName "kube-api-access-lfx5j". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:57:23.101659 master-0 kubenswrapper[27820]: I0320 08:57:23.101608 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b213eb4c-943f-4879-8fe5-9b1e2e2d6a0f-util" (OuterVolumeSpecName: "util") pod "b213eb4c-943f-4879-8fe5-9b1e2e2d6a0f" (UID: "b213eb4c-943f-4879-8fe5-9b1e2e2d6a0f"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:57:23.134994 master-0 kubenswrapper[27820]: I0320 08:57:23.134956 27820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gzkc5" Mar 20 08:57:23.187464 master-0 kubenswrapper[27820]: I0320 08:57:23.187401 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txjjd\" (UniqueName: \"kubernetes.io/projected/50af45f7-5df7-41a1-a8be-eec1d938b94b-kube-api-access-txjjd\") pod \"50af45f7-5df7-41a1-a8be-eec1d938b94b\" (UID: \"50af45f7-5df7-41a1-a8be-eec1d938b94b\") " Mar 20 08:57:23.187713 master-0 kubenswrapper[27820]: I0320 08:57:23.187595 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/50af45f7-5df7-41a1-a8be-eec1d938b94b-bundle\") pod \"50af45f7-5df7-41a1-a8be-eec1d938b94b\" (UID: \"50af45f7-5df7-41a1-a8be-eec1d938b94b\") " Mar 20 08:57:23.187754 master-0 kubenswrapper[27820]: I0320 08:57:23.187720 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/50af45f7-5df7-41a1-a8be-eec1d938b94b-util\") pod \"50af45f7-5df7-41a1-a8be-eec1d938b94b\" (UID: \"50af45f7-5df7-41a1-a8be-eec1d938b94b\") " Mar 20 08:57:23.188111 master-0 kubenswrapper[27820]: I0320 08:57:23.188090 27820 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b213eb4c-943f-4879-8fe5-9b1e2e2d6a0f-util\") on node \"master-0\" DevicePath \"\"" Mar 20 08:57:23.188171 master-0 kubenswrapper[27820]: I0320 08:57:23.188111 27820 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b213eb4c-943f-4879-8fe5-9b1e2e2d6a0f-bundle\") on node \"master-0\" DevicePath \"\"" Mar 20 08:57:23.188171 master-0 kubenswrapper[27820]: I0320 08:57:23.188126 27820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lfx5j\" (UniqueName: \"kubernetes.io/projected/b213eb4c-943f-4879-8fe5-9b1e2e2d6a0f-kube-api-access-lfx5j\") 
on node \"master-0\" DevicePath \"\"" Mar 20 08:57:23.188858 master-0 kubenswrapper[27820]: I0320 08:57:23.188833 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50af45f7-5df7-41a1-a8be-eec1d938b94b-bundle" (OuterVolumeSpecName: "bundle") pod "50af45f7-5df7-41a1-a8be-eec1d938b94b" (UID: "50af45f7-5df7-41a1-a8be-eec1d938b94b"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:57:23.190656 master-0 kubenswrapper[27820]: I0320 08:57:23.190358 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50af45f7-5df7-41a1-a8be-eec1d938b94b-kube-api-access-txjjd" (OuterVolumeSpecName: "kube-api-access-txjjd") pod "50af45f7-5df7-41a1-a8be-eec1d938b94b" (UID: "50af45f7-5df7-41a1-a8be-eec1d938b94b"). InnerVolumeSpecName "kube-api-access-txjjd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:57:23.195970 master-0 kubenswrapper[27820]: I0320 08:57:23.195934 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50af45f7-5df7-41a1-a8be-eec1d938b94b-util" (OuterVolumeSpecName: "util") pod "50af45f7-5df7-41a1-a8be-eec1d938b94b" (UID: "50af45f7-5df7-41a1-a8be-eec1d938b94b"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:57:23.289671 master-0 kubenswrapper[27820]: I0320 08:57:23.289492 27820 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/50af45f7-5df7-41a1-a8be-eec1d938b94b-util\") on node \"master-0\" DevicePath \"\"" Mar 20 08:57:23.289671 master-0 kubenswrapper[27820]: I0320 08:57:23.289570 27820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txjjd\" (UniqueName: \"kubernetes.io/projected/50af45f7-5df7-41a1-a8be-eec1d938b94b-kube-api-access-txjjd\") on node \"master-0\" DevicePath \"\"" Mar 20 08:57:23.289671 master-0 kubenswrapper[27820]: I0320 08:57:23.289586 27820 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/50af45f7-5df7-41a1-a8be-eec1d938b94b-bundle\") on node \"master-0\" DevicePath \"\"" Mar 20 08:57:23.712537 master-0 kubenswrapper[27820]: I0320 08:57:23.712413 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gzkc5" event={"ID":"50af45f7-5df7-41a1-a8be-eec1d938b94b","Type":"ContainerDied","Data":"381f9800c0a9998298101c50cf682535f89d8b2aa6aa5e3895e12404883c7713"} Mar 20 08:57:23.712537 master-0 kubenswrapper[27820]: I0320 08:57:23.712471 27820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5gzkc5" Mar 20 08:57:23.713559 master-0 kubenswrapper[27820]: I0320 08:57:23.712476 27820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="381f9800c0a9998298101c50cf682535f89d8b2aa6aa5e3895e12404883c7713" Mar 20 08:57:23.716751 master-0 kubenswrapper[27820]: I0320 08:57:23.716691 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8745rb6g" event={"ID":"b213eb4c-943f-4879-8fe5-9b1e2e2d6a0f","Type":"ContainerDied","Data":"1e6eb5f053a375021d06ae44bdba2aeb5cabf8e55957489b91a551873fc701a5"} Mar 20 08:57:23.716859 master-0 kubenswrapper[27820]: I0320 08:57:23.716767 27820 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e6eb5f053a375021d06ae44bdba2aeb5cabf8e55957489b91a551873fc701a5" Mar 20 08:57:23.716859 master-0 kubenswrapper[27820]: I0320 08:57:23.716802 27820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde8745rb6g" Mar 20 08:57:26.855396 master-0 kubenswrapper[27820]: I0320 08:57:26.855328 27820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-wv4sj"] Mar 20 08:57:26.856169 master-0 kubenswrapper[27820]: E0320 08:57:26.855603 27820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06e0c10c-0597-4853-81b3-e99c4b1866fd" containerName="extract" Mar 20 08:57:26.856169 master-0 kubenswrapper[27820]: I0320 08:57:26.855615 27820 state_mem.go:107] "Deleted CPUSet assignment" podUID="06e0c10c-0597-4853-81b3-e99c4b1866fd" containerName="extract" Mar 20 08:57:26.856169 master-0 kubenswrapper[27820]: E0320 08:57:26.855633 27820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06e0c10c-0597-4853-81b3-e99c4b1866fd" containerName="util" Mar 20 08:57:26.856169 master-0 kubenswrapper[27820]: I0320 08:57:26.855639 27820 state_mem.go:107] "Deleted CPUSet assignment" podUID="06e0c10c-0597-4853-81b3-e99c4b1866fd" containerName="util" Mar 20 08:57:26.856169 master-0 kubenswrapper[27820]: E0320 08:57:26.855652 27820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b213eb4c-943f-4879-8fe5-9b1e2e2d6a0f" containerName="util" Mar 20 08:57:26.856169 master-0 kubenswrapper[27820]: I0320 08:57:26.855659 27820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b213eb4c-943f-4879-8fe5-9b1e2e2d6a0f" containerName="util" Mar 20 08:57:26.856169 master-0 kubenswrapper[27820]: E0320 08:57:26.855668 27820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b213eb4c-943f-4879-8fe5-9b1e2e2d6a0f" containerName="extract" Mar 20 08:57:26.856169 master-0 kubenswrapper[27820]: I0320 08:57:26.855675 27820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b213eb4c-943f-4879-8fe5-9b1e2e2d6a0f" containerName="extract" Mar 20 08:57:26.856169 master-0 kubenswrapper[27820]: E0320 
08:57:26.855686 27820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdbca713-b71e-4336-b696-2cff1689bd10" containerName="util" Mar 20 08:57:26.856169 master-0 kubenswrapper[27820]: I0320 08:57:26.855692 27820 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdbca713-b71e-4336-b696-2cff1689bd10" containerName="util" Mar 20 08:57:26.856169 master-0 kubenswrapper[27820]: E0320 08:57:26.855704 27820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50af45f7-5df7-41a1-a8be-eec1d938b94b" containerName="pull" Mar 20 08:57:26.856169 master-0 kubenswrapper[27820]: I0320 08:57:26.855710 27820 state_mem.go:107] "Deleted CPUSet assignment" podUID="50af45f7-5df7-41a1-a8be-eec1d938b94b" containerName="pull" Mar 20 08:57:26.856169 master-0 kubenswrapper[27820]: E0320 08:57:26.855717 27820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50af45f7-5df7-41a1-a8be-eec1d938b94b" containerName="util" Mar 20 08:57:26.856169 master-0 kubenswrapper[27820]: I0320 08:57:26.855723 27820 state_mem.go:107] "Deleted CPUSet assignment" podUID="50af45f7-5df7-41a1-a8be-eec1d938b94b" containerName="util" Mar 20 08:57:26.856169 master-0 kubenswrapper[27820]: E0320 08:57:26.855741 27820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50af45f7-5df7-41a1-a8be-eec1d938b94b" containerName="extract" Mar 20 08:57:26.856169 master-0 kubenswrapper[27820]: I0320 08:57:26.855747 27820 state_mem.go:107] "Deleted CPUSet assignment" podUID="50af45f7-5df7-41a1-a8be-eec1d938b94b" containerName="extract" Mar 20 08:57:26.856169 master-0 kubenswrapper[27820]: E0320 08:57:26.855755 27820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06e0c10c-0597-4853-81b3-e99c4b1866fd" containerName="pull" Mar 20 08:57:26.856169 master-0 kubenswrapper[27820]: I0320 08:57:26.855762 27820 state_mem.go:107] "Deleted CPUSet assignment" podUID="06e0c10c-0597-4853-81b3-e99c4b1866fd" containerName="pull" Mar 20 08:57:26.856169 master-0 
kubenswrapper[27820]: E0320 08:57:26.855775 27820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdbca713-b71e-4336-b696-2cff1689bd10" containerName="pull" Mar 20 08:57:26.856169 master-0 kubenswrapper[27820]: I0320 08:57:26.855783 27820 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdbca713-b71e-4336-b696-2cff1689bd10" containerName="pull" Mar 20 08:57:26.856169 master-0 kubenswrapper[27820]: E0320 08:57:26.855796 27820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b213eb4c-943f-4879-8fe5-9b1e2e2d6a0f" containerName="pull" Mar 20 08:57:26.856169 master-0 kubenswrapper[27820]: I0320 08:57:26.855804 27820 state_mem.go:107] "Deleted CPUSet assignment" podUID="b213eb4c-943f-4879-8fe5-9b1e2e2d6a0f" containerName="pull" Mar 20 08:57:26.856169 master-0 kubenswrapper[27820]: E0320 08:57:26.855824 27820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdbca713-b71e-4336-b696-2cff1689bd10" containerName="extract" Mar 20 08:57:26.856169 master-0 kubenswrapper[27820]: I0320 08:57:26.855831 27820 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdbca713-b71e-4336-b696-2cff1689bd10" containerName="extract" Mar 20 08:57:26.856169 master-0 kubenswrapper[27820]: I0320 08:57:26.855994 27820 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdbca713-b71e-4336-b696-2cff1689bd10" containerName="extract" Mar 20 08:57:26.856169 master-0 kubenswrapper[27820]: I0320 08:57:26.856029 27820 memory_manager.go:354] "RemoveStaleState removing state" podUID="50af45f7-5df7-41a1-a8be-eec1d938b94b" containerName="extract" Mar 20 08:57:26.856169 master-0 kubenswrapper[27820]: I0320 08:57:26.856040 27820 memory_manager.go:354] "RemoveStaleState removing state" podUID="06e0c10c-0597-4853-81b3-e99c4b1866fd" containerName="extract" Mar 20 08:57:26.856169 master-0 kubenswrapper[27820]: I0320 08:57:26.856049 27820 memory_manager.go:354] "RemoveStaleState removing state" podUID="b213eb4c-943f-4879-8fe5-9b1e2e2d6a0f" 
containerName="extract" Mar 20 08:57:26.857470 master-0 kubenswrapper[27820]: I0320 08:57:26.857433 27820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-796d4cfff4-wv4sj" Mar 20 08:57:26.862837 master-0 kubenswrapper[27820]: I0320 08:57:26.860928 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Mar 20 08:57:26.862837 master-0 kubenswrapper[27820]: I0320 08:57:26.861579 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Mar 20 08:57:26.876341 master-0 kubenswrapper[27820]: I0320 08:57:26.876287 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-wv4sj"] Mar 20 08:57:26.953287 master-0 kubenswrapper[27820]: I0320 08:57:26.953145 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tc52w\" (UniqueName: \"kubernetes.io/projected/1f68b1a3-e1e0-47e5-baa6-14c6b8e34e3f-kube-api-access-tc52w\") pod \"nmstate-operator-796d4cfff4-wv4sj\" (UID: \"1f68b1a3-e1e0-47e5-baa6-14c6b8e34e3f\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-wv4sj" Mar 20 08:57:27.054949 master-0 kubenswrapper[27820]: I0320 08:57:27.054888 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tc52w\" (UniqueName: \"kubernetes.io/projected/1f68b1a3-e1e0-47e5-baa6-14c6b8e34e3f-kube-api-access-tc52w\") pod \"nmstate-operator-796d4cfff4-wv4sj\" (UID: \"1f68b1a3-e1e0-47e5-baa6-14c6b8e34e3f\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-wv4sj" Mar 20 08:57:27.073600 master-0 kubenswrapper[27820]: I0320 08:57:27.073542 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tc52w\" (UniqueName: \"kubernetes.io/projected/1f68b1a3-e1e0-47e5-baa6-14c6b8e34e3f-kube-api-access-tc52w\") pod 
\"nmstate-operator-796d4cfff4-wv4sj\" (UID: \"1f68b1a3-e1e0-47e5-baa6-14c6b8e34e3f\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-wv4sj" Mar 20 08:57:27.179609 master-0 kubenswrapper[27820]: I0320 08:57:27.179474 27820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-796d4cfff4-wv4sj" Mar 20 08:57:27.576946 master-0 kubenswrapper[27820]: I0320 08:57:27.576416 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-wv4sj"] Mar 20 08:57:27.581881 master-0 kubenswrapper[27820]: W0320 08:57:27.581808 27820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f68b1a3_e1e0_47e5_baa6_14c6b8e34e3f.slice/crio-34f26c435662837e19b81fc86b3ef9cb20146383b5fa30e02812f71c03a53297 WatchSource:0}: Error finding container 34f26c435662837e19b81fc86b3ef9cb20146383b5fa30e02812f71c03a53297: Status 404 returned error can't find the container with id 34f26c435662837e19b81fc86b3ef9cb20146383b5fa30e02812f71c03a53297 Mar 20 08:57:27.766493 master-0 kubenswrapper[27820]: I0320 08:57:27.766414 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-796d4cfff4-wv4sj" event={"ID":"1f68b1a3-e1e0-47e5-baa6-14c6b8e34e3f","Type":"ContainerStarted","Data":"34f26c435662837e19b81fc86b3ef9cb20146383b5fa30e02812f71c03a53297"} Mar 20 08:57:31.833140 master-0 kubenswrapper[27820]: I0320 08:57:31.833082 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-796d4cfff4-wv4sj" event={"ID":"1f68b1a3-e1e0-47e5-baa6-14c6b8e34e3f","Type":"ContainerStarted","Data":"4d478fbd40d6fb332b5ae0503754322064f245eb835935fd7b58bf5c7b6c2347"} Mar 20 08:57:31.856459 master-0 kubenswrapper[27820]: I0320 08:57:31.856382 27820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-796d4cfff4-wv4sj" 
podStartSLOduration=2.702403217 podStartE2EDuration="5.856366452s" podCreationTimestamp="2026-03-20 08:57:26 +0000 UTC" firstStartedPulling="2026-03-20 08:57:27.584443332 +0000 UTC m=+457.679652476" lastFinishedPulling="2026-03-20 08:57:30.738406567 +0000 UTC m=+460.833615711" observedRunningTime="2026-03-20 08:57:31.854003248 +0000 UTC m=+461.949212402" watchObservedRunningTime="2026-03-20 08:57:31.856366452 +0000 UTC m=+461.951575596" Mar 20 08:57:36.296571 master-0 kubenswrapper[27820]: I0320 08:57:36.296380 27820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-4lq5f"] Mar 20 08:57:36.302858 master-0 kubenswrapper[27820]: I0320 08:57:36.302824 27820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-4lq5f" Mar 20 08:57:36.307373 master-0 kubenswrapper[27820]: I0320 08:57:36.307345 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Mar 20 08:57:36.307662 master-0 kubenswrapper[27820]: I0320 08:57:36.307647 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Mar 20 08:57:36.310947 master-0 kubenswrapper[27820]: I0320 08:57:36.310892 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-4lq5f"] Mar 20 08:57:36.420423 master-0 kubenswrapper[27820]: I0320 08:57:36.420335 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdb6w\" (UniqueName: \"kubernetes.io/projected/936b84c7-1556-469b-aaf6-2fbd3d85fe8a-kube-api-access-sdb6w\") pod \"cert-manager-operator-controller-manager-66c8bdd694-4lq5f\" (UID: \"936b84c7-1556-469b-aaf6-2fbd3d85fe8a\") " 
pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-4lq5f" Mar 20 08:57:36.420643 master-0 kubenswrapper[27820]: I0320 08:57:36.420527 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/936b84c7-1556-469b-aaf6-2fbd3d85fe8a-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-4lq5f\" (UID: \"936b84c7-1556-469b-aaf6-2fbd3d85fe8a\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-4lq5f" Mar 20 08:57:36.521707 master-0 kubenswrapper[27820]: I0320 08:57:36.521651 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdb6w\" (UniqueName: \"kubernetes.io/projected/936b84c7-1556-469b-aaf6-2fbd3d85fe8a-kube-api-access-sdb6w\") pod \"cert-manager-operator-controller-manager-66c8bdd694-4lq5f\" (UID: \"936b84c7-1556-469b-aaf6-2fbd3d85fe8a\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-4lq5f" Mar 20 08:57:36.522089 master-0 kubenswrapper[27820]: I0320 08:57:36.522066 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/936b84c7-1556-469b-aaf6-2fbd3d85fe8a-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-4lq5f\" (UID: \"936b84c7-1556-469b-aaf6-2fbd3d85fe8a\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-4lq5f" Mar 20 08:57:36.522858 master-0 kubenswrapper[27820]: I0320 08:57:36.522797 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/936b84c7-1556-469b-aaf6-2fbd3d85fe8a-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-4lq5f\" (UID: \"936b84c7-1556-469b-aaf6-2fbd3d85fe8a\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-4lq5f" Mar 20 08:57:36.540319 master-0 kubenswrapper[27820]: I0320 
08:57:36.540238 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdb6w\" (UniqueName: \"kubernetes.io/projected/936b84c7-1556-469b-aaf6-2fbd3d85fe8a-kube-api-access-sdb6w\") pod \"cert-manager-operator-controller-manager-66c8bdd694-4lq5f\" (UID: \"936b84c7-1556-469b-aaf6-2fbd3d85fe8a\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-4lq5f" Mar 20 08:57:36.637685 master-0 kubenswrapper[27820]: I0320 08:57:36.637639 27820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-4lq5f" Mar 20 08:57:37.079684 master-0 kubenswrapper[27820]: I0320 08:57:37.079640 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-4lq5f"] Mar 20 08:57:37.900294 master-0 kubenswrapper[27820]: I0320 08:57:37.899124 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-4lq5f" event={"ID":"936b84c7-1556-469b-aaf6-2fbd3d85fe8a","Type":"ContainerStarted","Data":"96343bf75b0bb957b1b4f8580fd9e9182ba47fbcd3c237dd77180f78b94d5aae"} Mar 20 08:57:41.942480 master-0 kubenswrapper[27820]: I0320 08:57:41.942415 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-4lq5f" event={"ID":"936b84c7-1556-469b-aaf6-2fbd3d85fe8a","Type":"ContainerStarted","Data":"8b83dc3c66fa14c890c11ce0d5c1698b1db2e39a5179dcc9ccaa9a7d955ca653"} Mar 20 08:57:42.019274 master-0 kubenswrapper[27820]: I0320 08:57:42.019192 27820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-4lq5f" podStartSLOduration=1.559152512 podStartE2EDuration="6.019172515s" podCreationTimestamp="2026-03-20 08:57:36 +0000 UTC" 
firstStartedPulling="2026-03-20 08:57:37.075494178 +0000 UTC m=+467.170703322" lastFinishedPulling="2026-03-20 08:57:41.535514181 +0000 UTC m=+471.630723325" observedRunningTime="2026-03-20 08:57:42.004107621 +0000 UTC m=+472.099316775" watchObservedRunningTime="2026-03-20 08:57:42.019172515 +0000 UTC m=+472.114381659" Mar 20 08:57:44.547144 master-0 kubenswrapper[27820]: I0320 08:57:44.547082 27820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-8ff7d675-wptls"] Mar 20 08:57:44.548000 master-0 kubenswrapper[27820]: I0320 08:57:44.547973 27820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-8ff7d675-wptls" Mar 20 08:57:44.549943 master-0 kubenswrapper[27820]: I0320 08:57:44.549904 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Mar 20 08:57:44.550923 master-0 kubenswrapper[27820]: I0320 08:57:44.550896 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Mar 20 08:57:44.571155 master-0 kubenswrapper[27820]: I0320 08:57:44.571081 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-8ff7d675-wptls"] Mar 20 08:57:44.675285 master-0 kubenswrapper[27820]: I0320 08:57:44.674997 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrttk\" (UniqueName: \"kubernetes.io/projected/dd70ba1c-6a56-40ba-bdbc-25d0479b56c8-kube-api-access-rrttk\") pod \"obo-prometheus-operator-8ff7d675-wptls\" (UID: \"dd70ba1c-6a56-40ba-bdbc-25d0479b56c8\") " pod="openshift-operators/obo-prometheus-operator-8ff7d675-wptls" Mar 20 08:57:44.776287 master-0 kubenswrapper[27820]: I0320 08:57:44.776199 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrttk\" (UniqueName: 
\"kubernetes.io/projected/dd70ba1c-6a56-40ba-bdbc-25d0479b56c8-kube-api-access-rrttk\") pod \"obo-prometheus-operator-8ff7d675-wptls\" (UID: \"dd70ba1c-6a56-40ba-bdbc-25d0479b56c8\") " pod="openshift-operators/obo-prometheus-operator-8ff7d675-wptls" Mar 20 08:57:44.799052 master-0 kubenswrapper[27820]: I0320 08:57:44.798961 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrttk\" (UniqueName: \"kubernetes.io/projected/dd70ba1c-6a56-40ba-bdbc-25d0479b56c8-kube-api-access-rrttk\") pod \"obo-prometheus-operator-8ff7d675-wptls\" (UID: \"dd70ba1c-6a56-40ba-bdbc-25d0479b56c8\") " pod="openshift-operators/obo-prometheus-operator-8ff7d675-wptls" Mar 20 08:57:44.864224 master-0 kubenswrapper[27820]: I0320 08:57:44.864167 27820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-8ff7d675-wptls" Mar 20 08:57:45.072460 master-0 kubenswrapper[27820]: I0320 08:57:45.072336 27820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-bbbbf679-lxm5t"] Mar 20 08:57:45.075855 master-0 kubenswrapper[27820]: I0320 08:57:45.075802 27820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bbbbf679-lxm5t" Mar 20 08:57:45.077942 master-0 kubenswrapper[27820]: I0320 08:57:45.077900 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Mar 20 08:57:45.111719 master-0 kubenswrapper[27820]: I0320 08:57:45.102627 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7ff38664-87a9-4803-aae6-6c3f31a68cb4-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-bbbbf679-lxm5t\" (UID: \"7ff38664-87a9-4803-aae6-6c3f31a68cb4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-bbbbf679-lxm5t" Mar 20 08:57:45.111719 master-0 kubenswrapper[27820]: I0320 08:57:45.102684 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7ff38664-87a9-4803-aae6-6c3f31a68cb4-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-bbbbf679-lxm5t\" (UID: \"7ff38664-87a9-4803-aae6-6c3f31a68cb4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-bbbbf679-lxm5t" Mar 20 08:57:45.127552 master-0 kubenswrapper[27820]: I0320 08:57:45.123598 27820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-bbbbf679-q9r4d"] Mar 20 08:57:45.127552 master-0 kubenswrapper[27820]: I0320 08:57:45.124888 27820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bbbbf679-q9r4d" Mar 20 08:57:45.153906 master-0 kubenswrapper[27820]: I0320 08:57:45.151654 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-bbbbf679-lxm5t"] Mar 20 08:57:45.180483 master-0 kubenswrapper[27820]: I0320 08:57:45.171645 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-bbbbf679-q9r4d"] Mar 20 08:57:45.220285 master-0 kubenswrapper[27820]: I0320 08:57:45.203794 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/744c7bbe-2db8-4667-8e23-aaf4bee66a24-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-bbbbf679-q9r4d\" (UID: \"744c7bbe-2db8-4667-8e23-aaf4bee66a24\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-bbbbf679-q9r4d" Mar 20 08:57:45.220285 master-0 kubenswrapper[27820]: I0320 08:57:45.203844 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/744c7bbe-2db8-4667-8e23-aaf4bee66a24-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-bbbbf679-q9r4d\" (UID: \"744c7bbe-2db8-4667-8e23-aaf4bee66a24\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-bbbbf679-q9r4d" Mar 20 08:57:45.220285 master-0 kubenswrapper[27820]: I0320 08:57:45.203883 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7ff38664-87a9-4803-aae6-6c3f31a68cb4-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-bbbbf679-lxm5t\" (UID: \"7ff38664-87a9-4803-aae6-6c3f31a68cb4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-bbbbf679-lxm5t" Mar 20 08:57:45.220285 
master-0 kubenswrapper[27820]: I0320 08:57:45.203902 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7ff38664-87a9-4803-aae6-6c3f31a68cb4-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-bbbbf679-lxm5t\" (UID: \"7ff38664-87a9-4803-aae6-6c3f31a68cb4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-bbbbf679-lxm5t" Mar 20 08:57:45.220285 master-0 kubenswrapper[27820]: I0320 08:57:45.206570 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7ff38664-87a9-4803-aae6-6c3f31a68cb4-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-bbbbf679-lxm5t\" (UID: \"7ff38664-87a9-4803-aae6-6c3f31a68cb4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-bbbbf679-lxm5t" Mar 20 08:57:45.220285 master-0 kubenswrapper[27820]: I0320 08:57:45.208554 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7ff38664-87a9-4803-aae6-6c3f31a68cb4-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-bbbbf679-lxm5t\" (UID: \"7ff38664-87a9-4803-aae6-6c3f31a68cb4\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-bbbbf679-lxm5t" Mar 20 08:57:45.305075 master-0 kubenswrapper[27820]: I0320 08:57:45.305018 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/744c7bbe-2db8-4667-8e23-aaf4bee66a24-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-bbbbf679-q9r4d\" (UID: \"744c7bbe-2db8-4667-8e23-aaf4bee66a24\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-bbbbf679-q9r4d" Mar 20 08:57:45.305389 master-0 kubenswrapper[27820]: I0320 08:57:45.305096 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"webhook-cert\" (UniqueName: \"kubernetes.io/secret/744c7bbe-2db8-4667-8e23-aaf4bee66a24-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-bbbbf679-q9r4d\" (UID: \"744c7bbe-2db8-4667-8e23-aaf4bee66a24\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-bbbbf679-q9r4d" Mar 20 08:57:45.311050 master-0 kubenswrapper[27820]: I0320 08:57:45.310983 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/744c7bbe-2db8-4667-8e23-aaf4bee66a24-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-bbbbf679-q9r4d\" (UID: \"744c7bbe-2db8-4667-8e23-aaf4bee66a24\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-bbbbf679-q9r4d" Mar 20 08:57:45.315291 master-0 kubenswrapper[27820]: I0320 08:57:45.311698 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/744c7bbe-2db8-4667-8e23-aaf4bee66a24-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-bbbbf679-q9r4d\" (UID: \"744c7bbe-2db8-4667-8e23-aaf4bee66a24\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-bbbbf679-q9r4d" Mar 20 08:57:45.414024 master-0 kubenswrapper[27820]: I0320 08:57:45.413971 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-8ff7d675-wptls"] Mar 20 08:57:45.434076 master-0 kubenswrapper[27820]: I0320 08:57:45.433994 27820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bbbbf679-lxm5t" Mar 20 08:57:45.506204 master-0 kubenswrapper[27820]: I0320 08:57:45.503509 27820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bbbbf679-q9r4d" Mar 20 08:57:45.594924 master-0 kubenswrapper[27820]: I0320 08:57:45.594365 27820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-6dd7dd855f-q9drt"] Mar 20 08:57:45.595533 master-0 kubenswrapper[27820]: I0320 08:57:45.595328 27820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-6dd7dd855f-q9drt" Mar 20 08:57:45.615002 master-0 kubenswrapper[27820]: I0320 08:57:45.609098 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Mar 20 08:57:45.615002 master-0 kubenswrapper[27820]: I0320 08:57:45.612231 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/956c697c-5335-4400-890b-bb8d2a9756d5-observability-operator-tls\") pod \"observability-operator-6dd7dd855f-q9drt\" (UID: \"956c697c-5335-4400-890b-bb8d2a9756d5\") " pod="openshift-operators/observability-operator-6dd7dd855f-q9drt" Mar 20 08:57:45.627017 master-0 kubenswrapper[27820]: I0320 08:57:45.626747 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-6dd7dd855f-q9drt"] Mar 20 08:57:45.642579 master-0 kubenswrapper[27820]: I0320 08:57:45.630938 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zp8rn\" (UniqueName: \"kubernetes.io/projected/956c697c-5335-4400-890b-bb8d2a9756d5-kube-api-access-zp8rn\") pod \"observability-operator-6dd7dd855f-q9drt\" (UID: \"956c697c-5335-4400-890b-bb8d2a9756d5\") " pod="openshift-operators/observability-operator-6dd7dd855f-q9drt" Mar 20 08:57:45.642579 master-0 kubenswrapper[27820]: I0320 08:57:45.640205 27820 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["cert-manager/cert-manager-webhook-6888856db4-sb8xw"] Mar 20 08:57:45.642579 master-0 kubenswrapper[27820]: I0320 08:57:45.641736 27820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-sb8xw" Mar 20 08:57:45.647279 master-0 kubenswrapper[27820]: I0320 08:57:45.645622 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Mar 20 08:57:45.647279 master-0 kubenswrapper[27820]: I0320 08:57:45.646971 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Mar 20 08:57:45.647531 master-0 kubenswrapper[27820]: I0320 08:57:45.647473 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-sb8xw"] Mar 20 08:57:45.733310 master-0 kubenswrapper[27820]: I0320 08:57:45.732504 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zp8rn\" (UniqueName: \"kubernetes.io/projected/956c697c-5335-4400-890b-bb8d2a9756d5-kube-api-access-zp8rn\") pod \"observability-operator-6dd7dd855f-q9drt\" (UID: \"956c697c-5335-4400-890b-bb8d2a9756d5\") " pod="openshift-operators/observability-operator-6dd7dd855f-q9drt" Mar 20 08:57:45.733310 master-0 kubenswrapper[27820]: I0320 08:57:45.732641 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pbgr\" (UniqueName: \"kubernetes.io/projected/4a0dad68-0868-49b6-a825-466de3548a78-kube-api-access-2pbgr\") pod \"cert-manager-webhook-6888856db4-sb8xw\" (UID: \"4a0dad68-0868-49b6-a825-466de3548a78\") " pod="cert-manager/cert-manager-webhook-6888856db4-sb8xw" Mar 20 08:57:45.733310 master-0 kubenswrapper[27820]: I0320 08:57:45.732690 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/956c697c-5335-4400-890b-bb8d2a9756d5-observability-operator-tls\") pod \"observability-operator-6dd7dd855f-q9drt\" (UID: \"956c697c-5335-4400-890b-bb8d2a9756d5\") " pod="openshift-operators/observability-operator-6dd7dd855f-q9drt" Mar 20 08:57:45.733310 master-0 kubenswrapper[27820]: I0320 08:57:45.732731 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4a0dad68-0868-49b6-a825-466de3548a78-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-sb8xw\" (UID: \"4a0dad68-0868-49b6-a825-466de3548a78\") " pod="cert-manager/cert-manager-webhook-6888856db4-sb8xw" Mar 20 08:57:45.741290 master-0 kubenswrapper[27820]: I0320 08:57:45.740245 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/956c697c-5335-4400-890b-bb8d2a9756d5-observability-operator-tls\") pod \"observability-operator-6dd7dd855f-q9drt\" (UID: \"956c697c-5335-4400-890b-bb8d2a9756d5\") " pod="openshift-operators/observability-operator-6dd7dd855f-q9drt" Mar 20 08:57:45.760955 master-0 kubenswrapper[27820]: I0320 08:57:45.760877 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zp8rn\" (UniqueName: \"kubernetes.io/projected/956c697c-5335-4400-890b-bb8d2a9756d5-kube-api-access-zp8rn\") pod \"observability-operator-6dd7dd855f-q9drt\" (UID: \"956c697c-5335-4400-890b-bb8d2a9756d5\") " pod="openshift-operators/observability-operator-6dd7dd855f-q9drt" Mar 20 08:57:45.835721 master-0 kubenswrapper[27820]: I0320 08:57:45.835679 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4a0dad68-0868-49b6-a825-466de3548a78-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-sb8xw\" (UID: \"4a0dad68-0868-49b6-a825-466de3548a78\") " 
pod="cert-manager/cert-manager-webhook-6888856db4-sb8xw" Mar 20 08:57:45.836132 master-0 kubenswrapper[27820]: I0320 08:57:45.836115 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pbgr\" (UniqueName: \"kubernetes.io/projected/4a0dad68-0868-49b6-a825-466de3548a78-kube-api-access-2pbgr\") pod \"cert-manager-webhook-6888856db4-sb8xw\" (UID: \"4a0dad68-0868-49b6-a825-466de3548a78\") " pod="cert-manager/cert-manager-webhook-6888856db4-sb8xw" Mar 20 08:57:45.905444 master-0 kubenswrapper[27820]: I0320 08:57:45.883880 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4a0dad68-0868-49b6-a825-466de3548a78-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-sb8xw\" (UID: \"4a0dad68-0868-49b6-a825-466de3548a78\") " pod="cert-manager/cert-manager-webhook-6888856db4-sb8xw" Mar 20 08:57:45.905444 master-0 kubenswrapper[27820]: I0320 08:57:45.893092 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pbgr\" (UniqueName: \"kubernetes.io/projected/4a0dad68-0868-49b6-a825-466de3548a78-kube-api-access-2pbgr\") pod \"cert-manager-webhook-6888856db4-sb8xw\" (UID: \"4a0dad68-0868-49b6-a825-466de3548a78\") " pod="cert-manager/cert-manager-webhook-6888856db4-sb8xw" Mar 20 08:57:45.947655 master-0 kubenswrapper[27820]: I0320 08:57:45.945530 27820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-6dd7dd855f-q9drt" Mar 20 08:57:45.998477 master-0 kubenswrapper[27820]: I0320 08:57:45.997728 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-8ff7d675-wptls" event={"ID":"dd70ba1c-6a56-40ba-bdbc-25d0479b56c8","Type":"ContainerStarted","Data":"1593a5ec7692629f90d0616e4b74e78c668efca4290952c12aa72b20c5a8126e"} Mar 20 08:57:46.019187 master-0 kubenswrapper[27820]: I0320 08:57:46.018286 27820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-sb8xw" Mar 20 08:57:46.064830 master-0 kubenswrapper[27820]: I0320 08:57:46.064790 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-bbbbf679-lxm5t"] Mar 20 08:57:46.071992 master-0 kubenswrapper[27820]: I0320 08:57:46.071894 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-bbbbf679-q9r4d"] Mar 20 08:57:46.368289 master-0 kubenswrapper[27820]: I0320 08:57:46.366084 27820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-9d56b9f9d-lplkg"] Mar 20 08:57:46.382067 master-0 kubenswrapper[27820]: I0320 08:57:46.381870 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-9d56b9f9d-lplkg"] Mar 20 08:57:46.382067 master-0 kubenswrapper[27820]: I0320 08:57:46.382007 27820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-9d56b9f9d-lplkg" Mar 20 08:57:46.384684 master-0 kubenswrapper[27820]: I0320 08:57:46.384651 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-service-cert" Mar 20 08:57:46.543130 master-0 kubenswrapper[27820]: I0320 08:57:46.542948 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-6dd7dd855f-q9drt"] Mar 20 08:57:46.572563 master-0 kubenswrapper[27820]: I0320 08:57:46.572518 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5787e9b7-491a-4825-a336-949d4dca2dca-webhook-cert\") pod \"perses-operator-9d56b9f9d-lplkg\" (UID: \"5787e9b7-491a-4825-a336-949d4dca2dca\") " pod="openshift-operators/perses-operator-9d56b9f9d-lplkg" Mar 20 08:57:46.572803 master-0 kubenswrapper[27820]: I0320 08:57:46.572786 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckgzx\" (UniqueName: \"kubernetes.io/projected/5787e9b7-491a-4825-a336-949d4dca2dca-kube-api-access-ckgzx\") pod \"perses-operator-9d56b9f9d-lplkg\" (UID: \"5787e9b7-491a-4825-a336-949d4dca2dca\") " pod="openshift-operators/perses-operator-9d56b9f9d-lplkg" Mar 20 08:57:46.572926 master-0 kubenswrapper[27820]: I0320 08:57:46.572912 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/5787e9b7-491a-4825-a336-949d4dca2dca-openshift-service-ca\") pod \"perses-operator-9d56b9f9d-lplkg\" (UID: \"5787e9b7-491a-4825-a336-949d4dca2dca\") " pod="openshift-operators/perses-operator-9d56b9f9d-lplkg" Mar 20 08:57:46.573035 master-0 kubenswrapper[27820]: I0320 08:57:46.573022 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5787e9b7-491a-4825-a336-949d4dca2dca-apiservice-cert\") pod \"perses-operator-9d56b9f9d-lplkg\" (UID: \"5787e9b7-491a-4825-a336-949d4dca2dca\") " pod="openshift-operators/perses-operator-9d56b9f9d-lplkg" Mar 20 08:57:46.647212 master-0 kubenswrapper[27820]: I0320 08:57:46.646937 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-sb8xw"] Mar 20 08:57:46.676376 master-0 kubenswrapper[27820]: I0320 08:57:46.673800 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/5787e9b7-491a-4825-a336-949d4dca2dca-openshift-service-ca\") pod \"perses-operator-9d56b9f9d-lplkg\" (UID: \"5787e9b7-491a-4825-a336-949d4dca2dca\") " pod="openshift-operators/perses-operator-9d56b9f9d-lplkg" Mar 20 08:57:46.676376 master-0 kubenswrapper[27820]: I0320 08:57:46.673890 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5787e9b7-491a-4825-a336-949d4dca2dca-apiservice-cert\") pod \"perses-operator-9d56b9f9d-lplkg\" (UID: \"5787e9b7-491a-4825-a336-949d4dca2dca\") " pod="openshift-operators/perses-operator-9d56b9f9d-lplkg" Mar 20 08:57:46.676376 master-0 kubenswrapper[27820]: I0320 08:57:46.673952 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5787e9b7-491a-4825-a336-949d4dca2dca-webhook-cert\") pod \"perses-operator-9d56b9f9d-lplkg\" (UID: \"5787e9b7-491a-4825-a336-949d4dca2dca\") " pod="openshift-operators/perses-operator-9d56b9f9d-lplkg" Mar 20 08:57:46.676376 master-0 kubenswrapper[27820]: I0320 08:57:46.673975 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckgzx\" (UniqueName: \"kubernetes.io/projected/5787e9b7-491a-4825-a336-949d4dca2dca-kube-api-access-ckgzx\") pod 
\"perses-operator-9d56b9f9d-lplkg\" (UID: \"5787e9b7-491a-4825-a336-949d4dca2dca\") " pod="openshift-operators/perses-operator-9d56b9f9d-lplkg" Mar 20 08:57:46.676376 master-0 kubenswrapper[27820]: I0320 08:57:46.675252 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/5787e9b7-491a-4825-a336-949d4dca2dca-openshift-service-ca\") pod \"perses-operator-9d56b9f9d-lplkg\" (UID: \"5787e9b7-491a-4825-a336-949d4dca2dca\") " pod="openshift-operators/perses-operator-9d56b9f9d-lplkg" Mar 20 08:57:46.678452 master-0 kubenswrapper[27820]: I0320 08:57:46.678409 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5787e9b7-491a-4825-a336-949d4dca2dca-apiservice-cert\") pod \"perses-operator-9d56b9f9d-lplkg\" (UID: \"5787e9b7-491a-4825-a336-949d4dca2dca\") " pod="openshift-operators/perses-operator-9d56b9f9d-lplkg" Mar 20 08:57:46.679362 master-0 kubenswrapper[27820]: I0320 08:57:46.679220 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5787e9b7-491a-4825-a336-949d4dca2dca-webhook-cert\") pod \"perses-operator-9d56b9f9d-lplkg\" (UID: \"5787e9b7-491a-4825-a336-949d4dca2dca\") " pod="openshift-operators/perses-operator-9d56b9f9d-lplkg" Mar 20 08:57:46.688327 master-0 kubenswrapper[27820]: I0320 08:57:46.688288 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckgzx\" (UniqueName: \"kubernetes.io/projected/5787e9b7-491a-4825-a336-949d4dca2dca-kube-api-access-ckgzx\") pod \"perses-operator-9d56b9f9d-lplkg\" (UID: \"5787e9b7-491a-4825-a336-949d4dca2dca\") " pod="openshift-operators/perses-operator-9d56b9f9d-lplkg" Mar 20 08:57:46.710865 master-0 kubenswrapper[27820]: I0320 08:57:46.708772 27820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-9d56b9f9d-lplkg" Mar 20 08:57:47.007089 master-0 kubenswrapper[27820]: I0320 08:57:47.007011 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bbbbf679-q9r4d" event={"ID":"744c7bbe-2db8-4667-8e23-aaf4bee66a24","Type":"ContainerStarted","Data":"330537f13a7c4018f86fcd4309a5b141ededb64ea738cade58f55c22db6e90ed"} Mar 20 08:57:47.008544 master-0 kubenswrapper[27820]: I0320 08:57:47.008238 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bbbbf679-lxm5t" event={"ID":"7ff38664-87a9-4803-aae6-6c3f31a68cb4","Type":"ContainerStarted","Data":"e1bab8995739871024827c67beab193b155148a16dc6c47699ce97bf8c50c904"} Mar 20 08:57:47.011564 master-0 kubenswrapper[27820]: I0320 08:57:47.011509 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-6dd7dd855f-q9drt" event={"ID":"956c697c-5335-4400-890b-bb8d2a9756d5","Type":"ContainerStarted","Data":"fd41888c283f76ef6726b96d868491fed04568b45bb7265b21912943ecceba77"} Mar 20 08:57:47.013177 master-0 kubenswrapper[27820]: I0320 08:57:47.013130 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-sb8xw" event={"ID":"4a0dad68-0868-49b6-a825-466de3548a78","Type":"ContainerStarted","Data":"f84b8982de081159a5d9ca8f04484ade830c6e342ff72f435d7c2827446ff56d"} Mar 20 08:57:47.168365 master-0 kubenswrapper[27820]: I0320 08:57:47.168032 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-9d56b9f9d-lplkg"] Mar 20 08:57:47.178384 master-0 kubenswrapper[27820]: W0320 08:57:47.178210 27820 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5787e9b7_491a_4825_a336_949d4dca2dca.slice/crio-fabaf09c106564fdf6ddcc7fa470ad54800840f03fbecc5cea7cbf4ba0bc84b7 WatchSource:0}: Error finding container fabaf09c106564fdf6ddcc7fa470ad54800840f03fbecc5cea7cbf4ba0bc84b7: Status 404 returned error can't find the container with id fabaf09c106564fdf6ddcc7fa470ad54800840f03fbecc5cea7cbf4ba0bc84b7 Mar 20 08:57:48.067286 master-0 kubenswrapper[27820]: I0320 08:57:48.062982 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-9d56b9f9d-lplkg" event={"ID":"5787e9b7-491a-4825-a336-949d4dca2dca","Type":"ContainerStarted","Data":"fabaf09c106564fdf6ddcc7fa470ad54800840f03fbecc5cea7cbf4ba0bc84b7"} Mar 20 08:57:48.688855 master-0 kubenswrapper[27820]: I0320 08:57:48.688763 27820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-hjj29"] Mar 20 08:57:48.689736 master-0 kubenswrapper[27820]: I0320 08:57:48.689695 27820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-hjj29" Mar 20 08:57:48.704913 master-0 kubenswrapper[27820]: I0320 08:57:48.704690 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-hjj29"] Mar 20 08:57:48.812903 master-0 kubenswrapper[27820]: I0320 08:57:48.812063 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/623dd9f9-be57-431d-a5ae-28be094e138f-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-hjj29\" (UID: \"623dd9f9-be57-431d-a5ae-28be094e138f\") " pod="cert-manager/cert-manager-cainjector-5545bd876-hjj29" Mar 20 08:57:48.812903 master-0 kubenswrapper[27820]: I0320 08:57:48.812111 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbncs\" (UniqueName: \"kubernetes.io/projected/623dd9f9-be57-431d-a5ae-28be094e138f-kube-api-access-wbncs\") pod \"cert-manager-cainjector-5545bd876-hjj29\" (UID: \"623dd9f9-be57-431d-a5ae-28be094e138f\") " pod="cert-manager/cert-manager-cainjector-5545bd876-hjj29" Mar 20 08:57:48.919242 master-0 kubenswrapper[27820]: I0320 08:57:48.913649 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/623dd9f9-be57-431d-a5ae-28be094e138f-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-hjj29\" (UID: \"623dd9f9-be57-431d-a5ae-28be094e138f\") " pod="cert-manager/cert-manager-cainjector-5545bd876-hjj29" Mar 20 08:57:48.919242 master-0 kubenswrapper[27820]: I0320 08:57:48.913705 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbncs\" (UniqueName: \"kubernetes.io/projected/623dd9f9-be57-431d-a5ae-28be094e138f-kube-api-access-wbncs\") pod \"cert-manager-cainjector-5545bd876-hjj29\" (UID: \"623dd9f9-be57-431d-a5ae-28be094e138f\") " 
pod="cert-manager/cert-manager-cainjector-5545bd876-hjj29" Mar 20 08:57:49.325023 master-0 kubenswrapper[27820]: I0320 08:57:49.324949 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbncs\" (UniqueName: \"kubernetes.io/projected/623dd9f9-be57-431d-a5ae-28be094e138f-kube-api-access-wbncs\") pod \"cert-manager-cainjector-5545bd876-hjj29\" (UID: \"623dd9f9-be57-431d-a5ae-28be094e138f\") " pod="cert-manager/cert-manager-cainjector-5545bd876-hjj29" Mar 20 08:57:49.357907 master-0 kubenswrapper[27820]: I0320 08:57:49.357833 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/623dd9f9-be57-431d-a5ae-28be094e138f-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-hjj29\" (UID: \"623dd9f9-be57-431d-a5ae-28be094e138f\") " pod="cert-manager/cert-manager-cainjector-5545bd876-hjj29" Mar 20 08:57:49.620715 master-0 kubenswrapper[27820]: I0320 08:57:49.620637 27820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-hjj29" Mar 20 08:57:52.758346 master-0 kubenswrapper[27820]: I0320 08:57:52.757101 27820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-6485fcfd64-dfmlb"] Mar 20 08:57:52.759245 master-0 kubenswrapper[27820]: I0320 08:57:52.758653 27820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6485fcfd64-dfmlb" Mar 20 08:57:52.762144 master-0 kubenswrapper[27820]: I0320 08:57:52.762089 27820 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Mar 20 08:57:52.762327 master-0 kubenswrapper[27820]: I0320 08:57:52.762298 27820 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Mar 20 08:57:52.762577 master-0 kubenswrapper[27820]: I0320 08:57:52.762544 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Mar 20 08:57:52.762943 master-0 kubenswrapper[27820]: I0320 08:57:52.762727 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Mar 20 08:57:52.779681 master-0 kubenswrapper[27820]: I0320 08:57:52.779626 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6485fcfd64-dfmlb"] Mar 20 08:57:52.828290 master-0 kubenswrapper[27820]: I0320 08:57:52.825081 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/df2f3936-c47f-46f1-acb5-0af23bf9bf1c-apiservice-cert\") pod \"metallb-operator-controller-manager-6485fcfd64-dfmlb\" (UID: \"df2f3936-c47f-46f1-acb5-0af23bf9bf1c\") " pod="metallb-system/metallb-operator-controller-manager-6485fcfd64-dfmlb" Mar 20 08:57:52.828290 master-0 kubenswrapper[27820]: I0320 08:57:52.825205 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ljsm\" (UniqueName: \"kubernetes.io/projected/df2f3936-c47f-46f1-acb5-0af23bf9bf1c-kube-api-access-6ljsm\") pod \"metallb-operator-controller-manager-6485fcfd64-dfmlb\" (UID: \"df2f3936-c47f-46f1-acb5-0af23bf9bf1c\") " 
pod="metallb-system/metallb-operator-controller-manager-6485fcfd64-dfmlb" Mar 20 08:57:52.828290 master-0 kubenswrapper[27820]: I0320 08:57:52.825235 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/df2f3936-c47f-46f1-acb5-0af23bf9bf1c-webhook-cert\") pod \"metallb-operator-controller-manager-6485fcfd64-dfmlb\" (UID: \"df2f3936-c47f-46f1-acb5-0af23bf9bf1c\") " pod="metallb-system/metallb-operator-controller-manager-6485fcfd64-dfmlb" Mar 20 08:57:52.930642 master-0 kubenswrapper[27820]: I0320 08:57:52.930580 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ljsm\" (UniqueName: \"kubernetes.io/projected/df2f3936-c47f-46f1-acb5-0af23bf9bf1c-kube-api-access-6ljsm\") pod \"metallb-operator-controller-manager-6485fcfd64-dfmlb\" (UID: \"df2f3936-c47f-46f1-acb5-0af23bf9bf1c\") " pod="metallb-system/metallb-operator-controller-manager-6485fcfd64-dfmlb" Mar 20 08:57:52.930642 master-0 kubenswrapper[27820]: I0320 08:57:52.930644 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/df2f3936-c47f-46f1-acb5-0af23bf9bf1c-webhook-cert\") pod \"metallb-operator-controller-manager-6485fcfd64-dfmlb\" (UID: \"df2f3936-c47f-46f1-acb5-0af23bf9bf1c\") " pod="metallb-system/metallb-operator-controller-manager-6485fcfd64-dfmlb" Mar 20 08:57:52.930928 master-0 kubenswrapper[27820]: I0320 08:57:52.930674 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/df2f3936-c47f-46f1-acb5-0af23bf9bf1c-apiservice-cert\") pod \"metallb-operator-controller-manager-6485fcfd64-dfmlb\" (UID: \"df2f3936-c47f-46f1-acb5-0af23bf9bf1c\") " pod="metallb-system/metallb-operator-controller-manager-6485fcfd64-dfmlb" Mar 20 08:57:52.938292 master-0 kubenswrapper[27820]: I0320 08:57:52.935935 27820 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/df2f3936-c47f-46f1-acb5-0af23bf9bf1c-webhook-cert\") pod \"metallb-operator-controller-manager-6485fcfd64-dfmlb\" (UID: \"df2f3936-c47f-46f1-acb5-0af23bf9bf1c\") " pod="metallb-system/metallb-operator-controller-manager-6485fcfd64-dfmlb"
Mar 20 08:57:52.938292 master-0 kubenswrapper[27820]: I0320 08:57:52.936031 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/df2f3936-c47f-46f1-acb5-0af23bf9bf1c-apiservice-cert\") pod \"metallb-operator-controller-manager-6485fcfd64-dfmlb\" (UID: \"df2f3936-c47f-46f1-acb5-0af23bf9bf1c\") " pod="metallb-system/metallb-operator-controller-manager-6485fcfd64-dfmlb"
Mar 20 08:57:52.971332 master-0 kubenswrapper[27820]: I0320 08:57:52.968039 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ljsm\" (UniqueName: \"kubernetes.io/projected/df2f3936-c47f-46f1-acb5-0af23bf9bf1c-kube-api-access-6ljsm\") pod \"metallb-operator-controller-manager-6485fcfd64-dfmlb\" (UID: \"df2f3936-c47f-46f1-acb5-0af23bf9bf1c\") " pod="metallb-system/metallb-operator-controller-manager-6485fcfd64-dfmlb"
Mar 20 08:57:53.128285 master-0 kubenswrapper[27820]: I0320 08:57:53.125559 27820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6485fcfd64-dfmlb"
Mar 20 08:57:53.323893 master-0 kubenswrapper[27820]: I0320 08:57:53.323840 27820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-6465fb44b7-42g97"]
Mar 20 08:57:53.324940 master-0 kubenswrapper[27820]: I0320 08:57:53.324913 27820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6465fb44b7-42g97"
Mar 20 08:57:53.329334 master-0 kubenswrapper[27820]: I0320 08:57:53.329015 27820 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Mar 20 08:57:53.329334 master-0 kubenswrapper[27820]: I0320 08:57:53.329018 27820 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Mar 20 08:57:53.344282 master-0 kubenswrapper[27820]: I0320 08:57:53.340899 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlrxc\" (UniqueName: \"kubernetes.io/projected/21e7b181-5c6b-433e-adf7-ebe2d7b45aa7-kube-api-access-vlrxc\") pod \"metallb-operator-webhook-server-6465fb44b7-42g97\" (UID: \"21e7b181-5c6b-433e-adf7-ebe2d7b45aa7\") " pod="metallb-system/metallb-operator-webhook-server-6465fb44b7-42g97"
Mar 20 08:57:53.344282 master-0 kubenswrapper[27820]: I0320 08:57:53.340955 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/21e7b181-5c6b-433e-adf7-ebe2d7b45aa7-webhook-cert\") pod \"metallb-operator-webhook-server-6465fb44b7-42g97\" (UID: \"21e7b181-5c6b-433e-adf7-ebe2d7b45aa7\") " pod="metallb-system/metallb-operator-webhook-server-6465fb44b7-42g97"
Mar 20 08:57:53.344282 master-0 kubenswrapper[27820]: I0320 08:57:53.340993 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/21e7b181-5c6b-433e-adf7-ebe2d7b45aa7-apiservice-cert\") pod \"metallb-operator-webhook-server-6465fb44b7-42g97\" (UID: \"21e7b181-5c6b-433e-adf7-ebe2d7b45aa7\") " pod="metallb-system/metallb-operator-webhook-server-6465fb44b7-42g97"
Mar 20 08:57:53.348283 master-0 kubenswrapper[27820]: I0320 08:57:53.344912 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6465fb44b7-42g97"]
Mar 20 08:57:53.443370 master-0 kubenswrapper[27820]: I0320 08:57:53.442172 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vlrxc\" (UniqueName: \"kubernetes.io/projected/21e7b181-5c6b-433e-adf7-ebe2d7b45aa7-kube-api-access-vlrxc\") pod \"metallb-operator-webhook-server-6465fb44b7-42g97\" (UID: \"21e7b181-5c6b-433e-adf7-ebe2d7b45aa7\") " pod="metallb-system/metallb-operator-webhook-server-6465fb44b7-42g97"
Mar 20 08:57:53.443370 master-0 kubenswrapper[27820]: I0320 08:57:53.442227 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/21e7b181-5c6b-433e-adf7-ebe2d7b45aa7-webhook-cert\") pod \"metallb-operator-webhook-server-6465fb44b7-42g97\" (UID: \"21e7b181-5c6b-433e-adf7-ebe2d7b45aa7\") " pod="metallb-system/metallb-operator-webhook-server-6465fb44b7-42g97"
Mar 20 08:57:53.443370 master-0 kubenswrapper[27820]: I0320 08:57:53.442277 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/21e7b181-5c6b-433e-adf7-ebe2d7b45aa7-apiservice-cert\") pod \"metallb-operator-webhook-server-6465fb44b7-42g97\" (UID: \"21e7b181-5c6b-433e-adf7-ebe2d7b45aa7\") " pod="metallb-system/metallb-operator-webhook-server-6465fb44b7-42g97"
Mar 20 08:57:53.448315 master-0 kubenswrapper[27820]: I0320 08:57:53.446209 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/21e7b181-5c6b-433e-adf7-ebe2d7b45aa7-webhook-cert\") pod \"metallb-operator-webhook-server-6465fb44b7-42g97\" (UID: \"21e7b181-5c6b-433e-adf7-ebe2d7b45aa7\") " pod="metallb-system/metallb-operator-webhook-server-6465fb44b7-42g97"
Mar 20 08:57:53.483252 master-0 kubenswrapper[27820]: I0320 08:57:53.482342 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlrxc\" (UniqueName: \"kubernetes.io/projected/21e7b181-5c6b-433e-adf7-ebe2d7b45aa7-kube-api-access-vlrxc\") pod \"metallb-operator-webhook-server-6465fb44b7-42g97\" (UID: \"21e7b181-5c6b-433e-adf7-ebe2d7b45aa7\") " pod="metallb-system/metallb-operator-webhook-server-6465fb44b7-42g97"
Mar 20 08:57:53.492759 master-0 kubenswrapper[27820]: I0320 08:57:53.492341 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/21e7b181-5c6b-433e-adf7-ebe2d7b45aa7-apiservice-cert\") pod \"metallb-operator-webhook-server-6465fb44b7-42g97\" (UID: \"21e7b181-5c6b-433e-adf7-ebe2d7b45aa7\") " pod="metallb-system/metallb-operator-webhook-server-6465fb44b7-42g97"
Mar 20 08:57:53.674455 master-0 kubenswrapper[27820]: I0320 08:57:53.672980 27820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6465fb44b7-42g97"
Mar 20 08:58:03.766302 master-0 kubenswrapper[27820]: I0320 08:58:03.760749 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6465fb44b7-42g97"]
Mar 20 08:58:03.862873 master-0 kubenswrapper[27820]: I0320 08:58:03.862759 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-hjj29"]
Mar 20 08:58:03.870605 master-0 kubenswrapper[27820]: W0320 08:58:03.870550 27820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf2f3936_c47f_46f1_acb5_0af23bf9bf1c.slice/crio-3a00726430febb59b8b066b886fa887da5d73315354a812cfafd6af5e112b106 WatchSource:0}: Error finding container 3a00726430febb59b8b066b886fa887da5d73315354a812cfafd6af5e112b106: Status 404 returned error can't find the container with id 3a00726430febb59b8b066b886fa887da5d73315354a812cfafd6af5e112b106
Mar 20 08:58:03.882921 master-0 kubenswrapper[27820]: I0320 08:58:03.881451 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6485fcfd64-dfmlb"]
Mar 20 08:58:04.313014 master-0 kubenswrapper[27820]: I0320 08:58:04.312874 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-9d56b9f9d-lplkg" event={"ID":"5787e9b7-491a-4825-a336-949d4dca2dca","Type":"ContainerStarted","Data":"3b1d46375aaccde734b9f571f4c574be2bdc6886ab2e6137490668c4cfe9e22d"}
Mar 20 08:58:04.313407 master-0 kubenswrapper[27820]: I0320 08:58:04.313238 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-9d56b9f9d-lplkg"
Mar 20 08:58:04.314289 master-0 kubenswrapper[27820]: I0320 08:58:04.314242 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6485fcfd64-dfmlb" event={"ID":"df2f3936-c47f-46f1-acb5-0af23bf9bf1c","Type":"ContainerStarted","Data":"3a00726430febb59b8b066b886fa887da5d73315354a812cfafd6af5e112b106"}
Mar 20 08:58:04.317230 master-0 kubenswrapper[27820]: I0320 08:58:04.316582 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-6dd7dd855f-q9drt" event={"ID":"956c697c-5335-4400-890b-bb8d2a9756d5","Type":"ContainerStarted","Data":"3b3deb3cdf44714255a4ca1762f06fca3e52e385103e2676d4a84833ad1d7b40"}
Mar 20 08:58:04.317230 master-0 kubenswrapper[27820]: I0320 08:58:04.316969 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-6dd7dd855f-q9drt"
Mar 20 08:58:04.318791 master-0 kubenswrapper[27820]: I0320 08:58:04.318747 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-sb8xw" event={"ID":"4a0dad68-0868-49b6-a825-466de3548a78","Type":"ContainerStarted","Data":"fa736ce84dbaea691d98009c9f4f084feb3048ae50920d9232fa6aff0731cc9e"}
Mar 20 08:58:04.318894 master-0 kubenswrapper[27820]: I0320 08:58:04.318824 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-6888856db4-sb8xw"
Mar 20 08:58:04.319085 master-0 kubenswrapper[27820]: I0320 08:58:04.319054 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-6dd7dd855f-q9drt"
Mar 20 08:58:04.320713 master-0 kubenswrapper[27820]: I0320 08:58:04.320682 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-hjj29" event={"ID":"623dd9f9-be57-431d-a5ae-28be094e138f","Type":"ContainerStarted","Data":"2064a0e62d9335d5953f4220451c9638aacbc5e2a8ee3c1319b0409cf1296ce4"}
Mar 20 08:58:04.320806 master-0 kubenswrapper[27820]: I0320 08:58:04.320712 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-hjj29" event={"ID":"623dd9f9-be57-431d-a5ae-28be094e138f","Type":"ContainerStarted","Data":"0c1f848212c16080fe81c1f4753eecd0c571d6c38ed4faf9730bd5d48307db99"}
Mar 20 08:58:04.322461 master-0 kubenswrapper[27820]: I0320 08:58:04.322422 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-8ff7d675-wptls" event={"ID":"dd70ba1c-6a56-40ba-bdbc-25d0479b56c8","Type":"ContainerStarted","Data":"c360fb1303ffea222763d1faceafe463dd54d6b17836fd2ac29eba215e57148a"}
Mar 20 08:58:04.324132 master-0 kubenswrapper[27820]: I0320 08:58:04.323941 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bbbbf679-lxm5t" event={"ID":"7ff38664-87a9-4803-aae6-6c3f31a68cb4","Type":"ContainerStarted","Data":"3acfaee91ab039b3e221698987e7d1a66f5033e15f38d066f46286953bd75d74"}
Mar 20 08:58:04.325419 master-0 kubenswrapper[27820]: I0320 08:58:04.325384 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6465fb44b7-42g97" event={"ID":"21e7b181-5c6b-433e-adf7-ebe2d7b45aa7","Type":"ContainerStarted","Data":"50842ec78aae623053904cc9c59deadf609cc8299530ccd5a3dc8622912e9a59"}
Mar 20 08:58:04.327018 master-0 kubenswrapper[27820]: I0320 08:58:04.326977 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bbbbf679-q9r4d" event={"ID":"744c7bbe-2db8-4667-8e23-aaf4bee66a24","Type":"ContainerStarted","Data":"2c11976f6ae0a94a2dc90d5c33b77ae795ea22d5cb35d8c7ce052658f7e5caa1"}
Mar 20 08:58:04.346745 master-0 kubenswrapper[27820]: I0320 08:58:04.346644 27820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-9d56b9f9d-lplkg" podStartSLOduration=2.342672711 podStartE2EDuration="18.346625264s" podCreationTimestamp="2026-03-20 08:57:46 +0000 UTC" firstStartedPulling="2026-03-20 08:57:47.184790154 +0000 UTC m=+477.279999288" lastFinishedPulling="2026-03-20 08:58:03.188742697 +0000 UTC m=+493.283951841" observedRunningTime="2026-03-20 08:58:04.340586062 +0000 UTC m=+494.435795216" watchObservedRunningTime="2026-03-20 08:58:04.346625264 +0000 UTC m=+494.441834418"
Mar 20 08:58:04.368391 master-0 kubenswrapper[27820]: I0320 08:58:04.368306 27820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-5545bd876-hjj29" podStartSLOduration=16.368291526 podStartE2EDuration="16.368291526s" podCreationTimestamp="2026-03-20 08:57:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:58:04.365533112 +0000 UTC m=+494.460742256" watchObservedRunningTime="2026-03-20 08:58:04.368291526 +0000 UTC m=+494.463500670"
Mar 20 08:58:04.386846 master-0 kubenswrapper[27820]: I0320 08:58:04.386760 27820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-6888856db4-sb8xw" podStartSLOduration=2.890521814 podStartE2EDuration="19.386734501s" podCreationTimestamp="2026-03-20 08:57:45 +0000 UTC" firstStartedPulling="2026-03-20 08:57:46.65459178 +0000 UTC m=+476.749800924" lastFinishedPulling="2026-03-20 08:58:03.150804467 +0000 UTC m=+493.246013611" observedRunningTime="2026-03-20 08:58:04.384625295 +0000 UTC m=+494.479834449" watchObservedRunningTime="2026-03-20 08:58:04.386734501 +0000 UTC m=+494.481943645"
Mar 20 08:58:04.416746 master-0 kubenswrapper[27820]: I0320 08:58:04.414160 27820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-8ff7d675-wptls" podStartSLOduration=2.607125366 podStartE2EDuration="20.414141968s" podCreationTimestamp="2026-03-20 08:57:44 +0000 UTC" firstStartedPulling="2026-03-20 08:57:45.419702904 +0000 UTC m=+475.514912048" lastFinishedPulling="2026-03-20 08:58:03.226719506 +0000 UTC m=+493.321928650" observedRunningTime="2026-03-20 08:58:04.410995053 +0000 UTC m=+494.506204197" watchObservedRunningTime="2026-03-20 08:58:04.414141968 +0000 UTC m=+494.509351112"
Mar 20 08:58:04.440147 master-0 kubenswrapper[27820]: I0320 08:58:04.438726 27820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bbbbf679-lxm5t" podStartSLOduration=2.237080179 podStartE2EDuration="19.438710928s" podCreationTimestamp="2026-03-20 08:57:45 +0000 UTC" firstStartedPulling="2026-03-20 08:57:46.087862144 +0000 UTC m=+476.183071288" lastFinishedPulling="2026-03-20 08:58:03.289492893 +0000 UTC m=+493.384702037" observedRunningTime="2026-03-20 08:58:04.434200776 +0000 UTC m=+494.529409920" watchObservedRunningTime="2026-03-20 08:58:04.438710928 +0000 UTC m=+494.533920072"
Mar 20 08:58:04.478009 master-0 kubenswrapper[27820]: I0320 08:58:04.477911 27820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-6dd7dd855f-q9drt" podStartSLOduration=2.637909638 podStartE2EDuration="19.477891471s" podCreationTimestamp="2026-03-20 08:57:45 +0000 UTC" firstStartedPulling="2026-03-20 08:57:46.558522809 +0000 UTC m=+476.653731953" lastFinishedPulling="2026-03-20 08:58:03.398504642 +0000 UTC m=+493.493713786" observedRunningTime="2026-03-20 08:58:04.475697571 +0000 UTC m=+494.570906725" watchObservedRunningTime="2026-03-20 08:58:04.477891471 +0000 UTC m=+494.573100615"
Mar 20 08:58:04.507719 master-0 kubenswrapper[27820]: I0320 08:58:04.507620 27820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-bbbbf679-q9r4d" podStartSLOduration=2.370162094 podStartE2EDuration="19.507599059s" podCreationTimestamp="2026-03-20 08:57:45 +0000 UTC" firstStartedPulling="2026-03-20 08:57:46.088239234 +0000 UTC m=+476.183448378" lastFinishedPulling="2026-03-20 08:58:03.225676199 +0000 UTC m=+493.320885343" observedRunningTime="2026-03-20 08:58:04.499951703 +0000 UTC m=+494.595160847" watchObservedRunningTime="2026-03-20 08:58:04.507599059 +0000 UTC m=+494.602808203"
Mar 20 08:58:04.721774 master-0 kubenswrapper[27820]: I0320 08:58:04.721704 27820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-545d4d4674-nld4c"]
Mar 20 08:58:04.722743 master-0 kubenswrapper[27820]: I0320 08:58:04.722712 27820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-nld4c"
Mar 20 08:58:04.734027 master-0 kubenswrapper[27820]: I0320 08:58:04.733964 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-nld4c"]
Mar 20 08:58:04.871598 master-0 kubenswrapper[27820]: I0320 08:58:04.871462 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/184b1066-67c3-4648-b721-ff50069ebd67-bound-sa-token\") pod \"cert-manager-545d4d4674-nld4c\" (UID: \"184b1066-67c3-4648-b721-ff50069ebd67\") " pod="cert-manager/cert-manager-545d4d4674-nld4c"
Mar 20 08:58:04.872221 master-0 kubenswrapper[27820]: I0320 08:58:04.871642 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5rls\" (UniqueName: \"kubernetes.io/projected/184b1066-67c3-4648-b721-ff50069ebd67-kube-api-access-h5rls\") pod \"cert-manager-545d4d4674-nld4c\" (UID: \"184b1066-67c3-4648-b721-ff50069ebd67\") " pod="cert-manager/cert-manager-545d4d4674-nld4c"
Mar 20 08:58:04.974131 master-0 kubenswrapper[27820]: I0320 08:58:04.973495 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/184b1066-67c3-4648-b721-ff50069ebd67-bound-sa-token\") pod \"cert-manager-545d4d4674-nld4c\" (UID: \"184b1066-67c3-4648-b721-ff50069ebd67\") " pod="cert-manager/cert-manager-545d4d4674-nld4c"
Mar 20 08:58:04.974131 master-0 kubenswrapper[27820]: I0320 08:58:04.973561 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5rls\" (UniqueName: \"kubernetes.io/projected/184b1066-67c3-4648-b721-ff50069ebd67-kube-api-access-h5rls\") pod \"cert-manager-545d4d4674-nld4c\" (UID: \"184b1066-67c3-4648-b721-ff50069ebd67\") " pod="cert-manager/cert-manager-545d4d4674-nld4c"
Mar 20 08:58:04.997966 master-0 kubenswrapper[27820]: I0320 08:58:04.997911 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/184b1066-67c3-4648-b721-ff50069ebd67-bound-sa-token\") pod \"cert-manager-545d4d4674-nld4c\" (UID: \"184b1066-67c3-4648-b721-ff50069ebd67\") " pod="cert-manager/cert-manager-545d4d4674-nld4c"
Mar 20 08:58:04.999800 master-0 kubenswrapper[27820]: I0320 08:58:04.999767 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5rls\" (UniqueName: \"kubernetes.io/projected/184b1066-67c3-4648-b721-ff50069ebd67-kube-api-access-h5rls\") pod \"cert-manager-545d4d4674-nld4c\" (UID: \"184b1066-67c3-4648-b721-ff50069ebd67\") " pod="cert-manager/cert-manager-545d4d4674-nld4c"
Mar 20 08:58:05.059035 master-0 kubenswrapper[27820]: I0320 08:58:05.058982 27820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-nld4c"
Mar 20 08:58:05.551530 master-0 kubenswrapper[27820]: I0320 08:58:05.548154 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-nld4c"]
Mar 20 08:58:06.346720 master-0 kubenswrapper[27820]: I0320 08:58:06.346626 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-nld4c" event={"ID":"184b1066-67c3-4648-b721-ff50069ebd67","Type":"ContainerStarted","Data":"b6ab6534bed8e19638217c58eccf8e807548cfd862187a119e88f43f4a632a5f"}
Mar 20 08:58:06.346720 master-0 kubenswrapper[27820]: I0320 08:58:06.346712 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-nld4c" event={"ID":"184b1066-67c3-4648-b721-ff50069ebd67","Type":"ContainerStarted","Data":"0a80d700a037dc84f9c74ea1a720f8156781d3838cbf8b6710f34aa6194d9830"}
Mar 20 08:58:06.365592 master-0 kubenswrapper[27820]: I0320 08:58:06.365477 27820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-545d4d4674-nld4c" podStartSLOduration=2.365453472 podStartE2EDuration="2.365453472s" podCreationTimestamp="2026-03-20 08:58:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:58:06.362287867 +0000 UTC m=+496.457497041" watchObservedRunningTime="2026-03-20 08:58:06.365453472 +0000 UTC m=+496.460662636"
Mar 20 08:58:09.372990 master-0 kubenswrapper[27820]: I0320 08:58:09.372913 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6485fcfd64-dfmlb" event={"ID":"df2f3936-c47f-46f1-acb5-0af23bf9bf1c","Type":"ContainerStarted","Data":"d600637fa222bc7945c0aeea8c1a0bdcadc712de6b209ff19280f41d31cb56d4"}
Mar 20 08:58:09.373762 master-0 kubenswrapper[27820]: I0320 08:58:09.373043 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-6485fcfd64-dfmlb"
Mar 20 08:58:09.374541 master-0 kubenswrapper[27820]: I0320 08:58:09.374502 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6465fb44b7-42g97" event={"ID":"21e7b181-5c6b-433e-adf7-ebe2d7b45aa7","Type":"ContainerStarted","Data":"1eebe049bda966835a3aa422a2d3013da8b39a779450e536901c8033e290a35a"}
Mar 20 08:58:09.374882 master-0 kubenswrapper[27820]: I0320 08:58:09.374850 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-6465fb44b7-42g97"
Mar 20 08:58:09.429681 master-0 kubenswrapper[27820]: I0320 08:58:09.429537 27820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-6485fcfd64-dfmlb" podStartSLOduration=12.146157259 podStartE2EDuration="17.429517221s" podCreationTimestamp="2026-03-20 08:57:52 +0000 UTC" firstStartedPulling="2026-03-20 08:58:03.879767691 +0000 UTC m=+493.974976835" lastFinishedPulling="2026-03-20 08:58:09.163127653 +0000 UTC m=+499.258336797" observedRunningTime="2026-03-20 08:58:09.404727555 +0000 UTC m=+499.499936709" watchObservedRunningTime="2026-03-20 08:58:09.429517221 +0000 UTC m=+499.524726365"
Mar 20 08:58:09.431920 master-0 kubenswrapper[27820]: I0320 08:58:09.431867 27820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-6465fb44b7-42g97" podStartSLOduration=11.042687478 podStartE2EDuration="16.431852783s" podCreationTimestamp="2026-03-20 08:57:53 +0000 UTC" firstStartedPulling="2026-03-20 08:58:03.795366054 +0000 UTC m=+493.890575198" lastFinishedPulling="2026-03-20 08:58:09.184531359 +0000 UTC m=+499.279740503" observedRunningTime="2026-03-20 08:58:09.427822685 +0000 UTC m=+499.523031849" watchObservedRunningTime="2026-03-20 08:58:09.431852783 +0000 UTC m=+499.527061927"
Mar 20 08:58:11.021240 master-0 kubenswrapper[27820]: I0320 08:58:11.021190 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-6888856db4-sb8xw"
Mar 20 08:58:16.712730 master-0 kubenswrapper[27820]: I0320 08:58:16.712616 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-9d56b9f9d-lplkg"
Mar 20 08:58:23.677931 master-0 kubenswrapper[27820]: I0320 08:58:23.677839 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-6465fb44b7-42g97"
Mar 20 08:58:43.128798 master-0 kubenswrapper[27820]: I0320 08:58:43.128732 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-6485fcfd64-dfmlb"
Mar 20 08:58:50.639367 master-0 kubenswrapper[27820]: I0320 08:58:50.639315 27820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-pn2fv"]
Mar 20 08:58:50.640337 master-0 kubenswrapper[27820]: I0320 08:58:50.640320 27820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-pn2fv"
Mar 20 08:58:50.645629 master-0 kubenswrapper[27820]: I0320 08:58:50.645411 27820 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert"
Mar 20 08:58:50.654430 master-0 kubenswrapper[27820]: I0320 08:58:50.651964 27820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-tl7kr"]
Mar 20 08:58:50.656986 master-0 kubenswrapper[27820]: I0320 08:58:50.656939 27820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-tl7kr"
Mar 20 08:58:50.661867 master-0 kubenswrapper[27820]: I0320 08:58:50.661577 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup"
Mar 20 08:58:50.661867 master-0 kubenswrapper[27820]: I0320 08:58:50.661711 27820 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret"
Mar 20 08:58:50.685073 master-0 kubenswrapper[27820]: I0320 08:58:50.685033 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/2854e640-52d9-4a65-8b6c-4bc273a80668-frr-startup\") pod \"frr-k8s-tl7kr\" (UID: \"2854e640-52d9-4a65-8b6c-4bc273a80668\") " pod="metallb-system/frr-k8s-tl7kr"
Mar 20 08:58:50.685172 master-0 kubenswrapper[27820]: I0320 08:58:50.685090 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/2854e640-52d9-4a65-8b6c-4bc273a80668-reloader\") pod \"frr-k8s-tl7kr\" (UID: \"2854e640-52d9-4a65-8b6c-4bc273a80668\") " pod="metallb-system/frr-k8s-tl7kr"
Mar 20 08:58:50.685172 master-0 kubenswrapper[27820]: I0320 08:58:50.685114 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bb8fd2f6-e697-42fa-8e7d-f5737a39f6e8-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-pn2fv\" (UID: \"bb8fd2f6-e697-42fa-8e7d-f5737a39f6e8\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-pn2fv"
Mar 20 08:58:50.685172 master-0 kubenswrapper[27820]: I0320 08:58:50.685155 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2854e640-52d9-4a65-8b6c-4bc273a80668-metrics-certs\") pod \"frr-k8s-tl7kr\" (UID: \"2854e640-52d9-4a65-8b6c-4bc273a80668\") " pod="metallb-system/frr-k8s-tl7kr"
Mar 20 08:58:50.685286 master-0 kubenswrapper[27820]: I0320 08:58:50.685171 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/2854e640-52d9-4a65-8b6c-4bc273a80668-frr-sockets\") pod \"frr-k8s-tl7kr\" (UID: \"2854e640-52d9-4a65-8b6c-4bc273a80668\") " pod="metallb-system/frr-k8s-tl7kr"
Mar 20 08:58:50.685286 master-0 kubenswrapper[27820]: I0320 08:58:50.685188 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t78gq\" (UniqueName: \"kubernetes.io/projected/bb8fd2f6-e697-42fa-8e7d-f5737a39f6e8-kube-api-access-t78gq\") pod \"frr-k8s-webhook-server-bcc4b6f68-pn2fv\" (UID: \"bb8fd2f6-e697-42fa-8e7d-f5737a39f6e8\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-pn2fv"
Mar 20 08:58:50.685286 master-0 kubenswrapper[27820]: I0320 08:58:50.685225 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/2854e640-52d9-4a65-8b6c-4bc273a80668-frr-conf\") pod \"frr-k8s-tl7kr\" (UID: \"2854e640-52d9-4a65-8b6c-4bc273a80668\") " pod="metallb-system/frr-k8s-tl7kr"
Mar 20 08:58:50.685286 master-0 kubenswrapper[27820]: I0320 08:58:50.685254 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4h7w\" (UniqueName: \"kubernetes.io/projected/2854e640-52d9-4a65-8b6c-4bc273a80668-kube-api-access-m4h7w\") pod \"frr-k8s-tl7kr\" (UID: \"2854e640-52d9-4a65-8b6c-4bc273a80668\") " pod="metallb-system/frr-k8s-tl7kr"
Mar 20 08:58:50.685406 master-0 kubenswrapper[27820]: I0320 08:58:50.685295 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/2854e640-52d9-4a65-8b6c-4bc273a80668-metrics\") pod \"frr-k8s-tl7kr\" (UID: \"2854e640-52d9-4a65-8b6c-4bc273a80668\") " pod="metallb-system/frr-k8s-tl7kr"
Mar 20 08:58:50.696404 master-0 kubenswrapper[27820]: I0320 08:58:50.689111 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-pn2fv"]
Mar 20 08:58:50.731187 master-0 kubenswrapper[27820]: I0320 08:58:50.729444 27820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-p2fhx"]
Mar 20 08:58:50.731187 master-0 kubenswrapper[27820]: I0320 08:58:50.730774 27820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-p2fhx"
Mar 20 08:58:50.739283 master-0 kubenswrapper[27820]: I0320 08:58:50.734003 27820 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret"
Mar 20 08:58:50.739283 master-0 kubenswrapper[27820]: I0320 08:58:50.734218 27820 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist"
Mar 20 08:58:50.739283 master-0 kubenswrapper[27820]: I0320 08:58:50.734385 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2"
Mar 20 08:58:50.758306 master-0 kubenswrapper[27820]: I0320 08:58:50.757295 27820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-7bb4cc7c98-z9frd"]
Mar 20 08:58:50.764284 master-0 kubenswrapper[27820]: I0320 08:58:50.758822 27820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-7bb4cc7c98-z9frd"
Mar 20 08:58:50.764284 master-0 kubenswrapper[27820]: I0320 08:58:50.761567 27820 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret"
Mar 20 08:58:50.790289 master-0 kubenswrapper[27820]: I0320 08:58:50.783071 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-7bb4cc7c98-z9frd"]
Mar 20 08:58:50.790289 master-0 kubenswrapper[27820]: I0320 08:58:50.787287 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4h7w\" (UniqueName: \"kubernetes.io/projected/2854e640-52d9-4a65-8b6c-4bc273a80668-kube-api-access-m4h7w\") pod \"frr-k8s-tl7kr\" (UID: \"2854e640-52d9-4a65-8b6c-4bc273a80668\") " pod="metallb-system/frr-k8s-tl7kr"
Mar 20 08:58:50.790289 master-0 kubenswrapper[27820]: I0320 08:58:50.787337 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f01da03b-1f5e-4ade-a4f9-e0dac32eb142-metrics-certs\") pod \"speaker-p2fhx\" (UID: \"f01da03b-1f5e-4ade-a4f9-e0dac32eb142\") " pod="metallb-system/speaker-p2fhx"
Mar 20 08:58:50.790289 master-0 kubenswrapper[27820]: I0320 08:58:50.787359 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/2854e640-52d9-4a65-8b6c-4bc273a80668-metrics\") pod \"frr-k8s-tl7kr\" (UID: \"2854e640-52d9-4a65-8b6c-4bc273a80668\") " pod="metallb-system/frr-k8s-tl7kr"
Mar 20 08:58:50.790289 master-0 kubenswrapper[27820]: I0320 08:58:50.787380 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/f01da03b-1f5e-4ade-a4f9-e0dac32eb142-memberlist\") pod \"speaker-p2fhx\" (UID: \"f01da03b-1f5e-4ade-a4f9-e0dac32eb142\") " pod="metallb-system/speaker-p2fhx"
Mar 20 08:58:50.790289 master-0 kubenswrapper[27820]: I0320 08:58:50.787401 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/2854e640-52d9-4a65-8b6c-4bc273a80668-frr-startup\") pod \"frr-k8s-tl7kr\" (UID: \"2854e640-52d9-4a65-8b6c-4bc273a80668\") " pod="metallb-system/frr-k8s-tl7kr"
Mar 20 08:58:50.790289 master-0 kubenswrapper[27820]: I0320 08:58:50.787498 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnxht\" (UniqueName: \"kubernetes.io/projected/51eafdd1-74d2-4441-96c6-e5edd8705e55-kube-api-access-fnxht\") pod \"controller-7bb4cc7c98-z9frd\" (UID: \"51eafdd1-74d2-4441-96c6-e5edd8705e55\") " pod="metallb-system/controller-7bb4cc7c98-z9frd"
Mar 20 08:58:50.790289 master-0 kubenswrapper[27820]: I0320 08:58:50.787525 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/2854e640-52d9-4a65-8b6c-4bc273a80668-reloader\") pod \"frr-k8s-tl7kr\" (UID: \"2854e640-52d9-4a65-8b6c-4bc273a80668\") " pod="metallb-system/frr-k8s-tl7kr"
Mar 20 08:58:50.790289 master-0 kubenswrapper[27820]: I0320 08:58:50.787540 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/f01da03b-1f5e-4ade-a4f9-e0dac32eb142-metallb-excludel2\") pod \"speaker-p2fhx\" (UID: \"f01da03b-1f5e-4ade-a4f9-e0dac32eb142\") " pod="metallb-system/speaker-p2fhx"
Mar 20 08:58:50.790289 master-0 kubenswrapper[27820]: I0320 08:58:50.787563 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bb8fd2f6-e697-42fa-8e7d-f5737a39f6e8-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-pn2fv\" (UID: \"bb8fd2f6-e697-42fa-8e7d-f5737a39f6e8\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-pn2fv"
Mar 20 08:58:50.790289 master-0 kubenswrapper[27820]: I0320 08:58:50.787587 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/51eafdd1-74d2-4441-96c6-e5edd8705e55-metrics-certs\") pod \"controller-7bb4cc7c98-z9frd\" (UID: \"51eafdd1-74d2-4441-96c6-e5edd8705e55\") " pod="metallb-system/controller-7bb4cc7c98-z9frd"
Mar 20 08:58:50.790289 master-0 kubenswrapper[27820]: I0320 08:58:50.787622 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2854e640-52d9-4a65-8b6c-4bc273a80668-metrics-certs\") pod \"frr-k8s-tl7kr\" (UID: \"2854e640-52d9-4a65-8b6c-4bc273a80668\") " pod="metallb-system/frr-k8s-tl7kr"
Mar 20 08:58:50.790289 master-0 kubenswrapper[27820]: I0320 08:58:50.787636 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/2854e640-52d9-4a65-8b6c-4bc273a80668-frr-sockets\") pod \"frr-k8s-tl7kr\" (UID: \"2854e640-52d9-4a65-8b6c-4bc273a80668\") " pod="metallb-system/frr-k8s-tl7kr"
Mar 20 08:58:50.790289 master-0 kubenswrapper[27820]: I0320 08:58:50.787654 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t78gq\" (UniqueName: \"kubernetes.io/projected/bb8fd2f6-e697-42fa-8e7d-f5737a39f6e8-kube-api-access-t78gq\") pod \"frr-k8s-webhook-server-bcc4b6f68-pn2fv\" (UID: \"bb8fd2f6-e697-42fa-8e7d-f5737a39f6e8\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-pn2fv"
Mar 20 08:58:50.790289 master-0 kubenswrapper[27820]: I0320 08:58:50.787681 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rswkw\" (UniqueName: \"kubernetes.io/projected/f01da03b-1f5e-4ade-a4f9-e0dac32eb142-kube-api-access-rswkw\") pod \"speaker-p2fhx\" (UID: \"f01da03b-1f5e-4ade-a4f9-e0dac32eb142\") " pod="metallb-system/speaker-p2fhx"
Mar 20 08:58:50.790289 master-0 kubenswrapper[27820]: I0320 08:58:50.787698 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/2854e640-52d9-4a65-8b6c-4bc273a80668-frr-conf\") pod \"frr-k8s-tl7kr\" (UID: \"2854e640-52d9-4a65-8b6c-4bc273a80668\") " pod="metallb-system/frr-k8s-tl7kr"
Mar 20 08:58:50.790289 master-0 kubenswrapper[27820]: I0320 08:58:50.787713 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/51eafdd1-74d2-4441-96c6-e5edd8705e55-cert\") pod \"controller-7bb4cc7c98-z9frd\" (UID: \"51eafdd1-74d2-4441-96c6-e5edd8705e55\") " pod="metallb-system/controller-7bb4cc7c98-z9frd"
Mar 20 08:58:50.790289 master-0 kubenswrapper[27820]: I0320 08:58:50.788346 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/2854e640-52d9-4a65-8b6c-4bc273a80668-metrics\") pod \"frr-k8s-tl7kr\" (UID: \"2854e640-52d9-4a65-8b6c-4bc273a80668\") " pod="metallb-system/frr-k8s-tl7kr"
Mar 20 08:58:50.790289 master-0 kubenswrapper[27820]: I0320 08:58:50.789035 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/2854e640-52d9-4a65-8b6c-4bc273a80668-frr-startup\") pod \"frr-k8s-tl7kr\" (UID: \"2854e640-52d9-4a65-8b6c-4bc273a80668\") " pod="metallb-system/frr-k8s-tl7kr"
Mar 20 08:58:50.799289 master-0 kubenswrapper[27820]: I0320 08:58:50.791720 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/2854e640-52d9-4a65-8b6c-4bc273a80668-reloader\") pod \"frr-k8s-tl7kr\" (UID: \"2854e640-52d9-4a65-8b6c-4bc273a80668\") " pod="metallb-system/frr-k8s-tl7kr"
Mar 20 08:58:50.799289 master-0 kubenswrapper[27820]: I0320 08:58:50.794509 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/bb8fd2f6-e697-42fa-8e7d-f5737a39f6e8-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-pn2fv\" (UID: \"bb8fd2f6-e697-42fa-8e7d-f5737a39f6e8\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-pn2fv"
Mar 20 08:58:50.799289 master-0 kubenswrapper[27820]: I0320 08:58:50.796085 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/2854e640-52d9-4a65-8b6c-4bc273a80668-frr-sockets\") pod \"frr-k8s-tl7kr\" (UID: \"2854e640-52d9-4a65-8b6c-4bc273a80668\") " pod="metallb-system/frr-k8s-tl7kr"
Mar 20 08:58:50.799289 master-0 kubenswrapper[27820]: I0320 08:58:50.796716 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/2854e640-52d9-4a65-8b6c-4bc273a80668-frr-conf\") pod \"frr-k8s-tl7kr\" (UID: \"2854e640-52d9-4a65-8b6c-4bc273a80668\") " 
pod="metallb-system/frr-k8s-tl7kr" Mar 20 08:58:50.805314 master-0 kubenswrapper[27820]: I0320 08:58:50.804345 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4h7w\" (UniqueName: \"kubernetes.io/projected/2854e640-52d9-4a65-8b6c-4bc273a80668-kube-api-access-m4h7w\") pod \"frr-k8s-tl7kr\" (UID: \"2854e640-52d9-4a65-8b6c-4bc273a80668\") " pod="metallb-system/frr-k8s-tl7kr" Mar 20 08:58:50.806705 master-0 kubenswrapper[27820]: I0320 08:58:50.806624 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2854e640-52d9-4a65-8b6c-4bc273a80668-metrics-certs\") pod \"frr-k8s-tl7kr\" (UID: \"2854e640-52d9-4a65-8b6c-4bc273a80668\") " pod="metallb-system/frr-k8s-tl7kr" Mar 20 08:58:50.814926 master-0 kubenswrapper[27820]: I0320 08:58:50.814470 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t78gq\" (UniqueName: \"kubernetes.io/projected/bb8fd2f6-e697-42fa-8e7d-f5737a39f6e8-kube-api-access-t78gq\") pod \"frr-k8s-webhook-server-bcc4b6f68-pn2fv\" (UID: \"bb8fd2f6-e697-42fa-8e7d-f5737a39f6e8\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-pn2fv" Mar 20 08:58:50.888924 master-0 kubenswrapper[27820]: I0320 08:58:50.888753 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/51eafdd1-74d2-4441-96c6-e5edd8705e55-metrics-certs\") pod \"controller-7bb4cc7c98-z9frd\" (UID: \"51eafdd1-74d2-4441-96c6-e5edd8705e55\") " pod="metallb-system/controller-7bb4cc7c98-z9frd" Mar 20 08:58:50.888924 master-0 kubenswrapper[27820]: I0320 08:58:50.888921 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rswkw\" (UniqueName: \"kubernetes.io/projected/f01da03b-1f5e-4ade-a4f9-e0dac32eb142-kube-api-access-rswkw\") pod \"speaker-p2fhx\" (UID: \"f01da03b-1f5e-4ade-a4f9-e0dac32eb142\") " 
pod="metallb-system/speaker-p2fhx" Mar 20 08:58:50.889209 master-0 kubenswrapper[27820]: I0320 08:58:50.888958 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/51eafdd1-74d2-4441-96c6-e5edd8705e55-cert\") pod \"controller-7bb4cc7c98-z9frd\" (UID: \"51eafdd1-74d2-4441-96c6-e5edd8705e55\") " pod="metallb-system/controller-7bb4cc7c98-z9frd" Mar 20 08:58:50.889209 master-0 kubenswrapper[27820]: I0320 08:58:50.889034 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f01da03b-1f5e-4ade-a4f9-e0dac32eb142-metrics-certs\") pod \"speaker-p2fhx\" (UID: \"f01da03b-1f5e-4ade-a4f9-e0dac32eb142\") " pod="metallb-system/speaker-p2fhx" Mar 20 08:58:50.889209 master-0 kubenswrapper[27820]: I0320 08:58:50.889097 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/f01da03b-1f5e-4ade-a4f9-e0dac32eb142-memberlist\") pod \"speaker-p2fhx\" (UID: \"f01da03b-1f5e-4ade-a4f9-e0dac32eb142\") " pod="metallb-system/speaker-p2fhx" Mar 20 08:58:50.889209 master-0 kubenswrapper[27820]: I0320 08:58:50.889170 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnxht\" (UniqueName: \"kubernetes.io/projected/51eafdd1-74d2-4441-96c6-e5edd8705e55-kube-api-access-fnxht\") pod \"controller-7bb4cc7c98-z9frd\" (UID: \"51eafdd1-74d2-4441-96c6-e5edd8705e55\") " pod="metallb-system/controller-7bb4cc7c98-z9frd" Mar 20 08:58:50.889209 master-0 kubenswrapper[27820]: I0320 08:58:50.889204 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/f01da03b-1f5e-4ade-a4f9-e0dac32eb142-metallb-excludel2\") pod \"speaker-p2fhx\" (UID: \"f01da03b-1f5e-4ade-a4f9-e0dac32eb142\") " pod="metallb-system/speaker-p2fhx" Mar 20 08:58:50.889952 master-0 
kubenswrapper[27820]: E0320 08:58:50.889487 27820 secret.go:189] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Mar 20 08:58:50.890466 master-0 kubenswrapper[27820]: E0320 08:58:50.890160 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/51eafdd1-74d2-4441-96c6-e5edd8705e55-metrics-certs podName:51eafdd1-74d2-4441-96c6-e5edd8705e55 nodeName:}" failed. No retries permitted until 2026-03-20 08:58:51.389554786 +0000 UTC m=+541.484763930 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/51eafdd1-74d2-4441-96c6-e5edd8705e55-metrics-certs") pod "controller-7bb4cc7c98-z9frd" (UID: "51eafdd1-74d2-4441-96c6-e5edd8705e55") : secret "controller-certs-secret" not found Mar 20 08:58:50.890466 master-0 kubenswrapper[27820]: E0320 08:58:50.890318 27820 secret.go:189] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Mar 20 08:58:50.890466 master-0 kubenswrapper[27820]: E0320 08:58:50.890400 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f01da03b-1f5e-4ade-a4f9-e0dac32eb142-memberlist podName:f01da03b-1f5e-4ade-a4f9-e0dac32eb142 nodeName:}" failed. No retries permitted until 2026-03-20 08:58:51.390380069 +0000 UTC m=+541.485589273 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/f01da03b-1f5e-4ade-a4f9-e0dac32eb142-memberlist") pod "speaker-p2fhx" (UID: "f01da03b-1f5e-4ade-a4f9-e0dac32eb142") : secret "metallb-memberlist" not found Mar 20 08:58:50.890723 master-0 kubenswrapper[27820]: E0320 08:58:50.890703 27820 secret.go:189] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Mar 20 08:58:50.892678 master-0 kubenswrapper[27820]: I0320 08:58:50.892581 27820 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Mar 20 08:58:50.892984 master-0 kubenswrapper[27820]: I0320 08:58:50.892963 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/f01da03b-1f5e-4ade-a4f9-e0dac32eb142-metallb-excludel2\") pod \"speaker-p2fhx\" (UID: \"f01da03b-1f5e-4ade-a4f9-e0dac32eb142\") " pod="metallb-system/speaker-p2fhx" Mar 20 08:58:50.893242 master-0 kubenswrapper[27820]: E0320 08:58:50.893227 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f01da03b-1f5e-4ade-a4f9-e0dac32eb142-metrics-certs podName:f01da03b-1f5e-4ade-a4f9-e0dac32eb142 nodeName:}" failed. No retries permitted until 2026-03-20 08:58:51.392575397 +0000 UTC m=+541.487784561 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/f01da03b-1f5e-4ade-a4f9-e0dac32eb142-metrics-certs") pod "speaker-p2fhx" (UID: "f01da03b-1f5e-4ade-a4f9-e0dac32eb142") : secret "speaker-certs-secret" not found Mar 20 08:58:50.914046 master-0 kubenswrapper[27820]: I0320 08:58:50.914001 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnxht\" (UniqueName: \"kubernetes.io/projected/51eafdd1-74d2-4441-96c6-e5edd8705e55-kube-api-access-fnxht\") pod \"controller-7bb4cc7c98-z9frd\" (UID: \"51eafdd1-74d2-4441-96c6-e5edd8705e55\") " pod="metallb-system/controller-7bb4cc7c98-z9frd" Mar 20 08:58:50.915318 master-0 kubenswrapper[27820]: I0320 08:58:50.915235 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rswkw\" (UniqueName: \"kubernetes.io/projected/f01da03b-1f5e-4ade-a4f9-e0dac32eb142-kube-api-access-rswkw\") pod \"speaker-p2fhx\" (UID: \"f01da03b-1f5e-4ade-a4f9-e0dac32eb142\") " pod="metallb-system/speaker-p2fhx" Mar 20 08:58:50.922866 master-0 kubenswrapper[27820]: I0320 08:58:50.922833 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/51eafdd1-74d2-4441-96c6-e5edd8705e55-cert\") pod \"controller-7bb4cc7c98-z9frd\" (UID: \"51eafdd1-74d2-4441-96c6-e5edd8705e55\") " pod="metallb-system/controller-7bb4cc7c98-z9frd" Mar 20 08:58:50.976987 master-0 kubenswrapper[27820]: I0320 08:58:50.976918 27820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-pn2fv" Mar 20 08:58:50.992102 master-0 kubenswrapper[27820]: I0320 08:58:50.992035 27820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-tl7kr" Mar 20 08:58:51.400314 master-0 kubenswrapper[27820]: I0320 08:58:51.400230 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f01da03b-1f5e-4ade-a4f9-e0dac32eb142-metrics-certs\") pod \"speaker-p2fhx\" (UID: \"f01da03b-1f5e-4ade-a4f9-e0dac32eb142\") " pod="metallb-system/speaker-p2fhx" Mar 20 08:58:51.400544 master-0 kubenswrapper[27820]: I0320 08:58:51.400338 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/f01da03b-1f5e-4ade-a4f9-e0dac32eb142-memberlist\") pod \"speaker-p2fhx\" (UID: \"f01da03b-1f5e-4ade-a4f9-e0dac32eb142\") " pod="metallb-system/speaker-p2fhx" Mar 20 08:58:51.400544 master-0 kubenswrapper[27820]: I0320 08:58:51.400403 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/51eafdd1-74d2-4441-96c6-e5edd8705e55-metrics-certs\") pod \"controller-7bb4cc7c98-z9frd\" (UID: \"51eafdd1-74d2-4441-96c6-e5edd8705e55\") " pod="metallb-system/controller-7bb4cc7c98-z9frd" Mar 20 08:58:51.401448 master-0 kubenswrapper[27820]: E0320 08:58:51.401342 27820 secret.go:189] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Mar 20 08:58:51.401521 master-0 kubenswrapper[27820]: E0320 08:58:51.401510 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f01da03b-1f5e-4ade-a4f9-e0dac32eb142-memberlist podName:f01da03b-1f5e-4ade-a4f9-e0dac32eb142 nodeName:}" failed. No retries permitted until 2026-03-20 08:58:52.40148633 +0000 UTC m=+542.496695494 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/f01da03b-1f5e-4ade-a4f9-e0dac32eb142-memberlist") pod "speaker-p2fhx" (UID: "f01da03b-1f5e-4ade-a4f9-e0dac32eb142") : secret "metallb-memberlist" not found Mar 20 08:58:51.404001 master-0 kubenswrapper[27820]: I0320 08:58:51.403948 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f01da03b-1f5e-4ade-a4f9-e0dac32eb142-metrics-certs\") pod \"speaker-p2fhx\" (UID: \"f01da03b-1f5e-4ade-a4f9-e0dac32eb142\") " pod="metallb-system/speaker-p2fhx" Mar 20 08:58:51.404508 master-0 kubenswrapper[27820]: I0320 08:58:51.404468 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/51eafdd1-74d2-4441-96c6-e5edd8705e55-metrics-certs\") pod \"controller-7bb4cc7c98-z9frd\" (UID: \"51eafdd1-74d2-4441-96c6-e5edd8705e55\") " pod="metallb-system/controller-7bb4cc7c98-z9frd" Mar 20 08:58:51.518954 master-0 kubenswrapper[27820]: I0320 08:58:51.518094 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-pn2fv"] Mar 20 08:58:51.518954 master-0 kubenswrapper[27820]: W0320 08:58:51.518198 27820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb8fd2f6_e697_42fa_8e7d_f5737a39f6e8.slice/crio-8a6ba644d2620ba4f002192834f94c249b1262ab14cfeb9766f7b674ab31f185 WatchSource:0}: Error finding container 8a6ba644d2620ba4f002192834f94c249b1262ab14cfeb9766f7b674ab31f185: Status 404 returned error can't find the container with id 8a6ba644d2620ba4f002192834f94c249b1262ab14cfeb9766f7b674ab31f185 Mar 20 08:58:51.685712 master-0 kubenswrapper[27820]: I0320 08:58:51.685599 27820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-7bb4cc7c98-z9frd" Mar 20 08:58:51.747055 master-0 kubenswrapper[27820]: I0320 08:58:51.742584 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-pn2fv" event={"ID":"bb8fd2f6-e697-42fa-8e7d-f5737a39f6e8","Type":"ContainerStarted","Data":"8a6ba644d2620ba4f002192834f94c249b1262ab14cfeb9766f7b674ab31f185"} Mar 20 08:58:51.747055 master-0 kubenswrapper[27820]: I0320 08:58:51.744336 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-tl7kr" event={"ID":"2854e640-52d9-4a65-8b6c-4bc273a80668","Type":"ContainerStarted","Data":"2cec61cdd5cd856a46e46df6233b1c5a7d4690edba8a5e47ec3a6c78e843715d"} Mar 20 08:58:52.096153 master-0 kubenswrapper[27820]: I0320 08:58:52.096091 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-7bb4cc7c98-z9frd"] Mar 20 08:58:52.105257 master-0 kubenswrapper[27820]: W0320 08:58:52.105216 27820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51eafdd1_74d2_4441_96c6_e5edd8705e55.slice/crio-db0399606aa9999461ef986ed2d15bfb1d5628c376f047f66f8e163e13581383 WatchSource:0}: Error finding container db0399606aa9999461ef986ed2d15bfb1d5628c376f047f66f8e163e13581383: Status 404 returned error can't find the container with id db0399606aa9999461ef986ed2d15bfb1d5628c376f047f66f8e163e13581383 Mar 20 08:58:52.415942 master-0 kubenswrapper[27820]: I0320 08:58:52.415769 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/f01da03b-1f5e-4ade-a4f9-e0dac32eb142-memberlist\") pod \"speaker-p2fhx\" (UID: \"f01da03b-1f5e-4ade-a4f9-e0dac32eb142\") " pod="metallb-system/speaker-p2fhx" Mar 20 08:58:52.421829 master-0 kubenswrapper[27820]: I0320 08:58:52.421573 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"memberlist\" (UniqueName: \"kubernetes.io/secret/f01da03b-1f5e-4ade-a4f9-e0dac32eb142-memberlist\") pod \"speaker-p2fhx\" (UID: \"f01da03b-1f5e-4ade-a4f9-e0dac32eb142\") " pod="metallb-system/speaker-p2fhx" Mar 20 08:58:52.563509 master-0 kubenswrapper[27820]: I0320 08:58:52.563352 27820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-p2fhx" Mar 20 08:58:52.599041 master-0 kubenswrapper[27820]: W0320 08:58:52.598976 27820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf01da03b_1f5e_4ade_a4f9_e0dac32eb142.slice/crio-92e4f6950941a4f03137bfc5ee66e81e9f4eec38177d369f762f639443d583bb WatchSource:0}: Error finding container 92e4f6950941a4f03137bfc5ee66e81e9f4eec38177d369f762f639443d583bb: Status 404 returned error can't find the container with id 92e4f6950941a4f03137bfc5ee66e81e9f4eec38177d369f762f639443d583bb Mar 20 08:58:52.778395 master-0 kubenswrapper[27820]: I0320 08:58:52.778224 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-z9frd" event={"ID":"51eafdd1-74d2-4441-96c6-e5edd8705e55","Type":"ContainerStarted","Data":"927f28d7a97ca71ff132313a3618c5bbfb1664eb4e10fa60932d5b0b8ab8bf26"} Mar 20 08:58:52.778395 master-0 kubenswrapper[27820]: I0320 08:58:52.778318 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-z9frd" event={"ID":"51eafdd1-74d2-4441-96c6-e5edd8705e55","Type":"ContainerStarted","Data":"db0399606aa9999461ef986ed2d15bfb1d5628c376f047f66f8e163e13581383"} Mar 20 08:58:52.779515 master-0 kubenswrapper[27820]: I0320 08:58:52.779489 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-p2fhx" event={"ID":"f01da03b-1f5e-4ade-a4f9-e0dac32eb142","Type":"ContainerStarted","Data":"92e4f6950941a4f03137bfc5ee66e81e9f4eec38177d369f762f639443d583bb"} Mar 20 08:58:53.272569 master-0 kubenswrapper[27820]: I0320 
08:58:53.272509 27820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-d25js"] Mar 20 08:58:53.274242 master-0 kubenswrapper[27820]: I0320 08:58:53.274211 27820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-d25js" Mar 20 08:58:53.305763 master-0 kubenswrapper[27820]: I0320 08:58:53.304194 27820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-mvkrl"] Mar 20 08:58:53.305763 master-0 kubenswrapper[27820]: I0320 08:58:53.305519 27820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f558f5558-mvkrl" Mar 20 08:58:53.311892 master-0 kubenswrapper[27820]: I0320 08:58:53.311844 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Mar 20 08:58:53.322295 master-0 kubenswrapper[27820]: I0320 08:58:53.320518 27820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-7c6kf"] Mar 20 08:58:53.322295 master-0 kubenswrapper[27820]: I0320 08:58:53.321901 27820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-7c6kf" Mar 20 08:58:53.346903 master-0 kubenswrapper[27820]: I0320 08:58:53.346832 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/23a22e55-9f4f-4f31-81c5-328720dee978-nmstate-lock\") pod \"nmstate-handler-7c6kf\" (UID: \"23a22e55-9f4f-4f31-81c5-328720dee978\") " pod="openshift-nmstate/nmstate-handler-7c6kf" Mar 20 08:58:53.347109 master-0 kubenswrapper[27820]: I0320 08:58:53.346961 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnrwt\" (UniqueName: \"kubernetes.io/projected/4ef5015e-1e99-4f9e-ba7c-59b462ff2188-kube-api-access-mnrwt\") pod \"nmstate-metrics-9b8c8685d-d25js\" (UID: \"4ef5015e-1e99-4f9e-ba7c-59b462ff2188\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-d25js" Mar 20 08:58:53.347109 master-0 kubenswrapper[27820]: I0320 08:58:53.347001 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/23a22e55-9f4f-4f31-81c5-328720dee978-dbus-socket\") pod \"nmstate-handler-7c6kf\" (UID: \"23a22e55-9f4f-4f31-81c5-328720dee978\") " pod="openshift-nmstate/nmstate-handler-7c6kf" Mar 20 08:58:53.347109 master-0 kubenswrapper[27820]: I0320 08:58:53.347070 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/58d0dca7-7d2f-4601-95a1-377c982d2d41-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-mvkrl\" (UID: \"58d0dca7-7d2f-4601-95a1-377c982d2d41\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-mvkrl" Mar 20 08:58:53.347109 master-0 kubenswrapper[27820]: I0320 08:58:53.347089 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: 
\"kubernetes.io/host-path/23a22e55-9f4f-4f31-81c5-328720dee978-ovs-socket\") pod \"nmstate-handler-7c6kf\" (UID: \"23a22e55-9f4f-4f31-81c5-328720dee978\") " pod="openshift-nmstate/nmstate-handler-7c6kf" Mar 20 08:58:53.347349 master-0 kubenswrapper[27820]: I0320 08:58:53.347116 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9sfd\" (UniqueName: \"kubernetes.io/projected/23a22e55-9f4f-4f31-81c5-328720dee978-kube-api-access-t9sfd\") pod \"nmstate-handler-7c6kf\" (UID: \"23a22e55-9f4f-4f31-81c5-328720dee978\") " pod="openshift-nmstate/nmstate-handler-7c6kf" Mar 20 08:58:53.347349 master-0 kubenswrapper[27820]: I0320 08:58:53.347223 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cx4mw\" (UniqueName: \"kubernetes.io/projected/58d0dca7-7d2f-4601-95a1-377c982d2d41-kube-api-access-cx4mw\") pod \"nmstate-webhook-5f558f5558-mvkrl\" (UID: \"58d0dca7-7d2f-4601-95a1-377c982d2d41\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-mvkrl" Mar 20 08:58:53.352862 master-0 kubenswrapper[27820]: I0320 08:58:53.352811 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-d25js"] Mar 20 08:58:53.371094 master-0 kubenswrapper[27820]: I0320 08:58:53.370162 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-mvkrl"] Mar 20 08:58:53.449381 master-0 kubenswrapper[27820]: I0320 08:58:53.448826 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cx4mw\" (UniqueName: \"kubernetes.io/projected/58d0dca7-7d2f-4601-95a1-377c982d2d41-kube-api-access-cx4mw\") pod \"nmstate-webhook-5f558f5558-mvkrl\" (UID: \"58d0dca7-7d2f-4601-95a1-377c982d2d41\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-mvkrl" Mar 20 08:58:53.449381 master-0 kubenswrapper[27820]: I0320 08:58:53.448925 27820 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/23a22e55-9f4f-4f31-81c5-328720dee978-nmstate-lock\") pod \"nmstate-handler-7c6kf\" (UID: \"23a22e55-9f4f-4f31-81c5-328720dee978\") " pod="openshift-nmstate/nmstate-handler-7c6kf" Mar 20 08:58:53.449381 master-0 kubenswrapper[27820]: I0320 08:58:53.448988 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnrwt\" (UniqueName: \"kubernetes.io/projected/4ef5015e-1e99-4f9e-ba7c-59b462ff2188-kube-api-access-mnrwt\") pod \"nmstate-metrics-9b8c8685d-d25js\" (UID: \"4ef5015e-1e99-4f9e-ba7c-59b462ff2188\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-d25js" Mar 20 08:58:53.449381 master-0 kubenswrapper[27820]: I0320 08:58:53.449026 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/23a22e55-9f4f-4f31-81c5-328720dee978-dbus-socket\") pod \"nmstate-handler-7c6kf\" (UID: \"23a22e55-9f4f-4f31-81c5-328720dee978\") " pod="openshift-nmstate/nmstate-handler-7c6kf" Mar 20 08:58:53.449381 master-0 kubenswrapper[27820]: I0320 08:58:53.449094 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/58d0dca7-7d2f-4601-95a1-377c982d2d41-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-mvkrl\" (UID: \"58d0dca7-7d2f-4601-95a1-377c982d2d41\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-mvkrl" Mar 20 08:58:53.449381 master-0 kubenswrapper[27820]: I0320 08:58:53.449117 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/23a22e55-9f4f-4f31-81c5-328720dee978-ovs-socket\") pod \"nmstate-handler-7c6kf\" (UID: \"23a22e55-9f4f-4f31-81c5-328720dee978\") " pod="openshift-nmstate/nmstate-handler-7c6kf" Mar 20 08:58:53.449381 master-0 kubenswrapper[27820]: I0320 
08:58:53.449153 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9sfd\" (UniqueName: \"kubernetes.io/projected/23a22e55-9f4f-4f31-81c5-328720dee978-kube-api-access-t9sfd\") pod \"nmstate-handler-7c6kf\" (UID: \"23a22e55-9f4f-4f31-81c5-328720dee978\") " pod="openshift-nmstate/nmstate-handler-7c6kf" Mar 20 08:58:53.449760 master-0 kubenswrapper[27820]: I0320 08:58:53.449717 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/23a22e55-9f4f-4f31-81c5-328720dee978-nmstate-lock\") pod \"nmstate-handler-7c6kf\" (UID: \"23a22e55-9f4f-4f31-81c5-328720dee978\") " pod="openshift-nmstate/nmstate-handler-7c6kf" Mar 20 08:58:53.451687 master-0 kubenswrapper[27820]: I0320 08:58:53.449943 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/23a22e55-9f4f-4f31-81c5-328720dee978-dbus-socket\") pod \"nmstate-handler-7c6kf\" (UID: \"23a22e55-9f4f-4f31-81c5-328720dee978\") " pod="openshift-nmstate/nmstate-handler-7c6kf" Mar 20 08:58:53.451687 master-0 kubenswrapper[27820]: E0320 08:58:53.450024 27820 secret.go:189] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Mar 20 08:58:53.451687 master-0 kubenswrapper[27820]: E0320 08:58:53.450075 27820 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58d0dca7-7d2f-4601-95a1-377c982d2d41-tls-key-pair podName:58d0dca7-7d2f-4601-95a1-377c982d2d41 nodeName:}" failed. No retries permitted until 2026-03-20 08:58:53.950057137 +0000 UTC m=+544.045266291 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/58d0dca7-7d2f-4601-95a1-377c982d2d41-tls-key-pair") pod "nmstate-webhook-5f558f5558-mvkrl" (UID: "58d0dca7-7d2f-4601-95a1-377c982d2d41") : secret "openshift-nmstate-webhook" not found Mar 20 08:58:53.451687 master-0 kubenswrapper[27820]: I0320 08:58:53.450284 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/23a22e55-9f4f-4f31-81c5-328720dee978-ovs-socket\") pod \"nmstate-handler-7c6kf\" (UID: \"23a22e55-9f4f-4f31-81c5-328720dee978\") " pod="openshift-nmstate/nmstate-handler-7c6kf" Mar 20 08:58:53.474076 master-0 kubenswrapper[27820]: I0320 08:58:53.474031 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnrwt\" (UniqueName: \"kubernetes.io/projected/4ef5015e-1e99-4f9e-ba7c-59b462ff2188-kube-api-access-mnrwt\") pod \"nmstate-metrics-9b8c8685d-d25js\" (UID: \"4ef5015e-1e99-4f9e-ba7c-59b462ff2188\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-d25js" Mar 20 08:58:53.477992 master-0 kubenswrapper[27820]: I0320 08:58:53.477957 27820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-frpgr"] Mar 20 08:58:53.478986 master-0 kubenswrapper[27820]: I0320 08:58:53.478896 27820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-frpgr" Mar 20 08:58:53.483780 master-0 kubenswrapper[27820]: I0320 08:58:53.479155 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cx4mw\" (UniqueName: \"kubernetes.io/projected/58d0dca7-7d2f-4601-95a1-377c982d2d41-kube-api-access-cx4mw\") pod \"nmstate-webhook-5f558f5558-mvkrl\" (UID: \"58d0dca7-7d2f-4601-95a1-377c982d2d41\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-mvkrl" Mar 20 08:58:53.483780 master-0 kubenswrapper[27820]: I0320 08:58:53.480975 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Mar 20 08:58:53.483780 master-0 kubenswrapper[27820]: I0320 08:58:53.481116 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Mar 20 08:58:53.505472 master-0 kubenswrapper[27820]: I0320 08:58:53.505399 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-frpgr"] Mar 20 08:58:53.519315 master-0 kubenswrapper[27820]: I0320 08:58:53.517863 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9sfd\" (UniqueName: \"kubernetes.io/projected/23a22e55-9f4f-4f31-81c5-328720dee978-kube-api-access-t9sfd\") pod \"nmstate-handler-7c6kf\" (UID: \"23a22e55-9f4f-4f31-81c5-328720dee978\") " pod="openshift-nmstate/nmstate-handler-7c6kf" Mar 20 08:58:53.555457 master-0 kubenswrapper[27820]: I0320 08:58:53.553552 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/ef2cb375-2652-47d1-bf48-a5411ff51a2c-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-frpgr\" (UID: \"ef2cb375-2652-47d1-bf48-a5411ff51a2c\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-frpgr" Mar 20 08:58:53.555457 master-0 kubenswrapper[27820]: I0320 08:58:53.553625 
27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/ef2cb375-2652-47d1-bf48-a5411ff51a2c-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-frpgr\" (UID: \"ef2cb375-2652-47d1-bf48-a5411ff51a2c\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-frpgr" Mar 20 08:58:53.555457 master-0 kubenswrapper[27820]: I0320 08:58:53.553813 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrr2w\" (UniqueName: \"kubernetes.io/projected/ef2cb375-2652-47d1-bf48-a5411ff51a2c-kube-api-access-mrr2w\") pod \"nmstate-console-plugin-86f58fcf4-frpgr\" (UID: \"ef2cb375-2652-47d1-bf48-a5411ff51a2c\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-frpgr" Mar 20 08:58:53.603514 master-0 kubenswrapper[27820]: I0320 08:58:53.603451 27820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-d25js" Mar 20 08:58:53.672893 master-0 kubenswrapper[27820]: I0320 08:58:53.662196 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/ef2cb375-2652-47d1-bf48-a5411ff51a2c-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-frpgr\" (UID: \"ef2cb375-2652-47d1-bf48-a5411ff51a2c\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-frpgr" Mar 20 08:58:53.672893 master-0 kubenswrapper[27820]: I0320 08:58:53.662387 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/ef2cb375-2652-47d1-bf48-a5411ff51a2c-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-frpgr\" (UID: \"ef2cb375-2652-47d1-bf48-a5411ff51a2c\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-frpgr" Mar 20 08:58:53.672893 master-0 kubenswrapper[27820]: I0320 08:58:53.662544 27820 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrr2w\" (UniqueName: \"kubernetes.io/projected/ef2cb375-2652-47d1-bf48-a5411ff51a2c-kube-api-access-mrr2w\") pod \"nmstate-console-plugin-86f58fcf4-frpgr\" (UID: \"ef2cb375-2652-47d1-bf48-a5411ff51a2c\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-frpgr" Mar 20 08:58:53.672893 master-0 kubenswrapper[27820]: I0320 08:58:53.671710 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/ef2cb375-2652-47d1-bf48-a5411ff51a2c-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-frpgr\" (UID: \"ef2cb375-2652-47d1-bf48-a5411ff51a2c\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-frpgr" Mar 20 08:58:53.672893 master-0 kubenswrapper[27820]: I0320 08:58:53.671928 27820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-7c6kf" Mar 20 08:58:53.685447 master-0 kubenswrapper[27820]: I0320 08:58:53.685351 27820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7bd98dd549-btpxq"] Mar 20 08:58:53.686692 master-0 kubenswrapper[27820]: I0320 08:58:53.686340 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/ef2cb375-2652-47d1-bf48-a5411ff51a2c-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-frpgr\" (UID: \"ef2cb375-2652-47d1-bf48-a5411ff51a2c\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-frpgr" Mar 20 08:58:53.687530 master-0 kubenswrapper[27820]: I0320 08:58:53.687510 27820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7bd98dd549-btpxq" Mar 20 08:58:53.702928 master-0 kubenswrapper[27820]: I0320 08:58:53.702412 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrr2w\" (UniqueName: \"kubernetes.io/projected/ef2cb375-2652-47d1-bf48-a5411ff51a2c-kube-api-access-mrr2w\") pod \"nmstate-console-plugin-86f58fcf4-frpgr\" (UID: \"ef2cb375-2652-47d1-bf48-a5411ff51a2c\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-frpgr" Mar 20 08:58:53.715330 master-0 kubenswrapper[27820]: I0320 08:58:53.712824 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7bd98dd549-btpxq"] Mar 20 08:58:53.733316 master-0 kubenswrapper[27820]: W0320 08:58:53.732708 27820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod23a22e55_9f4f_4f31_81c5_328720dee978.slice/crio-9a0fd6e2d0f72588975ef96c397caf7ad3d6dae4d88e7715e09ee3957ce48f16 WatchSource:0}: Error finding container 9a0fd6e2d0f72588975ef96c397caf7ad3d6dae4d88e7715e09ee3957ce48f16: Status 404 returned error can't find the container with id 9a0fd6e2d0f72588975ef96c397caf7ad3d6dae4d88e7715e09ee3957ce48f16 Mar 20 08:58:53.812336 master-0 kubenswrapper[27820]: I0320 08:58:53.812190 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-z9frd" event={"ID":"51eafdd1-74d2-4441-96c6-e5edd8705e55","Type":"ContainerStarted","Data":"5713120624c61ce44a603736a0000401c6fe4df9de50ea0df5442ce10acd03f7"} Mar 20 08:58:53.812336 master-0 kubenswrapper[27820]: I0320 08:58:53.812272 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-7bb4cc7c98-z9frd" Mar 20 08:58:53.814220 master-0 kubenswrapper[27820]: I0320 08:58:53.814187 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-p2fhx" 
event={"ID":"f01da03b-1f5e-4ade-a4f9-e0dac32eb142","Type":"ContainerStarted","Data":"62fff403e8cf1f17fbfb73dbd3771d37b865890a8c29a6fba291f5db2f2d5b7a"} Mar 20 08:58:53.816936 master-0 kubenswrapper[27820]: I0320 08:58:53.816882 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-7c6kf" event={"ID":"23a22e55-9f4f-4f31-81c5-328720dee978","Type":"ContainerStarted","Data":"9a0fd6e2d0f72588975ef96c397caf7ad3d6dae4d88e7715e09ee3957ce48f16"} Mar 20 08:58:53.845416 master-0 kubenswrapper[27820]: I0320 08:58:53.842740 27820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-7bb4cc7c98-z9frd" podStartSLOduration=2.664970736 podStartE2EDuration="3.842716127s" podCreationTimestamp="2026-03-20 08:58:50 +0000 UTC" firstStartedPulling="2026-03-20 08:58:52.246298827 +0000 UTC m=+542.341507971" lastFinishedPulling="2026-03-20 08:58:53.424044218 +0000 UTC m=+543.519253362" observedRunningTime="2026-03-20 08:58:53.837857596 +0000 UTC m=+543.933066770" watchObservedRunningTime="2026-03-20 08:58:53.842716127 +0000 UTC m=+543.937925291" Mar 20 08:58:53.865589 master-0 kubenswrapper[27820]: I0320 08:58:53.865533 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e50e09eb-9d18-474c-b9ef-74b91c219d00-console-config\") pod \"console-7bd98dd549-btpxq\" (UID: \"e50e09eb-9d18-474c-b9ef-74b91c219d00\") " pod="openshift-console/console-7bd98dd549-btpxq" Mar 20 08:58:53.865767 master-0 kubenswrapper[27820]: I0320 08:58:53.865614 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e50e09eb-9d18-474c-b9ef-74b91c219d00-trusted-ca-bundle\") pod \"console-7bd98dd549-btpxq\" (UID: \"e50e09eb-9d18-474c-b9ef-74b91c219d00\") " pod="openshift-console/console-7bd98dd549-btpxq" Mar 20 08:58:53.865767 
master-0 kubenswrapper[27820]: I0320 08:58:53.865654 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e50e09eb-9d18-474c-b9ef-74b91c219d00-oauth-serving-cert\") pod \"console-7bd98dd549-btpxq\" (UID: \"e50e09eb-9d18-474c-b9ef-74b91c219d00\") " pod="openshift-console/console-7bd98dd549-btpxq" Mar 20 08:58:53.865767 master-0 kubenswrapper[27820]: I0320 08:58:53.865676 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e50e09eb-9d18-474c-b9ef-74b91c219d00-service-ca\") pod \"console-7bd98dd549-btpxq\" (UID: \"e50e09eb-9d18-474c-b9ef-74b91c219d00\") " pod="openshift-console/console-7bd98dd549-btpxq" Mar 20 08:58:53.865767 master-0 kubenswrapper[27820]: I0320 08:58:53.865700 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkjfv\" (UniqueName: \"kubernetes.io/projected/e50e09eb-9d18-474c-b9ef-74b91c219d00-kube-api-access-hkjfv\") pod \"console-7bd98dd549-btpxq\" (UID: \"e50e09eb-9d18-474c-b9ef-74b91c219d00\") " pod="openshift-console/console-7bd98dd549-btpxq" Mar 20 08:58:53.865767 master-0 kubenswrapper[27820]: I0320 08:58:53.865738 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e50e09eb-9d18-474c-b9ef-74b91c219d00-console-serving-cert\") pod \"console-7bd98dd549-btpxq\" (UID: \"e50e09eb-9d18-474c-b9ef-74b91c219d00\") " pod="openshift-console/console-7bd98dd549-btpxq" Mar 20 08:58:53.865920 master-0 kubenswrapper[27820]: I0320 08:58:53.865803 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e50e09eb-9d18-474c-b9ef-74b91c219d00-console-oauth-config\") pod 
\"console-7bd98dd549-btpxq\" (UID: \"e50e09eb-9d18-474c-b9ef-74b91c219d00\") " pod="openshift-console/console-7bd98dd549-btpxq" Mar 20 08:58:53.887218 master-0 kubenswrapper[27820]: I0320 08:58:53.887165 27820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-frpgr" Mar 20 08:58:53.971879 master-0 kubenswrapper[27820]: I0320 08:58:53.967657 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e50e09eb-9d18-474c-b9ef-74b91c219d00-console-oauth-config\") pod \"console-7bd98dd549-btpxq\" (UID: \"e50e09eb-9d18-474c-b9ef-74b91c219d00\") " pod="openshift-console/console-7bd98dd549-btpxq" Mar 20 08:58:53.971879 master-0 kubenswrapper[27820]: I0320 08:58:53.967718 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e50e09eb-9d18-474c-b9ef-74b91c219d00-console-config\") pod \"console-7bd98dd549-btpxq\" (UID: \"e50e09eb-9d18-474c-b9ef-74b91c219d00\") " pod="openshift-console/console-7bd98dd549-btpxq" Mar 20 08:58:53.971879 master-0 kubenswrapper[27820]: I0320 08:58:53.967979 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/58d0dca7-7d2f-4601-95a1-377c982d2d41-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-mvkrl\" (UID: \"58d0dca7-7d2f-4601-95a1-377c982d2d41\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-mvkrl" Mar 20 08:58:53.971879 master-0 kubenswrapper[27820]: I0320 08:58:53.968010 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e50e09eb-9d18-474c-b9ef-74b91c219d00-trusted-ca-bundle\") pod \"console-7bd98dd549-btpxq\" (UID: \"e50e09eb-9d18-474c-b9ef-74b91c219d00\") " pod="openshift-console/console-7bd98dd549-btpxq" Mar 20 
08:58:53.971879 master-0 kubenswrapper[27820]: I0320 08:58:53.968052 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e50e09eb-9d18-474c-b9ef-74b91c219d00-oauth-serving-cert\") pod \"console-7bd98dd549-btpxq\" (UID: \"e50e09eb-9d18-474c-b9ef-74b91c219d00\") " pod="openshift-console/console-7bd98dd549-btpxq" Mar 20 08:58:53.971879 master-0 kubenswrapper[27820]: I0320 08:58:53.968084 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e50e09eb-9d18-474c-b9ef-74b91c219d00-service-ca\") pod \"console-7bd98dd549-btpxq\" (UID: \"e50e09eb-9d18-474c-b9ef-74b91c219d00\") " pod="openshift-console/console-7bd98dd549-btpxq" Mar 20 08:58:53.971879 master-0 kubenswrapper[27820]: I0320 08:58:53.968111 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkjfv\" (UniqueName: \"kubernetes.io/projected/e50e09eb-9d18-474c-b9ef-74b91c219d00-kube-api-access-hkjfv\") pod \"console-7bd98dd549-btpxq\" (UID: \"e50e09eb-9d18-474c-b9ef-74b91c219d00\") " pod="openshift-console/console-7bd98dd549-btpxq" Mar 20 08:58:53.971879 master-0 kubenswrapper[27820]: I0320 08:58:53.968157 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e50e09eb-9d18-474c-b9ef-74b91c219d00-console-serving-cert\") pod \"console-7bd98dd549-btpxq\" (UID: \"e50e09eb-9d18-474c-b9ef-74b91c219d00\") " pod="openshift-console/console-7bd98dd549-btpxq" Mar 20 08:58:53.971879 master-0 kubenswrapper[27820]: I0320 08:58:53.969865 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e50e09eb-9d18-474c-b9ef-74b91c219d00-service-ca\") pod \"console-7bd98dd549-btpxq\" (UID: \"e50e09eb-9d18-474c-b9ef-74b91c219d00\") " 
pod="openshift-console/console-7bd98dd549-btpxq" Mar 20 08:58:53.971879 master-0 kubenswrapper[27820]: I0320 08:58:53.971895 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e50e09eb-9d18-474c-b9ef-74b91c219d00-console-oauth-config\") pod \"console-7bd98dd549-btpxq\" (UID: \"e50e09eb-9d18-474c-b9ef-74b91c219d00\") " pod="openshift-console/console-7bd98dd549-btpxq" Mar 20 08:58:53.974001 master-0 kubenswrapper[27820]: I0320 08:58:53.973710 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e50e09eb-9d18-474c-b9ef-74b91c219d00-console-config\") pod \"console-7bd98dd549-btpxq\" (UID: \"e50e09eb-9d18-474c-b9ef-74b91c219d00\") " pod="openshift-console/console-7bd98dd549-btpxq" Mar 20 08:58:53.974001 master-0 kubenswrapper[27820]: I0320 08:58:53.973710 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e50e09eb-9d18-474c-b9ef-74b91c219d00-oauth-serving-cert\") pod \"console-7bd98dd549-btpxq\" (UID: \"e50e09eb-9d18-474c-b9ef-74b91c219d00\") " pod="openshift-console/console-7bd98dd549-btpxq" Mar 20 08:58:53.976379 master-0 kubenswrapper[27820]: I0320 08:58:53.974871 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e50e09eb-9d18-474c-b9ef-74b91c219d00-trusted-ca-bundle\") pod \"console-7bd98dd549-btpxq\" (UID: \"e50e09eb-9d18-474c-b9ef-74b91c219d00\") " pod="openshift-console/console-7bd98dd549-btpxq" Mar 20 08:58:53.981066 master-0 kubenswrapper[27820]: I0320 08:58:53.977158 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/58d0dca7-7d2f-4601-95a1-377c982d2d41-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-mvkrl\" (UID: \"58d0dca7-7d2f-4601-95a1-377c982d2d41\") " 
pod="openshift-nmstate/nmstate-webhook-5f558f5558-mvkrl" Mar 20 08:58:53.984523 master-0 kubenswrapper[27820]: I0320 08:58:53.984471 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e50e09eb-9d18-474c-b9ef-74b91c219d00-console-serving-cert\") pod \"console-7bd98dd549-btpxq\" (UID: \"e50e09eb-9d18-474c-b9ef-74b91c219d00\") " pod="openshift-console/console-7bd98dd549-btpxq" Mar 20 08:58:53.993165 master-0 kubenswrapper[27820]: I0320 08:58:53.993094 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkjfv\" (UniqueName: \"kubernetes.io/projected/e50e09eb-9d18-474c-b9ef-74b91c219d00-kube-api-access-hkjfv\") pod \"console-7bd98dd549-btpxq\" (UID: \"e50e09eb-9d18-474c-b9ef-74b91c219d00\") " pod="openshift-console/console-7bd98dd549-btpxq" Mar 20 08:58:54.029291 master-0 kubenswrapper[27820]: I0320 08:58:54.028691 27820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7bd98dd549-btpxq" Mar 20 08:58:54.261593 master-0 kubenswrapper[27820]: I0320 08:58:54.261532 27820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f558f5558-mvkrl" Mar 20 08:58:54.354367 master-0 kubenswrapper[27820]: I0320 08:58:54.351407 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-d25js"] Mar 20 08:58:54.370385 master-0 kubenswrapper[27820]: W0320 08:58:54.367220 27820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef2cb375_2652_47d1_bf48_a5411ff51a2c.slice/crio-6b220aed06f2c2d769928f9595dd8a3c16216ae30789f9e9d77db9b2db274350 WatchSource:0}: Error finding container 6b220aed06f2c2d769928f9595dd8a3c16216ae30789f9e9d77db9b2db274350: Status 404 returned error can't find the container with id 6b220aed06f2c2d769928f9595dd8a3c16216ae30789f9e9d77db9b2db274350 Mar 20 08:58:54.375737 master-0 kubenswrapper[27820]: I0320 08:58:54.372482 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-frpgr"] Mar 20 08:58:54.517685 master-0 kubenswrapper[27820]: I0320 08:58:54.509294 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7bd98dd549-btpxq"] Mar 20 08:58:54.522540 master-0 kubenswrapper[27820]: W0320 08:58:54.522490 27820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode50e09eb_9d18_474c_b9ef_74b91c219d00.slice/crio-049316a998a0623e05b89ffdbfd63522bc33eff893ae43affb86fce6ee63a73b WatchSource:0}: Error finding container 049316a998a0623e05b89ffdbfd63522bc33eff893ae43affb86fce6ee63a73b: Status 404 returned error can't find the container with id 049316a998a0623e05b89ffdbfd63522bc33eff893ae43affb86fce6ee63a73b Mar 20 08:58:54.798118 master-0 kubenswrapper[27820]: I0320 08:58:54.797663 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-mvkrl"] Mar 20 08:58:54.801014 master-0 kubenswrapper[27820]: 
W0320 08:58:54.800972 27820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58d0dca7_7d2f_4601_95a1_377c982d2d41.slice/crio-5daeb49ad5fa1bd8145def36e7daa70160bbc16705334d38883fc93786f3977d WatchSource:0}: Error finding container 5daeb49ad5fa1bd8145def36e7daa70160bbc16705334d38883fc93786f3977d: Status 404 returned error can't find the container with id 5daeb49ad5fa1bd8145def36e7daa70160bbc16705334d38883fc93786f3977d Mar 20 08:58:54.825285 master-0 kubenswrapper[27820]: I0320 08:58:54.825221 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7bd98dd549-btpxq" event={"ID":"e50e09eb-9d18-474c-b9ef-74b91c219d00","Type":"ContainerStarted","Data":"3b5a0de1329213c9d61835e95cdb4a5c4db52ad0920565575eab7a311623a72e"} Mar 20 08:58:54.825285 master-0 kubenswrapper[27820]: I0320 08:58:54.825284 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7bd98dd549-btpxq" event={"ID":"e50e09eb-9d18-474c-b9ef-74b91c219d00","Type":"ContainerStarted","Data":"049316a998a0623e05b89ffdbfd63522bc33eff893ae43affb86fce6ee63a73b"} Mar 20 08:58:54.826691 master-0 kubenswrapper[27820]: I0320 08:58:54.826627 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-frpgr" event={"ID":"ef2cb375-2652-47d1-bf48-a5411ff51a2c","Type":"ContainerStarted","Data":"6b220aed06f2c2d769928f9595dd8a3c16216ae30789f9e9d77db9b2db274350"} Mar 20 08:58:54.828012 master-0 kubenswrapper[27820]: I0320 08:58:54.827972 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-d25js" event={"ID":"4ef5015e-1e99-4f9e-ba7c-59b462ff2188","Type":"ContainerStarted","Data":"f046b23ede3f98a705dd7c5d7d8190d37acd091a11d25e4810176fa1cc40eb6d"} Mar 20 08:58:54.830226 master-0 kubenswrapper[27820]: I0320 08:58:54.829569 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-nmstate/nmstate-webhook-5f558f5558-mvkrl" event={"ID":"58d0dca7-7d2f-4601-95a1-377c982d2d41","Type":"ContainerStarted","Data":"5daeb49ad5fa1bd8145def36e7daa70160bbc16705334d38883fc93786f3977d"} Mar 20 08:58:54.851193 master-0 kubenswrapper[27820]: I0320 08:58:54.851109 27820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7bd98dd549-btpxq" podStartSLOduration=1.851086148 podStartE2EDuration="1.851086148s" podCreationTimestamp="2026-03-20 08:58:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:58:54.849117344 +0000 UTC m=+544.944326528" watchObservedRunningTime="2026-03-20 08:58:54.851086148 +0000 UTC m=+544.946295302" Mar 20 08:58:55.843842 master-0 kubenswrapper[27820]: I0320 08:58:55.843760 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-p2fhx" event={"ID":"f01da03b-1f5e-4ade-a4f9-e0dac32eb142","Type":"ContainerStarted","Data":"a4232585ed11c80d0e39a78f28154f4b438a7606c49f8b3e82896c8bd05b4cbb"} Mar 20 08:58:55.867623 master-0 kubenswrapper[27820]: I0320 08:58:55.867528 27820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-p2fhx" podStartSLOduration=3.718092438 podStartE2EDuration="5.867511704s" podCreationTimestamp="2026-03-20 08:58:50 +0000 UTC" firstStartedPulling="2026-03-20 08:58:52.934908308 +0000 UTC m=+543.030117452" lastFinishedPulling="2026-03-20 08:58:55.084327564 +0000 UTC m=+545.179536718" observedRunningTime="2026-03-20 08:58:55.862102049 +0000 UTC m=+545.957311203" watchObservedRunningTime="2026-03-20 08:58:55.867511704 +0000 UTC m=+545.962720858" Mar 20 08:58:56.850637 master-0 kubenswrapper[27820]: I0320 08:58:56.850124 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-p2fhx" Mar 20 08:59:00.888231 master-0 kubenswrapper[27820]: I0320 08:59:00.888175 
27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-d25js" event={"ID":"4ef5015e-1e99-4f9e-ba7c-59b462ff2188","Type":"ContainerStarted","Data":"eb5b83b675d24ceaa976dc54db2501fbc785d709bbc42c9d04bd72a754e02ade"} Mar 20 08:59:00.891044 master-0 kubenswrapper[27820]: I0320 08:59:00.890600 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-pn2fv" event={"ID":"bb8fd2f6-e697-42fa-8e7d-f5737a39f6e8","Type":"ContainerStarted","Data":"471ae6a8da392ef1bec7f668f40bb41d5c3d68a9c19ea2abfc80316e77dba9b1"} Mar 20 08:59:00.891044 master-0 kubenswrapper[27820]: I0320 08:59:00.890686 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-pn2fv" Mar 20 08:59:00.898111 master-0 kubenswrapper[27820]: I0320 08:59:00.898042 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f558f5558-mvkrl" event={"ID":"58d0dca7-7d2f-4601-95a1-377c982d2d41","Type":"ContainerStarted","Data":"74dcc4833ce3219d163253c5eb08b6c15b0ce98748f04df676216a2db2c2b823"} Mar 20 08:59:00.898331 master-0 kubenswrapper[27820]: I0320 08:59:00.898162 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-5f558f5558-mvkrl" Mar 20 08:59:00.905367 master-0 kubenswrapper[27820]: I0320 08:59:00.905316 27820 generic.go:334] "Generic (PLEG): container finished" podID="2854e640-52d9-4a65-8b6c-4bc273a80668" containerID="566dba1cc03b28c0e48da58d2b5cbb0799c3ef1b9a57c589bc8e5d1c13a4ebce" exitCode=0 Mar 20 08:59:00.905367 master-0 kubenswrapper[27820]: I0320 08:59:00.905368 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-tl7kr" event={"ID":"2854e640-52d9-4a65-8b6c-4bc273a80668","Type":"ContainerDied","Data":"566dba1cc03b28c0e48da58d2b5cbb0799c3ef1b9a57c589bc8e5d1c13a4ebce"} Mar 20 08:59:00.934638 master-0 kubenswrapper[27820]: 
I0320 08:59:00.933164 27820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-pn2fv" podStartSLOduration=1.869045894 podStartE2EDuration="10.933141578s" podCreationTimestamp="2026-03-20 08:58:50 +0000 UTC" firstStartedPulling="2026-03-20 08:58:51.520367795 +0000 UTC m=+541.615576939" lastFinishedPulling="2026-03-20 08:59:00.584463469 +0000 UTC m=+550.679672623" observedRunningTime="2026-03-20 08:59:00.90831378 +0000 UTC m=+551.003522944" watchObservedRunningTime="2026-03-20 08:59:00.933141578 +0000 UTC m=+551.028350732" Mar 20 08:59:00.990759 master-0 kubenswrapper[27820]: I0320 08:59:00.942752 27820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-5f558f5558-mvkrl" podStartSLOduration=2.242917644 podStartE2EDuration="7.942702594s" podCreationTimestamp="2026-03-20 08:58:53 +0000 UTC" firstStartedPulling="2026-03-20 08:58:54.804972778 +0000 UTC m=+544.900181922" lastFinishedPulling="2026-03-20 08:59:00.504757728 +0000 UTC m=+550.599966872" observedRunningTime="2026-03-20 08:59:00.927206987 +0000 UTC m=+551.022416131" watchObservedRunningTime="2026-03-20 08:59:00.942702594 +0000 UTC m=+551.037911738" Mar 20 08:59:01.919739 master-0 kubenswrapper[27820]: I0320 08:59:01.919637 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-d25js" event={"ID":"4ef5015e-1e99-4f9e-ba7c-59b462ff2188","Type":"ContainerStarted","Data":"1cc3e83b5dd432ec4269e9cbf327db00bccbf15885142c5de0d0d346a0a7dffd"} Mar 20 08:59:01.922610 master-0 kubenswrapper[27820]: I0320 08:59:01.922571 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-7c6kf" event={"ID":"23a22e55-9f4f-4f31-81c5-328720dee978","Type":"ContainerStarted","Data":"5d662fabec93fb6299f334367332bf0ba13fddf631933305dc64cd794149d488"} Mar 20 08:59:01.923038 master-0 kubenswrapper[27820]: I0320 08:59:01.923014 27820 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-7c6kf" Mar 20 08:59:01.924915 master-0 kubenswrapper[27820]: I0320 08:59:01.924883 27820 generic.go:334] "Generic (PLEG): container finished" podID="2854e640-52d9-4a65-8b6c-4bc273a80668" containerID="af6c4a122425498215117e55eb0f2f886ae2829edbafb3704c7ecaa681421a98" exitCode=0 Mar 20 08:59:01.925004 master-0 kubenswrapper[27820]: I0320 08:59:01.924941 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-tl7kr" event={"ID":"2854e640-52d9-4a65-8b6c-4bc273a80668","Type":"ContainerDied","Data":"af6c4a122425498215117e55eb0f2f886ae2829edbafb3704c7ecaa681421a98"} Mar 20 08:59:01.929438 master-0 kubenswrapper[27820]: I0320 08:59:01.929397 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-frpgr" event={"ID":"ef2cb375-2652-47d1-bf48-a5411ff51a2c","Type":"ContainerStarted","Data":"ba32f54ee0a3ef4d70622e2ec00c9280b131b69882c1adec065ff01f9b514eb7"} Mar 20 08:59:01.943671 master-0 kubenswrapper[27820]: I0320 08:59:01.943604 27820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-d25js" podStartSLOduration=2.805100137 podStartE2EDuration="8.943586013s" podCreationTimestamp="2026-03-20 08:58:53 +0000 UTC" firstStartedPulling="2026-03-20 08:58:54.366082397 +0000 UTC m=+544.461291531" lastFinishedPulling="2026-03-20 08:59:00.504568263 +0000 UTC m=+550.599777407" observedRunningTime="2026-03-20 08:59:01.938554988 +0000 UTC m=+552.033764142" watchObservedRunningTime="2026-03-20 08:59:01.943586013 +0000 UTC m=+552.038795167" Mar 20 08:59:01.991167 master-0 kubenswrapper[27820]: I0320 08:59:01.990865 27820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-7c6kf" podStartSLOduration=2.193458675 podStartE2EDuration="8.990840903s" podCreationTimestamp="2026-03-20 08:58:53 
+0000 UTC" firstStartedPulling="2026-03-20 08:58:53.739558565 +0000 UTC m=+543.834767709" lastFinishedPulling="2026-03-20 08:59:00.536940793 +0000 UTC m=+550.632149937" observedRunningTime="2026-03-20 08:59:01.970926488 +0000 UTC m=+552.066135662" watchObservedRunningTime="2026-03-20 08:59:01.990840903 +0000 UTC m=+552.086050047" Mar 20 08:59:02.032504 master-0 kubenswrapper[27820]: I0320 08:59:02.032412 27820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-frpgr" podStartSLOduration=1.662508201 podStartE2EDuration="9.0323956s" podCreationTimestamp="2026-03-20 08:58:53 +0000 UTC" firstStartedPulling="2026-03-20 08:58:54.372645814 +0000 UTC m=+544.467854958" lastFinishedPulling="2026-03-20 08:59:01.742533213 +0000 UTC m=+551.837742357" observedRunningTime="2026-03-20 08:59:02.028718741 +0000 UTC m=+552.123927895" watchObservedRunningTime="2026-03-20 08:59:02.0323956 +0000 UTC m=+552.127604744" Mar 20 08:59:02.566029 master-0 kubenswrapper[27820]: I0320 08:59:02.565958 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-p2fhx" Mar 20 08:59:02.940731 master-0 kubenswrapper[27820]: I0320 08:59:02.940671 27820 generic.go:334] "Generic (PLEG): container finished" podID="2854e640-52d9-4a65-8b6c-4bc273a80668" containerID="9b35f50f990eac9ec0919ea831cf7ca4002d5188b46f57dca8f4a83f4345e86d" exitCode=0 Mar 20 08:59:02.941322 master-0 kubenswrapper[27820]: I0320 08:59:02.940813 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-tl7kr" event={"ID":"2854e640-52d9-4a65-8b6c-4bc273a80668","Type":"ContainerDied","Data":"9b35f50f990eac9ec0919ea831cf7ca4002d5188b46f57dca8f4a83f4345e86d"} Mar 20 08:59:03.953574 master-0 kubenswrapper[27820]: I0320 08:59:03.953250 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-tl7kr" 
event={"ID":"2854e640-52d9-4a65-8b6c-4bc273a80668","Type":"ContainerStarted","Data":"877b83372318e349290db69c6d41711029702543a80317cf940150e76e41b4f2"} Mar 20 08:59:03.953574 master-0 kubenswrapper[27820]: I0320 08:59:03.953575 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-tl7kr" event={"ID":"2854e640-52d9-4a65-8b6c-4bc273a80668","Type":"ContainerStarted","Data":"08571ff3ce790010cfab507cebc37906fa618ae8f39ec8d24133f2fecb58d48f"} Mar 20 08:59:03.954721 master-0 kubenswrapper[27820]: I0320 08:59:03.953596 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-tl7kr" event={"ID":"2854e640-52d9-4a65-8b6c-4bc273a80668","Type":"ContainerStarted","Data":"3b345dbc9c5772ff210c94f8814633d4819615cac011319babd9bd662bb03f8e"} Mar 20 08:59:03.954721 master-0 kubenswrapper[27820]: I0320 08:59:03.953609 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-tl7kr" event={"ID":"2854e640-52d9-4a65-8b6c-4bc273a80668","Type":"ContainerStarted","Data":"e15b7f0f8063a3d90f9098d5dfacbb20559be6b5182a92ae37e76810066b5a23"} Mar 20 08:59:03.954721 master-0 kubenswrapper[27820]: I0320 08:59:03.953622 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-tl7kr" event={"ID":"2854e640-52d9-4a65-8b6c-4bc273a80668","Type":"ContainerStarted","Data":"ebc9c8f71678d1a3b810ae9131fb53ffca5142d2bd7f8c7e2dde395c7f147b4f"} Mar 20 08:59:03.954721 master-0 kubenswrapper[27820]: I0320 08:59:03.953634 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-tl7kr" event={"ID":"2854e640-52d9-4a65-8b6c-4bc273a80668","Type":"ContainerStarted","Data":"f905496567b315c4470d4023e20a95d6a0a64b05ef1d9f4afef2c4843340e406"} Mar 20 08:59:04.029874 master-0 kubenswrapper[27820]: I0320 08:59:04.029714 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7bd98dd549-btpxq" Mar 20 08:59:04.029874 master-0 kubenswrapper[27820]: 
I0320 08:59:04.029766 27820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7bd98dd549-btpxq" Mar 20 08:59:04.037706 master-0 kubenswrapper[27820]: I0320 08:59:04.036668 27820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7bd98dd549-btpxq" Mar 20 08:59:04.056918 master-0 kubenswrapper[27820]: I0320 08:59:04.056825 27820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-tl7kr" podStartSLOduration=4.782103895 podStartE2EDuration="14.056809798s" podCreationTimestamp="2026-03-20 08:58:50 +0000 UTC" firstStartedPulling="2026-03-20 08:58:51.230950479 +0000 UTC m=+541.326159623" lastFinishedPulling="2026-03-20 08:59:00.505656372 +0000 UTC m=+550.600865526" observedRunningTime="2026-03-20 08:59:03.981616657 +0000 UTC m=+554.076825831" watchObservedRunningTime="2026-03-20 08:59:04.056809798 +0000 UTC m=+554.152018942" Mar 20 08:59:04.960305 master-0 kubenswrapper[27820]: I0320 08:59:04.960242 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-tl7kr" Mar 20 08:59:04.964485 master-0 kubenswrapper[27820]: I0320 08:59:04.964422 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7bd98dd549-btpxq" Mar 20 08:59:05.054737 master-0 kubenswrapper[27820]: I0320 08:59:05.054457 27820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-89bcb965d-7zclw"] Mar 20 08:59:05.992958 master-0 kubenswrapper[27820]: I0320 08:59:05.992892 27820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-tl7kr" Mar 20 08:59:06.028947 master-0 kubenswrapper[27820]: I0320 08:59:06.028878 27820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-tl7kr" Mar 20 08:59:08.703000 master-0 kubenswrapper[27820]: I0320 08:59:08.702933 27820 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-7c6kf" Mar 20 08:59:10.985582 master-0 kubenswrapper[27820]: I0320 08:59:10.985515 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-pn2fv" Mar 20 08:59:11.692310 master-0 kubenswrapper[27820]: I0320 08:59:11.692022 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-7bb4cc7c98-z9frd" Mar 20 08:59:14.267589 master-0 kubenswrapper[27820]: I0320 08:59:14.267512 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-5f558f5558-mvkrl" Mar 20 08:59:19.138387 master-0 kubenswrapper[27820]: I0320 08:59:19.138279 27820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-storage/vg-manager-vfwrb"] Mar 20 08:59:19.139991 master-0 kubenswrapper[27820]: I0320 08:59:19.139301 27820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-storage/vg-manager-vfwrb" Mar 20 08:59:19.141594 master-0 kubenswrapper[27820]: I0320 08:59:19.141547 27820 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"vg-manager-metrics-cert" Mar 20 08:59:19.177898 master-0 kubenswrapper[27820]: I0320 08:59:19.177825 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/vg-manager-vfwrb"] Mar 20 08:59:19.238776 master-0 kubenswrapper[27820]: I0320 08:59:19.238692 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/248caf38-da29-4afe-b566-cb5b9d718797-lvmd-config\") pod \"vg-manager-vfwrb\" (UID: \"248caf38-da29-4afe-b566-cb5b9d718797\") " pod="openshift-storage/vg-manager-vfwrb" Mar 20 08:59:19.238971 master-0 kubenswrapper[27820]: I0320 08:59:19.238863 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/248caf38-da29-4afe-b566-cb5b9d718797-csi-plugin-dir\") pod \"vg-manager-vfwrb\" (UID: \"248caf38-da29-4afe-b566-cb5b9d718797\") " pod="openshift-storage/vg-manager-vfwrb" Mar 20 08:59:19.238971 master-0 kubenswrapper[27820]: I0320 08:59:19.238909 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/248caf38-da29-4afe-b566-cb5b9d718797-registration-dir\") pod \"vg-manager-vfwrb\" (UID: \"248caf38-da29-4afe-b566-cb5b9d718797\") " pod="openshift-storage/vg-manager-vfwrb" Mar 20 08:59:19.238971 master-0 kubenswrapper[27820]: I0320 08:59:19.238944 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/248caf38-da29-4afe-b566-cb5b9d718797-sys\") pod \"vg-manager-vfwrb\" (UID: \"248caf38-da29-4afe-b566-cb5b9d718797\") " 
pod="openshift-storage/vg-manager-vfwrb" Mar 20 08:59:19.239070 master-0 kubenswrapper[27820]: I0320 08:59:19.239025 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/248caf38-da29-4afe-b566-cb5b9d718797-pod-volumes-dir\") pod \"vg-manager-vfwrb\" (UID: \"248caf38-da29-4afe-b566-cb5b9d718797\") " pod="openshift-storage/vg-manager-vfwrb" Mar 20 08:59:19.239148 master-0 kubenswrapper[27820]: I0320 08:59:19.239115 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/248caf38-da29-4afe-b566-cb5b9d718797-metrics-cert\") pod \"vg-manager-vfwrb\" (UID: \"248caf38-da29-4afe-b566-cb5b9d718797\") " pod="openshift-storage/vg-manager-vfwrb" Mar 20 08:59:19.239199 master-0 kubenswrapper[27820]: I0320 08:59:19.239176 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/248caf38-da29-4afe-b566-cb5b9d718797-file-lock-dir\") pod \"vg-manager-vfwrb\" (UID: \"248caf38-da29-4afe-b566-cb5b9d718797\") " pod="openshift-storage/vg-manager-vfwrb" Mar 20 08:59:19.239292 master-0 kubenswrapper[27820]: I0320 08:59:19.239243 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9w7h5\" (UniqueName: \"kubernetes.io/projected/248caf38-da29-4afe-b566-cb5b9d718797-kube-api-access-9w7h5\") pod \"vg-manager-vfwrb\" (UID: \"248caf38-da29-4afe-b566-cb5b9d718797\") " pod="openshift-storage/vg-manager-vfwrb" Mar 20 08:59:19.239340 master-0 kubenswrapper[27820]: I0320 08:59:19.239317 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/248caf38-da29-4afe-b566-cb5b9d718797-node-plugin-dir\") pod \"vg-manager-vfwrb\" (UID: 
\"248caf38-da29-4afe-b566-cb5b9d718797\") " pod="openshift-storage/vg-manager-vfwrb" Mar 20 08:59:19.239383 master-0 kubenswrapper[27820]: I0320 08:59:19.239352 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/248caf38-da29-4afe-b566-cb5b9d718797-run-udev\") pod \"vg-manager-vfwrb\" (UID: \"248caf38-da29-4afe-b566-cb5b9d718797\") " pod="openshift-storage/vg-manager-vfwrb" Mar 20 08:59:19.239431 master-0 kubenswrapper[27820]: I0320 08:59:19.239406 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/248caf38-da29-4afe-b566-cb5b9d718797-device-dir\") pod \"vg-manager-vfwrb\" (UID: \"248caf38-da29-4afe-b566-cb5b9d718797\") " pod="openshift-storage/vg-manager-vfwrb" Mar 20 08:59:19.340641 master-0 kubenswrapper[27820]: I0320 08:59:19.340471 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/248caf38-da29-4afe-b566-cb5b9d718797-lvmd-config\") pod \"vg-manager-vfwrb\" (UID: \"248caf38-da29-4afe-b566-cb5b9d718797\") " pod="openshift-storage/vg-manager-vfwrb" Mar 20 08:59:19.340641 master-0 kubenswrapper[27820]: I0320 08:59:19.340570 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/248caf38-da29-4afe-b566-cb5b9d718797-csi-plugin-dir\") pod \"vg-manager-vfwrb\" (UID: \"248caf38-da29-4afe-b566-cb5b9d718797\") " pod="openshift-storage/vg-manager-vfwrb" Mar 20 08:59:19.340641 master-0 kubenswrapper[27820]: I0320 08:59:19.340605 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/248caf38-da29-4afe-b566-cb5b9d718797-registration-dir\") pod \"vg-manager-vfwrb\" (UID: \"248caf38-da29-4afe-b566-cb5b9d718797\") " 
pod="openshift-storage/vg-manager-vfwrb" Mar 20 08:59:19.340641 master-0 kubenswrapper[27820]: I0320 08:59:19.340628 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/248caf38-da29-4afe-b566-cb5b9d718797-sys\") pod \"vg-manager-vfwrb\" (UID: \"248caf38-da29-4afe-b566-cb5b9d718797\") " pod="openshift-storage/vg-manager-vfwrb" Mar 20 08:59:19.340641 master-0 kubenswrapper[27820]: I0320 08:59:19.340658 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/248caf38-da29-4afe-b566-cb5b9d718797-pod-volumes-dir\") pod \"vg-manager-vfwrb\" (UID: \"248caf38-da29-4afe-b566-cb5b9d718797\") " pod="openshift-storage/vg-manager-vfwrb" Mar 20 08:59:19.341354 master-0 kubenswrapper[27820]: I0320 08:59:19.340687 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/248caf38-da29-4afe-b566-cb5b9d718797-metrics-cert\") pod \"vg-manager-vfwrb\" (UID: \"248caf38-da29-4afe-b566-cb5b9d718797\") " pod="openshift-storage/vg-manager-vfwrb" Mar 20 08:59:19.341354 master-0 kubenswrapper[27820]: I0320 08:59:19.340724 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/248caf38-da29-4afe-b566-cb5b9d718797-file-lock-dir\") pod \"vg-manager-vfwrb\" (UID: \"248caf38-da29-4afe-b566-cb5b9d718797\") " pod="openshift-storage/vg-manager-vfwrb" Mar 20 08:59:19.341354 master-0 kubenswrapper[27820]: I0320 08:59:19.340764 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9w7h5\" (UniqueName: \"kubernetes.io/projected/248caf38-da29-4afe-b566-cb5b9d718797-kube-api-access-9w7h5\") pod \"vg-manager-vfwrb\" (UID: \"248caf38-da29-4afe-b566-cb5b9d718797\") " pod="openshift-storage/vg-manager-vfwrb" Mar 20 08:59:19.341354 master-0 
kubenswrapper[27820]: I0320 08:59:19.340775 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/248caf38-da29-4afe-b566-cb5b9d718797-registration-dir\") pod \"vg-manager-vfwrb\" (UID: \"248caf38-da29-4afe-b566-cb5b9d718797\") " pod="openshift-storage/vg-manager-vfwrb" Mar 20 08:59:19.341354 master-0 kubenswrapper[27820]: I0320 08:59:19.340791 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/248caf38-da29-4afe-b566-cb5b9d718797-node-plugin-dir\") pod \"vg-manager-vfwrb\" (UID: \"248caf38-da29-4afe-b566-cb5b9d718797\") " pod="openshift-storage/vg-manager-vfwrb" Mar 20 08:59:19.341354 master-0 kubenswrapper[27820]: I0320 08:59:19.340818 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/248caf38-da29-4afe-b566-cb5b9d718797-run-udev\") pod \"vg-manager-vfwrb\" (UID: \"248caf38-da29-4afe-b566-cb5b9d718797\") " pod="openshift-storage/vg-manager-vfwrb" Mar 20 08:59:19.341354 master-0 kubenswrapper[27820]: I0320 08:59:19.340834 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/248caf38-da29-4afe-b566-cb5b9d718797-lvmd-config\") pod \"vg-manager-vfwrb\" (UID: \"248caf38-da29-4afe-b566-cb5b9d718797\") " pod="openshift-storage/vg-manager-vfwrb" Mar 20 08:59:19.341354 master-0 kubenswrapper[27820]: I0320 08:59:19.340896 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/248caf38-da29-4afe-b566-cb5b9d718797-device-dir\") pod \"vg-manager-vfwrb\" (UID: \"248caf38-da29-4afe-b566-cb5b9d718797\") " pod="openshift-storage/vg-manager-vfwrb" Mar 20 08:59:19.341354 master-0 kubenswrapper[27820]: I0320 08:59:19.340855 27820 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/248caf38-da29-4afe-b566-cb5b9d718797-device-dir\") pod \"vg-manager-vfwrb\" (UID: \"248caf38-da29-4afe-b566-cb5b9d718797\") " pod="openshift-storage/vg-manager-vfwrb" Mar 20 08:59:19.341354 master-0 kubenswrapper[27820]: I0320 08:59:19.340932 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/248caf38-da29-4afe-b566-cb5b9d718797-sys\") pod \"vg-manager-vfwrb\" (UID: \"248caf38-da29-4afe-b566-cb5b9d718797\") " pod="openshift-storage/vg-manager-vfwrb" Mar 20 08:59:19.341354 master-0 kubenswrapper[27820]: I0320 08:59:19.340977 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/248caf38-da29-4afe-b566-cb5b9d718797-pod-volumes-dir\") pod \"vg-manager-vfwrb\" (UID: \"248caf38-da29-4afe-b566-cb5b9d718797\") " pod="openshift-storage/vg-manager-vfwrb" Mar 20 08:59:19.341354 master-0 kubenswrapper[27820]: I0320 08:59:19.341003 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/248caf38-da29-4afe-b566-cb5b9d718797-csi-plugin-dir\") pod \"vg-manager-vfwrb\" (UID: \"248caf38-da29-4afe-b566-cb5b9d718797\") " pod="openshift-storage/vg-manager-vfwrb" Mar 20 08:59:19.342577 master-0 kubenswrapper[27820]: I0320 08:59:19.342301 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/248caf38-da29-4afe-b566-cb5b9d718797-run-udev\") pod \"vg-manager-vfwrb\" (UID: \"248caf38-da29-4afe-b566-cb5b9d718797\") " pod="openshift-storage/vg-manager-vfwrb" Mar 20 08:59:19.342577 master-0 kubenswrapper[27820]: I0320 08:59:19.342455 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/248caf38-da29-4afe-b566-cb5b9d718797-file-lock-dir\") pod 
\"vg-manager-vfwrb\" (UID: \"248caf38-da29-4afe-b566-cb5b9d718797\") " pod="openshift-storage/vg-manager-vfwrb" Mar 20 08:59:19.342577 master-0 kubenswrapper[27820]: I0320 08:59:19.342518 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/248caf38-da29-4afe-b566-cb5b9d718797-node-plugin-dir\") pod \"vg-manager-vfwrb\" (UID: \"248caf38-da29-4afe-b566-cb5b9d718797\") " pod="openshift-storage/vg-manager-vfwrb" Mar 20 08:59:19.346416 master-0 kubenswrapper[27820]: I0320 08:59:19.346356 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/248caf38-da29-4afe-b566-cb5b9d718797-metrics-cert\") pod \"vg-manager-vfwrb\" (UID: \"248caf38-da29-4afe-b566-cb5b9d718797\") " pod="openshift-storage/vg-manager-vfwrb" Mar 20 08:59:19.361969 master-0 kubenswrapper[27820]: I0320 08:59:19.361913 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9w7h5\" (UniqueName: \"kubernetes.io/projected/248caf38-da29-4afe-b566-cb5b9d718797-kube-api-access-9w7h5\") pod \"vg-manager-vfwrb\" (UID: \"248caf38-da29-4afe-b566-cb5b9d718797\") " pod="openshift-storage/vg-manager-vfwrb" Mar 20 08:59:19.483638 master-0 kubenswrapper[27820]: I0320 08:59:19.483469 27820 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-storage/vg-manager-vfwrb" Mar 20 08:59:20.090391 master-0 kubenswrapper[27820]: I0320 08:59:20.090353 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/vg-manager-vfwrb"] Mar 20 08:59:20.132559 master-0 kubenswrapper[27820]: I0320 08:59:20.132497 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-vfwrb" event={"ID":"248caf38-da29-4afe-b566-cb5b9d718797","Type":"ContainerStarted","Data":"e20302f577c80ec55420b073c6764cf2d05995e622d5bfc04101e17805affd40"} Mar 20 08:59:20.997288 master-0 kubenswrapper[27820]: I0320 08:59:20.996835 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-tl7kr" Mar 20 08:59:21.141532 master-0 kubenswrapper[27820]: I0320 08:59:21.141450 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-vfwrb" event={"ID":"248caf38-da29-4afe-b566-cb5b9d718797","Type":"ContainerStarted","Data":"07b3b51145cd33f620fb5afbc916189687cea36f9488ec57af351893216472b4"} Mar 20 08:59:22.153878 master-0 kubenswrapper[27820]: I0320 08:59:22.153811 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_vg-manager-vfwrb_248caf38-da29-4afe-b566-cb5b9d718797/vg-manager/0.log" Mar 20 08:59:22.153878 master-0 kubenswrapper[27820]: I0320 08:59:22.153877 27820 generic.go:334] "Generic (PLEG): container finished" podID="248caf38-da29-4afe-b566-cb5b9d718797" containerID="07b3b51145cd33f620fb5afbc916189687cea36f9488ec57af351893216472b4" exitCode=1 Mar 20 08:59:22.154706 master-0 kubenswrapper[27820]: I0320 08:59:22.153912 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-vfwrb" event={"ID":"248caf38-da29-4afe-b566-cb5b9d718797","Type":"ContainerDied","Data":"07b3b51145cd33f620fb5afbc916189687cea36f9488ec57af351893216472b4"} Mar 20 08:59:22.154706 master-0 kubenswrapper[27820]: I0320 08:59:22.154538 27820 scope.go:117] 
"RemoveContainer" containerID="07b3b51145cd33f620fb5afbc916189687cea36f9488ec57af351893216472b4" Mar 20 08:59:22.613509 master-0 kubenswrapper[27820]: I0320 08:59:22.613443 27820 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/topolvm.io-reg.sock" Mar 20 08:59:23.091079 master-0 kubenswrapper[27820]: I0320 08:59:23.089254 27820 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/topolvm.io-reg.sock","Timestamp":"2026-03-20T08:59:22.613482331Z","Handler":null,"Name":""} Mar 20 08:59:23.091079 master-0 kubenswrapper[27820]: I0320 08:59:23.091003 27820 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: topolvm.io endpoint: /var/lib/kubelet/plugins/topolvm.io/node/csi-topolvm.sock versions: 1.0.0 Mar 20 08:59:23.091079 master-0 kubenswrapper[27820]: I0320 08:59:23.091025 27820 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: topolvm.io at endpoint: /var/lib/kubelet/plugins/topolvm.io/node/csi-topolvm.sock Mar 20 08:59:23.162982 master-0 kubenswrapper[27820]: I0320 08:59:23.162940 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_vg-manager-vfwrb_248caf38-da29-4afe-b566-cb5b9d718797/vg-manager/0.log" Mar 20 08:59:23.163461 master-0 kubenswrapper[27820]: I0320 08:59:23.163006 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-vfwrb" event={"ID":"248caf38-da29-4afe-b566-cb5b9d718797","Type":"ContainerStarted","Data":"b813367922a9604f1d04c15eabaf9581017b4a3ae9fa66c39804e1ad219d7d12"} Mar 20 08:59:23.194237 master-0 kubenswrapper[27820]: I0320 08:59:23.194168 27820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-storage/vg-manager-vfwrb" podStartSLOduration=4.19411307 podStartE2EDuration="4.19411307s" podCreationTimestamp="2026-03-20 08:59:19 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:59:21.170530385 +0000 UTC m=+571.265739539" watchObservedRunningTime="2026-03-20 08:59:23.19411307 +0000 UTC m=+573.289322214" Mar 20 08:59:26.161294 master-0 kubenswrapper[27820]: I0320 08:59:26.160719 27820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-mzl48"] Mar 20 08:59:26.161871 master-0 kubenswrapper[27820]: I0320 08:59:26.161704 27820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-mzl48" Mar 20 08:59:26.165768 master-0 kubenswrapper[27820]: I0320 08:59:26.165718 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Mar 20 08:59:26.165945 master-0 kubenswrapper[27820]: I0320 08:59:26.165886 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Mar 20 08:59:26.199289 master-0 kubenswrapper[27820]: I0320 08:59:26.197114 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxfv8\" (UniqueName: \"kubernetes.io/projected/4bc06990-cfb0-4664-9ae7-f787327d1401-kube-api-access-hxfv8\") pod \"openstack-operator-index-mzl48\" (UID: \"4bc06990-cfb0-4664-9ae7-f787327d1401\") " pod="openstack-operators/openstack-operator-index-mzl48" Mar 20 08:59:26.253475 master-0 kubenswrapper[27820]: I0320 08:59:26.251400 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-mzl48"] Mar 20 08:59:26.303365 master-0 kubenswrapper[27820]: I0320 08:59:26.299319 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxfv8\" (UniqueName: \"kubernetes.io/projected/4bc06990-cfb0-4664-9ae7-f787327d1401-kube-api-access-hxfv8\") pod 
\"openstack-operator-index-mzl48\" (UID: \"4bc06990-cfb0-4664-9ae7-f787327d1401\") " pod="openstack-operators/openstack-operator-index-mzl48" Mar 20 08:59:26.324841 master-0 kubenswrapper[27820]: I0320 08:59:26.324781 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxfv8\" (UniqueName: \"kubernetes.io/projected/4bc06990-cfb0-4664-9ae7-f787327d1401-kube-api-access-hxfv8\") pod \"openstack-operator-index-mzl48\" (UID: \"4bc06990-cfb0-4664-9ae7-f787327d1401\") " pod="openstack-operators/openstack-operator-index-mzl48" Mar 20 08:59:26.497774 master-0 kubenswrapper[27820]: I0320 08:59:26.497660 27820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-mzl48" Mar 20 08:59:26.929302 master-0 kubenswrapper[27820]: I0320 08:59:26.929219 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-mzl48"] Mar 20 08:59:27.258391 master-0 kubenswrapper[27820]: I0320 08:59:27.258271 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-mzl48" event={"ID":"4bc06990-cfb0-4664-9ae7-f787327d1401","Type":"ContainerStarted","Data":"7d9ae86a0c5435aebb97776643aec16b23799f3f6579dab0b4512428c08e73dd"} Mar 20 08:59:29.277372 master-0 kubenswrapper[27820]: I0320 08:59:29.277305 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-mzl48" event={"ID":"4bc06990-cfb0-4664-9ae7-f787327d1401","Type":"ContainerStarted","Data":"990fc834d79f62601a2109b05110d9eb31e3a9a96063aef1cb37901687efdc62"} Mar 20 08:59:29.297597 master-0 kubenswrapper[27820]: I0320 08:59:29.297499 27820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-mzl48" podStartSLOduration=1.879534759 podStartE2EDuration="3.297477253s" podCreationTimestamp="2026-03-20 08:59:26 +0000 UTC" 
firstStartedPulling="2026-03-20 08:59:26.948170447 +0000 UTC m=+577.043379591" lastFinishedPulling="2026-03-20 08:59:28.366112941 +0000 UTC m=+578.461322085" observedRunningTime="2026-03-20 08:59:29.290897976 +0000 UTC m=+579.386107110" watchObservedRunningTime="2026-03-20 08:59:29.297477253 +0000 UTC m=+579.392686397" Mar 20 08:59:29.484087 master-0 kubenswrapper[27820]: I0320 08:59:29.484011 27820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-storage/vg-manager-vfwrb" Mar 20 08:59:29.486658 master-0 kubenswrapper[27820]: I0320 08:59:29.486624 27820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-storage/vg-manager-vfwrb" Mar 20 08:59:30.110222 master-0 kubenswrapper[27820]: I0320 08:59:30.110140 27820 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-89bcb965d-7zclw" podUID="a7f08d87-ea77-45cc-a09d-c5a26f7c9f20" containerName="console" containerID="cri-o://749c3d811d83c146bebdef1ff108e3cb1617a24030c78e12b09488a78025ff44" gracePeriod=15 Mar 20 08:59:30.293607 master-0 kubenswrapper[27820]: I0320 08:59:30.293551 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-89bcb965d-7zclw_a7f08d87-ea77-45cc-a09d-c5a26f7c9f20/console/0.log" Mar 20 08:59:30.293607 master-0 kubenswrapper[27820]: I0320 08:59:30.293605 27820 generic.go:334] "Generic (PLEG): container finished" podID="a7f08d87-ea77-45cc-a09d-c5a26f7c9f20" containerID="749c3d811d83c146bebdef1ff108e3cb1617a24030c78e12b09488a78025ff44" exitCode=2 Mar 20 08:59:30.294235 master-0 kubenswrapper[27820]: I0320 08:59:30.294163 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-89bcb965d-7zclw" event={"ID":"a7f08d87-ea77-45cc-a09d-c5a26f7c9f20","Type":"ContainerDied","Data":"749c3d811d83c146bebdef1ff108e3cb1617a24030c78e12b09488a78025ff44"} Mar 20 08:59:30.294797 master-0 kubenswrapper[27820]: I0320 08:59:30.294544 27820 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-storage/vg-manager-vfwrb" Mar 20 08:59:30.295361 master-0 kubenswrapper[27820]: I0320 08:59:30.295319 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-storage/vg-manager-vfwrb" Mar 20 08:59:30.936715 master-0 kubenswrapper[27820]: I0320 08:59:30.936686 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-89bcb965d-7zclw_a7f08d87-ea77-45cc-a09d-c5a26f7c9f20/console/0.log" Mar 20 08:59:30.936878 master-0 kubenswrapper[27820]: I0320 08:59:30.936753 27820 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-89bcb965d-7zclw" Mar 20 08:59:30.987176 master-0 kubenswrapper[27820]: I0320 08:59:30.986837 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a7f08d87-ea77-45cc-a09d-c5a26f7c9f20-service-ca\") pod \"a7f08d87-ea77-45cc-a09d-c5a26f7c9f20\" (UID: \"a7f08d87-ea77-45cc-a09d-c5a26f7c9f20\") " Mar 20 08:59:30.987176 master-0 kubenswrapper[27820]: I0320 08:59:30.987117 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a7f08d87-ea77-45cc-a09d-c5a26f7c9f20-console-serving-cert\") pod \"a7f08d87-ea77-45cc-a09d-c5a26f7c9f20\" (UID: \"a7f08d87-ea77-45cc-a09d-c5a26f7c9f20\") " Mar 20 08:59:30.987176 master-0 kubenswrapper[27820]: I0320 08:59:30.987162 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a7f08d87-ea77-45cc-a09d-c5a26f7c9f20-console-oauth-config\") pod \"a7f08d87-ea77-45cc-a09d-c5a26f7c9f20\" (UID: \"a7f08d87-ea77-45cc-a09d-c5a26f7c9f20\") " Mar 20 08:59:30.987500 master-0 kubenswrapper[27820]: I0320 08:59:30.987207 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a7f08d87-ea77-45cc-a09d-c5a26f7c9f20-oauth-serving-cert\") pod \"a7f08d87-ea77-45cc-a09d-c5a26f7c9f20\" (UID: \"a7f08d87-ea77-45cc-a09d-c5a26f7c9f20\") " Mar 20 08:59:30.987500 master-0 kubenswrapper[27820]: I0320 08:59:30.987227 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zck22\" (UniqueName: \"kubernetes.io/projected/a7f08d87-ea77-45cc-a09d-c5a26f7c9f20-kube-api-access-zck22\") pod \"a7f08d87-ea77-45cc-a09d-c5a26f7c9f20\" (UID: \"a7f08d87-ea77-45cc-a09d-c5a26f7c9f20\") " Mar 20 08:59:30.987500 master-0 kubenswrapper[27820]: I0320 08:59:30.987295 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a7f08d87-ea77-45cc-a09d-c5a26f7c9f20-console-config\") pod \"a7f08d87-ea77-45cc-a09d-c5a26f7c9f20\" (UID: \"a7f08d87-ea77-45cc-a09d-c5a26f7c9f20\") " Mar 20 08:59:30.987500 master-0 kubenswrapper[27820]: I0320 08:59:30.987365 27820 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a7f08d87-ea77-45cc-a09d-c5a26f7c9f20-trusted-ca-bundle\") pod \"a7f08d87-ea77-45cc-a09d-c5a26f7c9f20\" (UID: \"a7f08d87-ea77-45cc-a09d-c5a26f7c9f20\") " Mar 20 08:59:30.988171 master-0 kubenswrapper[27820]: I0320 08:59:30.988141 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7f08d87-ea77-45cc-a09d-c5a26f7c9f20-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "a7f08d87-ea77-45cc-a09d-c5a26f7c9f20" (UID: "a7f08d87-ea77-45cc-a09d-c5a26f7c9f20"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:59:30.988684 master-0 kubenswrapper[27820]: I0320 08:59:30.988658 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7f08d87-ea77-45cc-a09d-c5a26f7c9f20-console-config" (OuterVolumeSpecName: "console-config") pod "a7f08d87-ea77-45cc-a09d-c5a26f7c9f20" (UID: "a7f08d87-ea77-45cc-a09d-c5a26f7c9f20"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:59:30.988818 master-0 kubenswrapper[27820]: I0320 08:59:30.988775 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7f08d87-ea77-45cc-a09d-c5a26f7c9f20-service-ca" (OuterVolumeSpecName: "service-ca") pod "a7f08d87-ea77-45cc-a09d-c5a26f7c9f20" (UID: "a7f08d87-ea77-45cc-a09d-c5a26f7c9f20"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:59:30.989153 master-0 kubenswrapper[27820]: I0320 08:59:30.989110 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7f08d87-ea77-45cc-a09d-c5a26f7c9f20-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "a7f08d87-ea77-45cc-a09d-c5a26f7c9f20" (UID: "a7f08d87-ea77-45cc-a09d-c5a26f7c9f20"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:59:30.991657 master-0 kubenswrapper[27820]: I0320 08:59:30.991611 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7f08d87-ea77-45cc-a09d-c5a26f7c9f20-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "a7f08d87-ea77-45cc-a09d-c5a26f7c9f20" (UID: "a7f08d87-ea77-45cc-a09d-c5a26f7c9f20"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:59:30.991657 master-0 kubenswrapper[27820]: I0320 08:59:30.991629 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7f08d87-ea77-45cc-a09d-c5a26f7c9f20-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "a7f08d87-ea77-45cc-a09d-c5a26f7c9f20" (UID: "a7f08d87-ea77-45cc-a09d-c5a26f7c9f20"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:59:30.992097 master-0 kubenswrapper[27820]: I0320 08:59:30.992064 27820 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7f08d87-ea77-45cc-a09d-c5a26f7c9f20-kube-api-access-zck22" (OuterVolumeSpecName: "kube-api-access-zck22") pod "a7f08d87-ea77-45cc-a09d-c5a26f7c9f20" (UID: "a7f08d87-ea77-45cc-a09d-c5a26f7c9f20"). InnerVolumeSpecName "kube-api-access-zck22". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:59:31.089790 master-0 kubenswrapper[27820]: I0320 08:59:31.089732 27820 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a7f08d87-ea77-45cc-a09d-c5a26f7c9f20-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 20 08:59:31.089790 master-0 kubenswrapper[27820]: I0320 08:59:31.089776 27820 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a7f08d87-ea77-45cc-a09d-c5a26f7c9f20-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 20 08:59:31.089790 master-0 kubenswrapper[27820]: I0320 08:59:31.089790 27820 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a7f08d87-ea77-45cc-a09d-c5a26f7c9f20-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Mar 20 08:59:31.089790 master-0 kubenswrapper[27820]: I0320 08:59:31.089799 27820 reconciler_common.go:293] "Volume detached for 
volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a7f08d87-ea77-45cc-a09d-c5a26f7c9f20-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 20 08:59:31.089790 master-0 kubenswrapper[27820]: I0320 08:59:31.089809 27820 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zck22\" (UniqueName: \"kubernetes.io/projected/a7f08d87-ea77-45cc-a09d-c5a26f7c9f20-kube-api-access-zck22\") on node \"master-0\" DevicePath \"\"" Mar 20 08:59:31.090218 master-0 kubenswrapper[27820]: I0320 08:59:31.089818 27820 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a7f08d87-ea77-45cc-a09d-c5a26f7c9f20-console-config\") on node \"master-0\" DevicePath \"\"" Mar 20 08:59:31.090218 master-0 kubenswrapper[27820]: I0320 08:59:31.089827 27820 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a7f08d87-ea77-45cc-a09d-c5a26f7c9f20-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 20 08:59:31.304096 master-0 kubenswrapper[27820]: I0320 08:59:31.303973 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-89bcb965d-7zclw_a7f08d87-ea77-45cc-a09d-c5a26f7c9f20/console/0.log" Mar 20 08:59:31.305288 master-0 kubenswrapper[27820]: I0320 08:59:31.304095 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-89bcb965d-7zclw" event={"ID":"a7f08d87-ea77-45cc-a09d-c5a26f7c9f20","Type":"ContainerDied","Data":"c2e65e3c29f8e4e51ead0af442c2d921aadabdecfbc2a40546f8588a6884831f"} Mar 20 08:59:31.305288 master-0 kubenswrapper[27820]: I0320 08:59:31.304189 27820 scope.go:117] "RemoveContainer" containerID="749c3d811d83c146bebdef1ff108e3cb1617a24030c78e12b09488a78025ff44" Mar 20 08:59:31.305473 master-0 kubenswrapper[27820]: I0320 08:59:31.305437 27820 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-89bcb965d-7zclw" Mar 20 08:59:31.349065 master-0 kubenswrapper[27820]: I0320 08:59:31.349010 27820 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-89bcb965d-7zclw"] Mar 20 08:59:31.359484 master-0 kubenswrapper[27820]: I0320 08:59:31.359441 27820 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-89bcb965d-7zclw"] Mar 20 08:59:32.381287 master-0 kubenswrapper[27820]: I0320 08:59:32.376881 27820 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7f08d87-ea77-45cc-a09d-c5a26f7c9f20" path="/var/lib/kubelet/pods/a7f08d87-ea77-45cc-a09d-c5a26f7c9f20/volumes" Mar 20 08:59:36.498728 master-0 kubenswrapper[27820]: I0320 08:59:36.498669 27820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-mzl48" Mar 20 08:59:36.498728 master-0 kubenswrapper[27820]: I0320 08:59:36.498717 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-mzl48" Mar 20 08:59:36.523818 master-0 kubenswrapper[27820]: I0320 08:59:36.523751 27820 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-mzl48" Mar 20 08:59:37.443797 master-0 kubenswrapper[27820]: I0320 08:59:37.443741 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-mzl48" Mar 20 09:04:38.750218 master-0 kubenswrapper[27820]: I0320 09:04:38.750038 27820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-jhtnq/must-gather-kc6hc"] Mar 20 09:04:39.010588 master-0 kubenswrapper[27820]: E0320 09:04:38.750771 27820 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7f08d87-ea77-45cc-a09d-c5a26f7c9f20" containerName="console" Mar 20 09:04:39.010588 master-0 kubenswrapper[27820]: I0320 09:04:38.750796 
27820 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7f08d87-ea77-45cc-a09d-c5a26f7c9f20" containerName="console" Mar 20 09:04:39.010588 master-0 kubenswrapper[27820]: I0320 09:04:38.751120 27820 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7f08d87-ea77-45cc-a09d-c5a26f7c9f20" containerName="console" Mar 20 09:04:39.010588 master-0 kubenswrapper[27820]: I0320 09:04:38.752775 27820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jhtnq/must-gather-kc6hc" Mar 20 09:04:39.010588 master-0 kubenswrapper[27820]: I0320 09:04:38.754998 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-jhtnq"/"kube-root-ca.crt" Mar 20 09:04:39.010588 master-0 kubenswrapper[27820]: I0320 09:04:38.756485 27820 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-jhtnq"/"openshift-service-ca.crt" Mar 20 09:04:39.010588 master-0 kubenswrapper[27820]: I0320 09:04:38.828025 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c3869f4e-e988-436d-b977-e97b5127620f-must-gather-output\") pod \"must-gather-kc6hc\" (UID: \"c3869f4e-e988-436d-b977-e97b5127620f\") " pod="openshift-must-gather-jhtnq/must-gather-kc6hc" Mar 20 09:04:39.010588 master-0 kubenswrapper[27820]: I0320 09:04:38.828149 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjfcr\" (UniqueName: \"kubernetes.io/projected/c3869f4e-e988-436d-b977-e97b5127620f-kube-api-access-hjfcr\") pod \"must-gather-kc6hc\" (UID: \"c3869f4e-e988-436d-b977-e97b5127620f\") " pod="openshift-must-gather-jhtnq/must-gather-kc6hc" Mar 20 09:04:39.010588 master-0 kubenswrapper[27820]: I0320 09:04:38.922348 27820 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-jhtnq/must-gather-6hstx"] Mar 20 09:04:39.010588 
master-0 kubenswrapper[27820]: I0320 09:04:38.924441 27820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jhtnq/must-gather-6hstx" Mar 20 09:04:39.010588 master-0 kubenswrapper[27820]: I0320 09:04:38.929357 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjfcr\" (UniqueName: \"kubernetes.io/projected/c3869f4e-e988-436d-b977-e97b5127620f-kube-api-access-hjfcr\") pod \"must-gather-kc6hc\" (UID: \"c3869f4e-e988-436d-b977-e97b5127620f\") " pod="openshift-must-gather-jhtnq/must-gather-kc6hc" Mar 20 09:04:39.010588 master-0 kubenswrapper[27820]: I0320 09:04:38.929470 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c3869f4e-e988-436d-b977-e97b5127620f-must-gather-output\") pod \"must-gather-kc6hc\" (UID: \"c3869f4e-e988-436d-b977-e97b5127620f\") " pod="openshift-must-gather-jhtnq/must-gather-kc6hc" Mar 20 09:04:39.010588 master-0 kubenswrapper[27820]: I0320 09:04:38.930154 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/c3869f4e-e988-436d-b977-e97b5127620f-must-gather-output\") pod \"must-gather-kc6hc\" (UID: \"c3869f4e-e988-436d-b977-e97b5127620f\") " pod="openshift-must-gather-jhtnq/must-gather-kc6hc" Mar 20 09:04:39.031848 master-0 kubenswrapper[27820]: I0320 09:04:39.031747 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dftsb\" (UniqueName: \"kubernetes.io/projected/69ab4021-ea5e-4889-a91c-45e5ca578640-kube-api-access-dftsb\") pod \"must-gather-6hstx\" (UID: \"69ab4021-ea5e-4889-a91c-45e5ca578640\") " pod="openshift-must-gather-jhtnq/must-gather-6hstx" Mar 20 09:04:39.032243 master-0 kubenswrapper[27820]: I0320 09:04:39.031873 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/69ab4021-ea5e-4889-a91c-45e5ca578640-must-gather-output\") pod \"must-gather-6hstx\" (UID: \"69ab4021-ea5e-4889-a91c-45e5ca578640\") " pod="openshift-must-gather-jhtnq/must-gather-6hstx" Mar 20 09:04:39.134869 master-0 kubenswrapper[27820]: I0320 09:04:39.134788 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dftsb\" (UniqueName: \"kubernetes.io/projected/69ab4021-ea5e-4889-a91c-45e5ca578640-kube-api-access-dftsb\") pod \"must-gather-6hstx\" (UID: \"69ab4021-ea5e-4889-a91c-45e5ca578640\") " pod="openshift-must-gather-jhtnq/must-gather-6hstx" Mar 20 09:04:39.135530 master-0 kubenswrapper[27820]: I0320 09:04:39.135428 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/69ab4021-ea5e-4889-a91c-45e5ca578640-must-gather-output\") pod \"must-gather-6hstx\" (UID: \"69ab4021-ea5e-4889-a91c-45e5ca578640\") " pod="openshift-must-gather-jhtnq/must-gather-6hstx" Mar 20 09:04:39.135594 master-0 kubenswrapper[27820]: I0320 09:04:39.135514 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/69ab4021-ea5e-4889-a91c-45e5ca578640-must-gather-output\") pod \"must-gather-6hstx\" (UID: \"69ab4021-ea5e-4889-a91c-45e5ca578640\") " pod="openshift-must-gather-jhtnq/must-gather-6hstx" Mar 20 09:04:39.141655 master-0 kubenswrapper[27820]: I0320 09:04:39.141598 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-jhtnq/must-gather-kc6hc"] Mar 20 09:04:39.148978 master-0 kubenswrapper[27820]: I0320 09:04:39.148922 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-jhtnq/must-gather-6hstx"] Mar 20 09:04:39.219289 master-0 kubenswrapper[27820]: I0320 09:04:39.218250 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-dftsb\" (UniqueName: \"kubernetes.io/projected/69ab4021-ea5e-4889-a91c-45e5ca578640-kube-api-access-dftsb\") pod \"must-gather-6hstx\" (UID: \"69ab4021-ea5e-4889-a91c-45e5ca578640\") " pod="openshift-must-gather-jhtnq/must-gather-6hstx" Mar 20 09:04:39.231287 master-0 kubenswrapper[27820]: I0320 09:04:39.223871 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjfcr\" (UniqueName: \"kubernetes.io/projected/c3869f4e-e988-436d-b977-e97b5127620f-kube-api-access-hjfcr\") pod \"must-gather-kc6hc\" (UID: \"c3869f4e-e988-436d-b977-e97b5127620f\") " pod="openshift-must-gather-jhtnq/must-gather-kc6hc" Mar 20 09:04:39.256291 master-0 kubenswrapper[27820]: I0320 09:04:39.241515 27820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jhtnq/must-gather-6hstx" Mar 20 09:04:39.368303 master-0 kubenswrapper[27820]: I0320 09:04:39.366943 27820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jhtnq/must-gather-kc6hc" Mar 20 09:04:39.833535 master-0 kubenswrapper[27820]: I0320 09:04:39.833493 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-jhtnq/must-gather-6hstx"] Mar 20 09:04:39.834821 master-0 kubenswrapper[27820]: W0320 09:04:39.834777 27820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod69ab4021_ea5e_4889_a91c_45e5ca578640.slice/crio-28bc54c2b225025d9476918ca5a82087b6bca0aa14b67f2b4167d3e7354ef4ec WatchSource:0}: Error finding container 28bc54c2b225025d9476918ca5a82087b6bca0aa14b67f2b4167d3e7354ef4ec: Status 404 returned error can't find the container with id 28bc54c2b225025d9476918ca5a82087b6bca0aa14b67f2b4167d3e7354ef4ec Mar 20 09:04:39.837183 master-0 kubenswrapper[27820]: I0320 09:04:39.837141 27820 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 20 
09:04:39.925286 master-0 kubenswrapper[27820]: I0320 09:04:39.925205 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-jhtnq/must-gather-kc6hc"] Mar 20 09:04:39.927755 master-0 kubenswrapper[27820]: W0320 09:04:39.927695 27820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3869f4e_e988_436d_b977_e97b5127620f.slice/crio-ac7fb581df8825b065486fd99afde1174bee71604aea71b29dc31fb97d4b539c WatchSource:0}: Error finding container ac7fb581df8825b065486fd99afde1174bee71604aea71b29dc31fb97d4b539c: Status 404 returned error can't find the container with id ac7fb581df8825b065486fd99afde1174bee71604aea71b29dc31fb97d4b539c Mar 20 09:04:40.431988 master-0 kubenswrapper[27820]: I0320 09:04:40.431908 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jhtnq/must-gather-6hstx" event={"ID":"69ab4021-ea5e-4889-a91c-45e5ca578640","Type":"ContainerStarted","Data":"28bc54c2b225025d9476918ca5a82087b6bca0aa14b67f2b4167d3e7354ef4ec"} Mar 20 09:04:40.432949 master-0 kubenswrapper[27820]: I0320 09:04:40.432907 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jhtnq/must-gather-kc6hc" event={"ID":"c3869f4e-e988-436d-b977-e97b5127620f","Type":"ContainerStarted","Data":"ac7fb581df8825b065486fd99afde1174bee71604aea71b29dc31fb97d4b539c"} Mar 20 09:04:49.535016 master-0 kubenswrapper[27820]: I0320 09:04:49.534906 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jhtnq/must-gather-6hstx" event={"ID":"69ab4021-ea5e-4889-a91c-45e5ca578640","Type":"ContainerStarted","Data":"86ca6489fefd3b84511ede60fc631635ff4b3d79e8f78e3d3e14cbfb16f4ff76"} Mar 20 09:04:49.535583 master-0 kubenswrapper[27820]: I0320 09:04:49.535565 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jhtnq/must-gather-6hstx" 
event={"ID":"69ab4021-ea5e-4889-a91c-45e5ca578640","Type":"ContainerStarted","Data":"1dfc61f5b80e8771a1a4ade4d441bd8e48692e2c51552cd79b14cead2bb63b22"} Mar 20 09:04:49.536940 master-0 kubenswrapper[27820]: I0320 09:04:49.536896 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jhtnq/must-gather-kc6hc" event={"ID":"c3869f4e-e988-436d-b977-e97b5127620f","Type":"ContainerStarted","Data":"e768f04f665c18f2220f291cf49e69e15861e2ae386099e8b9c086eb4ababe82"} Mar 20 09:04:49.536940 master-0 kubenswrapper[27820]: I0320 09:04:49.536928 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jhtnq/must-gather-kc6hc" event={"ID":"c3869f4e-e988-436d-b977-e97b5127620f","Type":"ContainerStarted","Data":"5d44ff4e5fc921943f4bce5a5a00353bbdd5d49af132bcc583a8b563b3d253a4"} Mar 20 09:04:49.600984 master-0 kubenswrapper[27820]: I0320 09:04:49.600922 27820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-jhtnq/must-gather-6hstx" podStartSLOduration=2.617130517 podStartE2EDuration="11.600901577s" podCreationTimestamp="2026-03-20 09:04:38 +0000 UTC" firstStartedPulling="2026-03-20 09:04:39.837025961 +0000 UTC m=+889.932235115" lastFinishedPulling="2026-03-20 09:04:48.820797031 +0000 UTC m=+898.916006175" observedRunningTime="2026-03-20 09:04:49.578653978 +0000 UTC m=+899.673863152" watchObservedRunningTime="2026-03-20 09:04:49.600901577 +0000 UTC m=+899.696110721" Mar 20 09:04:49.604145 master-0 kubenswrapper[27820]: I0320 09:04:49.604102 27820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-jhtnq/must-gather-kc6hc" podStartSLOduration=2.853529672 podStartE2EDuration="11.604090014s" podCreationTimestamp="2026-03-20 09:04:38 +0000 UTC" firstStartedPulling="2026-03-20 09:04:39.930466406 +0000 UTC m=+890.025675550" lastFinishedPulling="2026-03-20 09:04:48.681026748 +0000 UTC m=+898.776235892" observedRunningTime="2026-03-20 09:04:49.598249244 
+0000 UTC m=+899.693458398" watchObservedRunningTime="2026-03-20 09:04:49.604090014 +0000 UTC m=+899.699299158" Mar 20 09:04:51.267808 master-0 kubenswrapper[27820]: I0320 09:04:51.267738 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-version_cluster-version-operator-7d58488df-bzstx_bca4cc7c-839d-4877-b0aa-c07607fea404/cluster-version-operator/0.log" Mar 20 09:04:54.297288 master-0 kubenswrapper[27820]: I0320 09:04:54.296676 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-z9frd_51eafdd1-74d2-4441-96c6-e5edd8705e55/controller/0.log" Mar 20 09:04:54.306287 master-0 kubenswrapper[27820]: I0320 09:04:54.305735 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-z9frd_51eafdd1-74d2-4441-96c6-e5edd8705e55/kube-rbac-proxy/0.log" Mar 20 09:04:54.333562 master-0 kubenswrapper[27820]: I0320 09:04:54.333273 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tl7kr_2854e640-52d9-4a65-8b6c-4bc273a80668/controller/0.log" Mar 20 09:04:54.374450 master-0 kubenswrapper[27820]: I0320 09:04:54.374033 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tl7kr_2854e640-52d9-4a65-8b6c-4bc273a80668/frr/0.log" Mar 20 09:04:54.385890 master-0 kubenswrapper[27820]: I0320 09:04:54.385477 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tl7kr_2854e640-52d9-4a65-8b6c-4bc273a80668/reloader/0.log" Mar 20 09:04:54.408290 master-0 kubenswrapper[27820]: I0320 09:04:54.407522 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tl7kr_2854e640-52d9-4a65-8b6c-4bc273a80668/frr-metrics/0.log" Mar 20 09:04:54.421229 master-0 kubenswrapper[27820]: I0320 09:04:54.420315 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tl7kr_2854e640-52d9-4a65-8b6c-4bc273a80668/kube-rbac-proxy/0.log" Mar 
20 09:04:54.449292 master-0 kubenswrapper[27820]: I0320 09:04:54.448334 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tl7kr_2854e640-52d9-4a65-8b6c-4bc273a80668/kube-rbac-proxy-frr/0.log" Mar 20 09:04:54.463815 master-0 kubenswrapper[27820]: I0320 09:04:54.460969 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tl7kr_2854e640-52d9-4a65-8b6c-4bc273a80668/cp-frr-files/0.log" Mar 20 09:04:54.487543 master-0 kubenswrapper[27820]: I0320 09:04:54.485953 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tl7kr_2854e640-52d9-4a65-8b6c-4bc273a80668/cp-reloader/0.log" Mar 20 09:04:54.499288 master-0 kubenswrapper[27820]: I0320 09:04:54.499090 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tl7kr_2854e640-52d9-4a65-8b6c-4bc273a80668/cp-metrics/0.log" Mar 20 09:04:54.521950 master-0 kubenswrapper[27820]: I0320 09:04:54.521863 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-bcc4b6f68-pn2fv_bb8fd2f6-e697-42fa-8e7d-f5737a39f6e8/frr-k8s-webhook-server/0.log" Mar 20 09:04:54.524619 master-0 kubenswrapper[27820]: I0320 09:04:54.522830 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcdctl/0.log" Mar 20 09:04:54.561899 master-0 kubenswrapper[27820]: I0320 09:04:54.560632 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6485fcfd64-dfmlb_df2f3936-c47f-46f1-acb5-0af23bf9bf1c/manager/0.log" Mar 20 09:04:54.582763 master-0 kubenswrapper[27820]: I0320 09:04:54.582723 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6465fb44b7-42g97_21e7b181-5c6b-433e-adf7-ebe2d7b45aa7/webhook-server/0.log" Mar 20 09:04:54.712354 master-0 kubenswrapper[27820]: I0320 09:04:54.708821 27820 
log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-p2fhx_f01da03b-1f5e-4ade-a4f9-e0dac32eb142/speaker/0.log" Mar 20 09:04:54.804290 master-0 kubenswrapper[27820]: I0320 09:04:54.803683 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-p2fhx_f01da03b-1f5e-4ade-a4f9-e0dac32eb142/kube-rbac-proxy/0.log" Mar 20 09:04:54.882102 master-0 kubenswrapper[27820]: I0320 09:04:54.880421 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd/0.log" Mar 20 09:04:55.154603 master-0 kubenswrapper[27820]: I0320 09:04:55.154562 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd-metrics/0.log" Mar 20 09:04:55.274314 master-0 kubenswrapper[27820]: I0320 09:04:55.271752 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd-readyz/0.log" Mar 20 09:04:55.320511 master-0 kubenswrapper[27820]: I0320 09:04:55.313548 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd-rev/0.log" Mar 20 09:04:55.342288 master-0 kubenswrapper[27820]: I0320 09:04:55.341572 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/setup/0.log" Mar 20 09:04:55.366301 master-0 kubenswrapper[27820]: I0320 09:04:55.363931 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd-ensure-env-vars/0.log" Mar 20 09:04:55.376312 master-0 kubenswrapper[27820]: I0320 09:04:55.375413 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-86f58fcf4-frpgr_ef2cb375-2652-47d1-bf48-a5411ff51a2c/nmstate-console-plugin/0.log" Mar 20 09:04:55.401322 master-0 
kubenswrapper[27820]: I0320 09:04:55.400755 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd-resources-copy/0.log" Mar 20 09:04:55.401572 master-0 kubenswrapper[27820]: I0320 09:04:55.401528 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-7c6kf_23a22e55-9f4f-4f31-81c5-328720dee978/nmstate-handler/0.log" Mar 20 09:04:55.430368 master-0 kubenswrapper[27820]: I0320 09:04:55.426439 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-d25js_4ef5015e-1e99-4f9e-ba7c-59b462ff2188/nmstate-metrics/0.log" Mar 20 09:04:55.456374 master-0 kubenswrapper[27820]: E0320 09:04:55.456296 27820 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 192.168.32.10:45844->192.168.32.10:44157: read tcp 192.168.32.10:45844->192.168.32.10:44157: read: connection reset by peer Mar 20 09:04:55.457285 master-0 kubenswrapper[27820]: I0320 09:04:55.457190 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-d25js_4ef5015e-1e99-4f9e-ba7c-59b462ff2188/kube-rbac-proxy/0.log" Mar 20 09:04:55.489540 master-0 kubenswrapper[27820]: I0320 09:04:55.488597 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-1-master-0_169353ee-c927-4483-8976-b9ca08b0a6d1/installer/0.log" Mar 20 09:04:55.492350 master-0 kubenswrapper[27820]: I0320 09:04:55.491898 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-796d4cfff4-wv4sj_1f68b1a3-e1e0-47e5-baa6-14c6b8e34e3f/nmstate-operator/0.log" Mar 20 09:04:55.514509 master-0 kubenswrapper[27820]: I0320 09:04:55.514458 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f558f5558-mvkrl_58d0dca7-7d2f-4601-95a1-377c982d2d41/nmstate-webhook/0.log" Mar 20 09:04:55.534944 master-0 
kubenswrapper[27820]: I0320 09:04:55.534907 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-2-master-0_26923e70-56a5-4020-8b55-510879ec6fd4/installer/0.log" Mar 20 09:04:55.601936 master-0 kubenswrapper[27820]: I0320 09:04:55.601908 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-z9frd_51eafdd1-74d2-4441-96c6-e5edd8705e55/controller/0.log" Mar 20 09:04:55.620815 master-0 kubenswrapper[27820]: I0320 09:04:55.619359 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-z9frd_51eafdd1-74d2-4441-96c6-e5edd8705e55/kube-rbac-proxy/0.log" Mar 20 09:04:55.651890 master-0 kubenswrapper[27820]: I0320 09:04:55.651844 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tl7kr_2854e640-52d9-4a65-8b6c-4bc273a80668/controller/0.log" Mar 20 09:04:55.698395 master-0 kubenswrapper[27820]: I0320 09:04:55.698291 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tl7kr_2854e640-52d9-4a65-8b6c-4bc273a80668/frr/0.log" Mar 20 09:04:55.717328 master-0 kubenswrapper[27820]: I0320 09:04:55.717245 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tl7kr_2854e640-52d9-4a65-8b6c-4bc273a80668/reloader/0.log" Mar 20 09:04:55.723610 master-0 kubenswrapper[27820]: I0320 09:04:55.723173 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tl7kr_2854e640-52d9-4a65-8b6c-4bc273a80668/frr-metrics/0.log" Mar 20 09:04:55.744045 master-0 kubenswrapper[27820]: I0320 09:04:55.743988 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tl7kr_2854e640-52d9-4a65-8b6c-4bc273a80668/kube-rbac-proxy/0.log" Mar 20 09:04:55.751230 master-0 kubenswrapper[27820]: I0320 09:04:55.751191 27820 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-tl7kr_2854e640-52d9-4a65-8b6c-4bc273a80668/kube-rbac-proxy-frr/0.log" Mar 20 09:04:55.767326 master-0 kubenswrapper[27820]: I0320 09:04:55.767276 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tl7kr_2854e640-52d9-4a65-8b6c-4bc273a80668/cp-frr-files/0.log" Mar 20 09:04:55.779147 master-0 kubenswrapper[27820]: I0320 09:04:55.779103 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tl7kr_2854e640-52d9-4a65-8b6c-4bc273a80668/cp-reloader/0.log" Mar 20 09:04:55.796987 master-0 kubenswrapper[27820]: I0320 09:04:55.795611 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tl7kr_2854e640-52d9-4a65-8b6c-4bc273a80668/cp-metrics/0.log" Mar 20 09:04:55.810165 master-0 kubenswrapper[27820]: I0320 09:04:55.809428 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-bcc4b6f68-pn2fv_bb8fd2f6-e697-42fa-8e7d-f5737a39f6e8/frr-k8s-webhook-server/0.log" Mar 20 09:04:55.834322 master-0 kubenswrapper[27820]: I0320 09:04:55.833746 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6485fcfd64-dfmlb_df2f3936-c47f-46f1-acb5-0af23bf9bf1c/manager/0.log" Mar 20 09:04:55.844326 master-0 kubenswrapper[27820]: I0320 09:04:55.843361 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6465fb44b7-42g97_21e7b181-5c6b-433e-adf7-ebe2d7b45aa7/webhook-server/0.log" Mar 20 09:04:55.912307 master-0 kubenswrapper[27820]: I0320 09:04:55.911833 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-p2fhx_f01da03b-1f5e-4ade-a4f9-e0dac32eb142/speaker/0.log" Mar 20 09:04:55.924325 master-0 kubenswrapper[27820]: I0320 09:04:55.922097 27820 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_speaker-p2fhx_f01da03b-1f5e-4ade-a4f9-e0dac32eb142/kube-rbac-proxy/0.log" Mar 20 09:04:56.621655 master-0 kubenswrapper[27820]: I0320 09:04:56.621610 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/assisted-installer_assisted-installer-controller-j6hxl_2a25b643-c08d-462f-80f4-8a4feb1e26e8/assisted-installer-controller/0.log" Mar 20 09:04:57.891143 master-0 kubenswrapper[27820]: I0320 09:04:57.891082 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-98d8fdfc5-dbdbd_34085a98-e268-499c-ab1b-f058add5cbfa/oauth-openshift/0.log" Mar 20 09:04:58.808487 master-0 kubenswrapper[27820]: I0320 09:04:58.808442 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5885bfd7f4-tdpfq_8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072/authentication-operator/1.log" Mar 20 09:04:58.831720 master-0 kubenswrapper[27820]: I0320 09:04:58.831617 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-5885bfd7f4-tdpfq_8f9eaa7f-7c61-4f6e-b3b3-bf4797dfb072/authentication-operator/2.log" Mar 20 09:04:59.559533 master-0 kubenswrapper[27820]: I0320 09:04:59.559416 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-7dcf5569b5-kvmtp_e89571b2-098c-495b-9b53-c4ebd95296ab/router/4.log" Mar 20 09:04:59.571386 master-0 kubenswrapper[27820]: I0320 09:04:59.571343 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-7dcf5569b5-kvmtp_e89571b2-098c-495b-9b53-c4ebd95296ab/router/3.log" Mar 20 09:05:00.214868 master-0 kubenswrapper[27820]: I0320 09:05:00.214808 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-5595498c49-hrfrr_6a6a187d-5b25-4d63-939e-c04e07369371/oauth-apiserver/0.log" Mar 20 09:05:00.224954 master-0 kubenswrapper[27820]: 
I0320 09:05:00.224894 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-5595498c49-hrfrr_6a6a187d-5b25-4d63-939e-c04e07369371/fix-audit-permissions/0.log" Mar 20 09:05:00.237296 master-0 kubenswrapper[27820]: I0320 09:05:00.237231 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-mzl48_4bc06990-cfb0-4664-9ae7-f787327d1401/registry-server/0.log" Mar 20 09:05:00.777839 master-0 kubenswrapper[27820]: I0320 09:05:00.777791 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-866dc4744-626qm_2d125bc5-08ce-434a-bde7-0ba8fc0169ea/kube-rbac-proxy/0.log" Mar 20 09:05:00.804685 master-0 kubenswrapper[27820]: I0320 09:05:00.804632 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-866dc4744-626qm_2d125bc5-08ce-434a-bde7-0ba8fc0169ea/cluster-autoscaler-operator/0.log" Mar 20 09:05:00.813701 master-0 kubenswrapper[27820]: I0320 09:05:00.813643 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-866dc4744-626qm_2d125bc5-08ce-434a-bde7-0ba8fc0169ea/cluster-autoscaler-operator/1.log" Mar 20 09:05:00.874243 master-0 kubenswrapper[27820]: I0320 09:05:00.874192 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-b25f2_f202273a-b111-46ce-b404-7e481d2c7ff9/cluster-baremetal-operator/3.log" Mar 20 09:05:00.877901 master-0 kubenswrapper[27820]: I0320 09:05:00.877849 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-b25f2_f202273a-b111-46ce-b404-7e481d2c7ff9/cluster-baremetal-operator/2.log" Mar 20 09:05:00.895800 master-0 kubenswrapper[27820]: I0320 09:05:00.895740 27820 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-must-gather-jhtnq/perf-node-gather-daemonset-vqbqr"] Mar 20 09:05:00.896789 master-0 kubenswrapper[27820]: I0320 09:05:00.896767 27820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jhtnq/perf-node-gather-daemonset-vqbqr" Mar 20 09:05:00.931767 master-0 kubenswrapper[27820]: I0320 09:05:00.931712 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-b25f2_f202273a-b111-46ce-b404-7e481d2c7ff9/baremetal-kube-rbac-proxy/0.log" Mar 20 09:05:00.933622 master-0 kubenswrapper[27820]: I0320 09:05:00.933568 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-jhtnq/perf-node-gather-daemonset-vqbqr"] Mar 20 09:05:00.994147 master-0 kubenswrapper[27820]: I0320 09:05:00.993726 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6f97756bc8-tkwh6_a86af6a2-55a9-4c4e-8caf-1f51fedb23f5/control-plane-machine-set-operator/1.log" Mar 20 09:05:01.003212 master-0 kubenswrapper[27820]: I0320 09:05:01.003157 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6f97756bc8-tkwh6_a86af6a2-55a9-4c4e-8caf-1f51fedb23f5/control-plane-machine-set-operator/0.log" Mar 20 09:05:01.027361 master-0 kubenswrapper[27820]: I0320 09:05:01.027278 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndvl8\" (UniqueName: \"kubernetes.io/projected/9465dd33-bfd7-49e1-a789-317c5f3e1e4e-kube-api-access-ndvl8\") pod \"perf-node-gather-daemonset-vqbqr\" (UID: \"9465dd33-bfd7-49e1-a789-317c5f3e1e4e\") " pod="openshift-must-gather-jhtnq/perf-node-gather-daemonset-vqbqr" Mar 20 09:05:01.027361 master-0 kubenswrapper[27820]: I0320 09:05:01.027372 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9465dd33-bfd7-49e1-a789-317c5f3e1e4e-lib-modules\") pod \"perf-node-gather-daemonset-vqbqr\" (UID: \"9465dd33-bfd7-49e1-a789-317c5f3e1e4e\") " pod="openshift-must-gather-jhtnq/perf-node-gather-daemonset-vqbqr" Mar 20 09:05:01.027831 master-0 kubenswrapper[27820]: I0320 09:05:01.027401 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/9465dd33-bfd7-49e1-a789-317c5f3e1e4e-proc\") pod \"perf-node-gather-daemonset-vqbqr\" (UID: \"9465dd33-bfd7-49e1-a789-317c5f3e1e4e\") " pod="openshift-must-gather-jhtnq/perf-node-gather-daemonset-vqbqr" Mar 20 09:05:01.027831 master-0 kubenswrapper[27820]: I0320 09:05:01.027450 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/9465dd33-bfd7-49e1-a789-317c5f3e1e4e-sys\") pod \"perf-node-gather-daemonset-vqbqr\" (UID: \"9465dd33-bfd7-49e1-a789-317c5f3e1e4e\") " pod="openshift-must-gather-jhtnq/perf-node-gather-daemonset-vqbqr" Mar 20 09:05:01.027831 master-0 kubenswrapper[27820]: I0320 09:05:01.027489 27820 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/9465dd33-bfd7-49e1-a789-317c5f3e1e4e-podres\") pod \"perf-node-gather-daemonset-vqbqr\" (UID: \"9465dd33-bfd7-49e1-a789-317c5f3e1e4e\") " pod="openshift-must-gather-jhtnq/perf-node-gather-daemonset-vqbqr" Mar 20 09:05:01.031462 master-0 kubenswrapper[27820]: I0320 09:05:01.030902 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-6fbb6cf6f9-lr7tb_80ddf0a4-e853-4de0-b540-81144dfdd31d/kube-rbac-proxy/0.log" Mar 20 09:05:01.052285 master-0 kubenswrapper[27820]: I0320 09:05:01.052206 27820 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_machine-api-operator-6fbb6cf6f9-lr7tb_80ddf0a4-e853-4de0-b540-81144dfdd31d/machine-api-operator/0.log" Mar 20 09:05:01.053384 master-0 kubenswrapper[27820]: I0320 09:05:01.053359 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-6fbb6cf6f9-lr7tb_80ddf0a4-e853-4de0-b540-81144dfdd31d/machine-api-operator/1.log" Mar 20 09:05:01.130168 master-0 kubenswrapper[27820]: I0320 09:05:01.130104 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9465dd33-bfd7-49e1-a789-317c5f3e1e4e-lib-modules\") pod \"perf-node-gather-daemonset-vqbqr\" (UID: \"9465dd33-bfd7-49e1-a789-317c5f3e1e4e\") " pod="openshift-must-gather-jhtnq/perf-node-gather-daemonset-vqbqr" Mar 20 09:05:01.130408 master-0 kubenswrapper[27820]: I0320 09:05:01.130232 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/9465dd33-bfd7-49e1-a789-317c5f3e1e4e-proc\") pod \"perf-node-gather-daemonset-vqbqr\" (UID: \"9465dd33-bfd7-49e1-a789-317c5f3e1e4e\") " pod="openshift-must-gather-jhtnq/perf-node-gather-daemonset-vqbqr" Mar 20 09:05:01.130547 master-0 kubenswrapper[27820]: I0320 09:05:01.130255 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9465dd33-bfd7-49e1-a789-317c5f3e1e4e-lib-modules\") pod \"perf-node-gather-daemonset-vqbqr\" (UID: \"9465dd33-bfd7-49e1-a789-317c5f3e1e4e\") " pod="openshift-must-gather-jhtnq/perf-node-gather-daemonset-vqbqr" Mar 20 09:05:01.130588 master-0 kubenswrapper[27820]: I0320 09:05:01.130363 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/9465dd33-bfd7-49e1-a789-317c5f3e1e4e-proc\") pod \"perf-node-gather-daemonset-vqbqr\" (UID: \"9465dd33-bfd7-49e1-a789-317c5f3e1e4e\") " 
pod="openshift-must-gather-jhtnq/perf-node-gather-daemonset-vqbqr" Mar 20 09:05:01.130588 master-0 kubenswrapper[27820]: I0320 09:05:01.130490 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/9465dd33-bfd7-49e1-a789-317c5f3e1e4e-sys\") pod \"perf-node-gather-daemonset-vqbqr\" (UID: \"9465dd33-bfd7-49e1-a789-317c5f3e1e4e\") " pod="openshift-must-gather-jhtnq/perf-node-gather-daemonset-vqbqr" Mar 20 09:05:01.130655 master-0 kubenswrapper[27820]: I0320 09:05:01.130637 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/9465dd33-bfd7-49e1-a789-317c5f3e1e4e-sys\") pod \"perf-node-gather-daemonset-vqbqr\" (UID: \"9465dd33-bfd7-49e1-a789-317c5f3e1e4e\") " pod="openshift-must-gather-jhtnq/perf-node-gather-daemonset-vqbqr" Mar 20 09:05:01.130708 master-0 kubenswrapper[27820]: I0320 09:05:01.130688 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/9465dd33-bfd7-49e1-a789-317c5f3e1e4e-podres\") pod \"perf-node-gather-daemonset-vqbqr\" (UID: \"9465dd33-bfd7-49e1-a789-317c5f3e1e4e\") " pod="openshift-must-gather-jhtnq/perf-node-gather-daemonset-vqbqr" Mar 20 09:05:01.130806 master-0 kubenswrapper[27820]: I0320 09:05:01.130778 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/9465dd33-bfd7-49e1-a789-317c5f3e1e4e-podres\") pod \"perf-node-gather-daemonset-vqbqr\" (UID: \"9465dd33-bfd7-49e1-a789-317c5f3e1e4e\") " pod="openshift-must-gather-jhtnq/perf-node-gather-daemonset-vqbqr" Mar 20 09:05:01.130844 master-0 kubenswrapper[27820]: I0320 09:05:01.130804 27820 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndvl8\" (UniqueName: \"kubernetes.io/projected/9465dd33-bfd7-49e1-a789-317c5f3e1e4e-kube-api-access-ndvl8\") pod 
\"perf-node-gather-daemonset-vqbqr\" (UID: \"9465dd33-bfd7-49e1-a789-317c5f3e1e4e\") " pod="openshift-must-gather-jhtnq/perf-node-gather-daemonset-vqbqr" Mar 20 09:05:01.148010 master-0 kubenswrapper[27820]: I0320 09:05:01.147795 27820 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndvl8\" (UniqueName: \"kubernetes.io/projected/9465dd33-bfd7-49e1-a789-317c5f3e1e4e-kube-api-access-ndvl8\") pod \"perf-node-gather-daemonset-vqbqr\" (UID: \"9465dd33-bfd7-49e1-a789-317c5f3e1e4e\") " pod="openshift-must-gather-jhtnq/perf-node-gather-daemonset-vqbqr" Mar 20 09:05:01.215148 master-0 kubenswrapper[27820]: I0320 09:05:01.215072 27820 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-jhtnq/perf-node-gather-daemonset-vqbqr" Mar 20 09:05:01.654775 master-0 kubenswrapper[27820]: W0320 09:05:01.654722 27820 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod9465dd33_bfd7_49e1_a789_317c5f3e1e4e.slice/crio-ed72eb8649db3f61d95c8b6977168000122b029f519380f62c0546f2e4d1a0be WatchSource:0}: Error finding container ed72eb8649db3f61d95c8b6977168000122b029f519380f62c0546f2e4d1a0be: Status 404 returned error can't find the container with id ed72eb8649db3f61d95c8b6977168000122b029f519380f62c0546f2e4d1a0be Mar 20 09:05:01.666931 master-0 kubenswrapper[27820]: I0320 09:05:01.666287 27820 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-jhtnq/perf-node-gather-daemonset-vqbqr"] Mar 20 09:05:01.672477 master-0 kubenswrapper[27820]: I0320 09:05:01.672402 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jhtnq/perf-node-gather-daemonset-vqbqr" event={"ID":"9465dd33-bfd7-49e1-a789-317c5f3e1e4e","Type":"ContainerStarted","Data":"ed72eb8649db3f61d95c8b6977168000122b029f519380f62c0546f2e4d1a0be"} Mar 20 09:05:01.979610 master-0 kubenswrapper[27820]: I0320 09:05:01.979465 27820 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-vk98n_6163bd4b-dc83-4e83-8590-5ac4753bda1c/cluster-cloud-controller-manager/0.log" Mar 20 09:05:01.979610 master-0 kubenswrapper[27820]: I0320 09:05:01.979599 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-vk98n_6163bd4b-dc83-4e83-8590-5ac4753bda1c/cluster-cloud-controller-manager/1.log" Mar 20 09:05:01.990945 master-0 kubenswrapper[27820]: I0320 09:05:01.990903 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-vk98n_6163bd4b-dc83-4e83-8590-5ac4753bda1c/config-sync-controllers/0.log" Mar 20 09:05:01.991929 master-0 kubenswrapper[27820]: I0320 09:05:01.991902 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-vk98n_6163bd4b-dc83-4e83-8590-5ac4753bda1c/config-sync-controllers/1.log" Mar 20 09:05:02.004028 master-0 kubenswrapper[27820]: I0320 09:05:02.003963 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7dff898856-vk98n_6163bd4b-dc83-4e83-8590-5ac4753bda1c/kube-rbac-proxy/0.log" Mar 20 09:05:02.681448 master-0 kubenswrapper[27820]: I0320 09:05:02.681382 27820 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-jhtnq/perf-node-gather-daemonset-vqbqr" event={"ID":"9465dd33-bfd7-49e1-a789-317c5f3e1e4e","Type":"ContainerStarted","Data":"f06fb56156f64c85cd50a3d267aeec92c43e63232021ed39cbd1e11bd502c14c"} Mar 20 09:05:02.681691 master-0 kubenswrapper[27820]: I0320 09:05:02.681517 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-must-gather-jhtnq/perf-node-gather-daemonset-vqbqr" Mar 20 09:05:03.204443 master-0 kubenswrapper[27820]: I0320 09:05:03.204392 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-credential-operator_cloud-credential-operator-744f9dbf77-6mrwl_581a8be2-d16c-4fd8-b051-214bd60a2a91/kube-rbac-proxy/0.log" Mar 20 09:05:03.516820 master-0 kubenswrapper[27820]: I0320 09:05:03.516348 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-credential-operator_cloud-credential-operator-744f9dbf77-6mrwl_581a8be2-d16c-4fd8-b051-214bd60a2a91/cloud-credential-operator/0.log" Mar 20 09:05:05.100630 master-0 kubenswrapper[27820]: I0320 09:05:05.100568 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-95bf4f4d-25jrp_3065e4b4-4493-41ce-b9d2-89315475f74f/openshift-config-operator/2.log" Mar 20 09:05:05.104432 master-0 kubenswrapper[27820]: I0320 09:05:05.104394 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-95bf4f4d-25jrp_3065e4b4-4493-41ce-b9d2-89315475f74f/openshift-config-operator/3.log" Mar 20 09:05:05.117933 master-0 kubenswrapper[27820]: I0320 09:05:05.117872 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-95bf4f4d-25jrp_3065e4b4-4493-41ce-b9d2-89315475f74f/openshift-api/0.log" Mar 20 09:05:05.860804 master-0 kubenswrapper[27820]: I0320 09:05:05.860738 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-76b6568d85-rpz95_c1e02d0c-443f-4923-b3dd-a4f3f88d9a05/console-operator/0.log" Mar 20 09:05:06.386774 master-0 kubenswrapper[27820]: I0320 09:05:06.386714 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7bd98dd549-btpxq_e50e09eb-9d18-474c-b9ef-74b91c219d00/console/0.log" Mar 20 09:05:06.407389 master-0 
kubenswrapper[27820]: I0320 09:05:06.407331 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_downloads-66b8ffb895-8rz5f_66c7dc3c-174d-4f87-8c7b-c5c7b8649fb9/download-server/0.log" Mar 20 09:05:06.452101 master-0 kubenswrapper[27820]: I0320 09:05:06.452046 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-866dc4744-626qm_2d125bc5-08ce-434a-bde7-0ba8fc0169ea/kube-rbac-proxy/0.log" Mar 20 09:05:06.474059 master-0 kubenswrapper[27820]: I0320 09:05:06.474002 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-866dc4744-626qm_2d125bc5-08ce-434a-bde7-0ba8fc0169ea/cluster-autoscaler-operator/0.log" Mar 20 09:05:06.485523 master-0 kubenswrapper[27820]: I0320 09:05:06.485473 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-866dc4744-626qm_2d125bc5-08ce-434a-bde7-0ba8fc0169ea/cluster-autoscaler-operator/1.log" Mar 20 09:05:06.499528 master-0 kubenswrapper[27820]: I0320 09:05:06.499463 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-b25f2_f202273a-b111-46ce-b404-7e481d2c7ff9/cluster-baremetal-operator/2.log" Mar 20 09:05:06.500783 master-0 kubenswrapper[27820]: I0320 09:05:06.500744 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-b25f2_f202273a-b111-46ce-b404-7e481d2c7ff9/cluster-baremetal-operator/3.log" Mar 20 09:05:06.510840 master-0 kubenswrapper[27820]: I0320 09:05:06.510792 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-6f69995874-b25f2_f202273a-b111-46ce-b404-7e481d2c7ff9/baremetal-kube-rbac-proxy/0.log" Mar 20 09:05:06.526386 master-0 kubenswrapper[27820]: I0320 09:05:06.526331 27820 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6f97756bc8-tkwh6_a86af6a2-55a9-4c4e-8caf-1f51fedb23f5/control-plane-machine-set-operator/1.log" Mar 20 09:05:06.526615 master-0 kubenswrapper[27820]: I0320 09:05:06.526529 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6f97756bc8-tkwh6_a86af6a2-55a9-4c4e-8caf-1f51fedb23f5/control-plane-machine-set-operator/0.log" Mar 20 09:05:06.540678 master-0 kubenswrapper[27820]: I0320 09:05:06.540628 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-6fbb6cf6f9-lr7tb_80ddf0a4-e853-4de0-b540-81144dfdd31d/kube-rbac-proxy/0.log" Mar 20 09:05:06.548891 master-0 kubenswrapper[27820]: I0320 09:05:06.548844 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-6fbb6cf6f9-lr7tb_80ddf0a4-e853-4de0-b540-81144dfdd31d/machine-api-operator/1.log" Mar 20 09:05:06.553293 master-0 kubenswrapper[27820]: I0320 09:05:06.553237 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-6fbb6cf6f9-lr7tb_80ddf0a4-e853-4de0-b540-81144dfdd31d/machine-api-operator/0.log" Mar 20 09:05:07.117642 master-0 kubenswrapper[27820]: I0320 09:05:07.117601 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_cluster-storage-operator-7d87854d6-848gc_e9c0293a-5340-4ebe-bc8f-43e78ba9f280/cluster-storage-operator/0.log" Mar 20 09:05:07.119574 master-0 kubenswrapper[27820]: I0320 09:05:07.119541 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_cluster-storage-operator-7d87854d6-848gc_e9c0293a-5340-4ebe-bc8f-43e78ba9f280/cluster-storage-operator/1.log" Mar 20 09:05:07.137354 master-0 kubenswrapper[27820]: I0320 09:05:07.137306 27820 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-gng67_a2a3df6e-e327-4e97-b8f0-f2d6cdd1e5f9/snapshot-controller/3.log" Mar 20 09:05:07.137581 master-0 kubenswrapper[27820]: I0320 09:05:07.137460 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-64854d9cff-gng67_a2a3df6e-e327-4e97-b8f0-f2d6cdd1e5f9/snapshot-controller/4.log" Mar 20 09:05:07.164116 master-0 kubenswrapper[27820]: I0320 09:05:07.164025 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-operator-5f5d689c6b-b5lg6_acbaba45-12d9-40b9-818c-4b091d7929b1/csi-snapshot-controller-operator/0.log" Mar 20 09:05:07.166316 master-0 kubenswrapper[27820]: I0320 09:05:07.166244 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-operator-5f5d689c6b-b5lg6_acbaba45-12d9-40b9-818c-4b091d7929b1/csi-snapshot-controller-operator/1.log" Mar 20 09:05:07.643385 master-0 kubenswrapper[27820]: I0320 09:05:07.643332 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns-operator_dns-operator-9c5679d8f-xfns6_ff2dfe9d-2834-43cb-b093-0831b2b87131/dns-operator/0.log" Mar 20 09:05:07.653410 master-0 kubenswrapper[27820]: I0320 09:05:07.653378 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns-operator_dns-operator-9c5679d8f-xfns6_ff2dfe9d-2834-43cb-b093-0831b2b87131/kube-rbac-proxy/0.log" Mar 20 09:05:08.090930 master-0 kubenswrapper[27820]: I0320 09:05:08.090834 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-gskz6_41253bde-5d09-4ff0-8e7c-4a21fe2b7106/dns/0.log" Mar 20 09:05:08.104624 master-0 kubenswrapper[27820]: I0320 09:05:08.104580 27820 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-dns_dns-default-gskz6_41253bde-5d09-4ff0-8e7c-4a21fe2b7106/kube-rbac-proxy/0.log" Mar 20 09:05:08.120644 master-0 kubenswrapper[27820]: I0320 09:05:08.120576 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_node-resolver-j7ngf_1ae4d0d7-67e6-4e0c-9265-8e48ac2d4cbf/dns-node-resolver/0.log" Mar 20 09:05:08.563582 master-0 kubenswrapper[27820]: I0320 09:05:08.563526 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-8544cbcf9c-7x9vq_fec3170d-3f3e-42f5-b20a-da53721c0dac/etcd-operator/2.log" Mar 20 09:05:08.564537 master-0 kubenswrapper[27820]: I0320 09:05:08.564156 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-8544cbcf9c-7x9vq_fec3170d-3f3e-42f5-b20a-da53721c0dac/etcd-operator/1.log" Mar 20 09:05:09.022936 master-0 kubenswrapper[27820]: I0320 09:05:09.022898 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcdctl/0.log" Mar 20 09:05:09.270874 master-0 kubenswrapper[27820]: I0320 09:05:09.270833 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd/0.log" Mar 20 09:05:09.283465 master-0 kubenswrapper[27820]: I0320 09:05:09.283365 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd-metrics/0.log" Mar 20 09:05:09.293006 master-0 kubenswrapper[27820]: I0320 09:05:09.292959 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd-readyz/0.log" Mar 20 09:05:09.310464 master-0 kubenswrapper[27820]: I0320 09:05:09.310336 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd-rev/0.log" Mar 20 09:05:09.326171 master-0 
kubenswrapper[27820]: I0320 09:05:09.326119 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/setup/0.log" Mar 20 09:05:09.340624 master-0 kubenswrapper[27820]: I0320 09:05:09.340580 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd-ensure-env-vars/0.log" Mar 20 09:05:09.352295 master-0 kubenswrapper[27820]: I0320 09:05:09.352243 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_094204df314fe45bd5af12ca1b4622bb/etcd-resources-copy/0.log" Mar 20 09:05:09.394307 master-0 kubenswrapper[27820]: I0320 09:05:09.394251 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-1-master-0_169353ee-c927-4483-8976-b9ca08b0a6d1/installer/0.log" Mar 20 09:05:09.429150 master-0 kubenswrapper[27820]: I0320 09:05:09.429086 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-2-master-0_26923e70-56a5-4020-8b55-510879ec6fd4/installer/0.log" Mar 20 09:05:10.026295 master-0 kubenswrapper[27820]: I0320 09:05:10.026226 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_cluster-image-registry-operator-5549dc66cb-cg8qr_57189f7c-5987-457d-a299-0a6b9bcb3e24/cluster-image-registry-operator/0.log" Mar 20 09:05:10.040246 master-0 kubenswrapper[27820]: I0320 09:05:10.040185 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_node-ca-xzw6l_c35544f3-7959-401e-81c1-05b4f29551d7/node-ca/0.log" Mar 20 09:05:10.564186 master-0 kubenswrapper[27820]: I0320 09:05:10.564122 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-dknxr_22f85e98-eb36-46b2-ab5d-7c21e060cba5/ingress-operator/4.log" Mar 20 09:05:10.568885 master-0 kubenswrapper[27820]: I0320 09:05:10.568845 27820 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-dknxr_22f85e98-eb36-46b2-ab5d-7c21e060cba5/ingress-operator/5.log" Mar 20 09:05:10.584426 master-0 kubenswrapper[27820]: I0320 09:05:10.584372 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-66b84d69b-dknxr_22f85e98-eb36-46b2-ab5d-7c21e060cba5/kube-rbac-proxy/0.log" Mar 20 09:05:11.048653 master-0 kubenswrapper[27820]: I0320 09:05:11.048607 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-canary_ingress-canary-vzrlt_4c9b8a0c-1ead-49b5-8b05-68e6f25ddefc/serve-healthcheck-canary/0.log" Mar 20 09:05:11.239979 master-0 kubenswrapper[27820]: I0320 09:05:11.239917 27820 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-must-gather-jhtnq/perf-node-gather-daemonset-vqbqr" Mar 20 09:05:11.256607 master-0 kubenswrapper[27820]: I0320 09:05:11.256533 27820 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-jhtnq/perf-node-gather-daemonset-vqbqr" podStartSLOduration=11.256515831 podStartE2EDuration="11.256515831s" podCreationTimestamp="2026-03-20 09:05:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 09:05:02.708502058 +0000 UTC m=+912.803711222" watchObservedRunningTime="2026-03-20 09:05:11.256515831 +0000 UTC m=+921.351724985" Mar 20 09:05:11.523922 master-0 kubenswrapper[27820]: I0320 09:05:11.523863 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-operator-68bf6ff9d6-c7zf4_6d62448d-55f1-4bdc-85aa-09e7bdf766cc/insights-operator/0.log" Mar 20 09:05:12.010986 master-0 kubenswrapper[27820]: I0320 09:05:12.010937 27820 log.go:25] "Finished parsing log file" 
path="/var/log/pods/cert-manager_cert-manager-545d4d4674-nld4c_184b1066-67c3-4648-b721-ff50069ebd67/cert-manager-controller/0.log" Mar 20 09:05:12.040611 master-0 kubenswrapper[27820]: I0320 09:05:12.040558 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-5545bd876-hjj29_623dd9f9-be57-431d-a5ae-28be094e138f/cert-manager-cainjector/0.log" Mar 20 09:05:12.052938 master-0 kubenswrapper[27820]: I0320 09:05:12.052885 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-6888856db4-sb8xw_4a0dad68-0868-49b6-a825-466de3548a78/cert-manager-webhook/0.log" Mar 20 09:05:12.980160 master-0 kubenswrapper[27820]: I0320 09:05:12.980103 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_dc81c5cb-439d-4c3d-8a34-70e9035a846c/alertmanager/0.log" Mar 20 09:05:12.994417 master-0 kubenswrapper[27820]: I0320 09:05:12.994386 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_dc81c5cb-439d-4c3d-8a34-70e9035a846c/config-reloader/0.log" Mar 20 09:05:13.009442 master-0 kubenswrapper[27820]: I0320 09:05:13.009402 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_dc81c5cb-439d-4c3d-8a34-70e9035a846c/kube-rbac-proxy-web/0.log" Mar 20 09:05:13.023752 master-0 kubenswrapper[27820]: I0320 09:05:13.023699 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_dc81c5cb-439d-4c3d-8a34-70e9035a846c/kube-rbac-proxy/0.log" Mar 20 09:05:13.037414 master-0 kubenswrapper[27820]: I0320 09:05:13.037361 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_dc81c5cb-439d-4c3d-8a34-70e9035a846c/kube-rbac-proxy-metric/0.log" Mar 20 09:05:13.049737 master-0 kubenswrapper[27820]: I0320 09:05:13.049657 27820 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-monitoring_alertmanager-main-0_dc81c5cb-439d-4c3d-8a34-70e9035a846c/prom-label-proxy/0.log" Mar 20 09:05:13.063166 master-0 kubenswrapper[27820]: I0320 09:05:13.063113 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_dc81c5cb-439d-4c3d-8a34-70e9035a846c/init-config-reloader/0.log" Mar 20 09:05:13.099245 master-0 kubenswrapper[27820]: I0320 09:05:13.099187 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_cluster-monitoring-operator-58845fbb57-6vgt6_5707066a-bd66-41bc-8cea-cff1630ab5ee/cluster-monitoring-operator/0.log" Mar 20 09:05:13.117661 master-0 kubenswrapper[27820]: I0320 09:05:13.117611 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_kube-state-metrics-7bbc969446-28l2x_44bc88d8-9e01-4521-a704-85d9ca095baa/kube-state-metrics/0.log" Mar 20 09:05:13.132660 master-0 kubenswrapper[27820]: I0320 09:05:13.132622 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_kube-state-metrics-7bbc969446-28l2x_44bc88d8-9e01-4521-a704-85d9ca095baa/kube-rbac-proxy-main/0.log" Mar 20 09:05:13.149431 master-0 kubenswrapper[27820]: I0320 09:05:13.149392 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_kube-state-metrics-7bbc969446-28l2x_44bc88d8-9e01-4521-a704-85d9ca095baa/kube-rbac-proxy-self/0.log" Mar 20 09:05:13.169041 master-0 kubenswrapper[27820]: I0320 09:05:13.168994 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_metrics-server-64c7dbd4b9-vtfnn_83bf09c7-7ad7-4f3c-8e55-b154af674183/metrics-server/0.log" Mar 20 09:05:13.185719 master-0 kubenswrapper[27820]: I0320 09:05:13.185669 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_monitoring-plugin-77688d7687-k82zc_124cba9b-2b7c-4e54-9061-a6949d168655/monitoring-plugin/0.log" Mar 20 09:05:13.207573 master-0 
kubenswrapper[27820]: I0320 09:05:13.207492 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-rzg98_123f1ecb-cc03-462b-b76f-7251bf69d3d6/node-exporter/0.log" Mar 20 09:05:13.220452 master-0 kubenswrapper[27820]: I0320 09:05:13.220105 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-rzg98_123f1ecb-cc03-462b-b76f-7251bf69d3d6/kube-rbac-proxy/0.log" Mar 20 09:05:13.237003 master-0 kubenswrapper[27820]: I0320 09:05:13.236959 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-rzg98_123f1ecb-cc03-462b-b76f-7251bf69d3d6/init-textfile/0.log" Mar 20 09:05:13.251823 master-0 kubenswrapper[27820]: I0320 09:05:13.251766 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_openshift-state-metrics-5dc6c74576-qclrg_d5b579a3-74b5-4cd2-ae99-5c1d1480c2f0/kube-rbac-proxy-main/0.log" Mar 20 09:05:13.265504 master-0 kubenswrapper[27820]: I0320 09:05:13.265464 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_openshift-state-metrics-5dc6c74576-qclrg_d5b579a3-74b5-4cd2-ae99-5c1d1480c2f0/kube-rbac-proxy-self/0.log" Mar 20 09:05:13.284951 master-0 kubenswrapper[27820]: I0320 09:05:13.284903 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_openshift-state-metrics-5dc6c74576-qclrg_d5b579a3-74b5-4cd2-ae99-5c1d1480c2f0/openshift-state-metrics/0.log" Mar 20 09:05:13.313667 master-0 kubenswrapper[27820]: I0320 09:05:13.313621 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_8c22afd8-ac59-47f2-83da-5efa9eea747a/prometheus/0.log" Mar 20 09:05:13.326750 master-0 kubenswrapper[27820]: I0320 09:05:13.326703 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_8c22afd8-ac59-47f2-83da-5efa9eea747a/config-reloader/0.log" Mar 20 09:05:13.340818 
master-0 kubenswrapper[27820]: I0320 09:05:13.340757 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_8c22afd8-ac59-47f2-83da-5efa9eea747a/thanos-sidecar/0.log"
Mar 20 09:05:13.353427 master-0 kubenswrapper[27820]: I0320 09:05:13.353390 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_8c22afd8-ac59-47f2-83da-5efa9eea747a/kube-rbac-proxy-web/0.log"
Mar 20 09:05:13.370202 master-0 kubenswrapper[27820]: I0320 09:05:13.370167 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_8c22afd8-ac59-47f2-83da-5efa9eea747a/kube-rbac-proxy/0.log"
Mar 20 09:05:13.389233 master-0 kubenswrapper[27820]: I0320 09:05:13.389144 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_8c22afd8-ac59-47f2-83da-5efa9eea747a/kube-rbac-proxy-thanos/0.log"
Mar 20 09:05:13.405835 master-0 kubenswrapper[27820]: I0320 09:05:13.405783 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_8c22afd8-ac59-47f2-83da-5efa9eea747a/init-config-reloader/0.log"
Mar 20 09:05:13.436400 master-0 kubenswrapper[27820]: I0320 09:05:13.435843 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-operator-6c8df6d4b-2lwqr_0ad95adc-2e0f-4e95-94e7-66e6d240a930/prometheus-operator/0.log"
Mar 20 09:05:13.448575 master-0 kubenswrapper[27820]: I0320 09:05:13.448467 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-operator-6c8df6d4b-2lwqr_0ad95adc-2e0f-4e95-94e7-66e6d240a930/kube-rbac-proxy/0.log"
Mar 20 09:05:13.465999 master-0 kubenswrapper[27820]: I0320 09:05:13.465956 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-operator-admission-webhook-69c6b55594-kh8bg_14ef046f-b284-457f-ad7a-b7958cb82dd5/prometheus-operator-admission-webhook/0.log"
Mar 20 09:05:13.491516 master-0 kubenswrapper[27820]: I0320 09:05:13.491465 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-69449d79f9-kr2pv_7f15ea03-44e8-40ae-959d-9cca287d76c9/telemeter-client/0.log"
Mar 20 09:05:13.506380 master-0 kubenswrapper[27820]: I0320 09:05:13.506344 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-69449d79f9-kr2pv_7f15ea03-44e8-40ae-959d-9cca287d76c9/reload/0.log"
Mar 20 09:05:13.519026 master-0 kubenswrapper[27820]: I0320 09:05:13.518992 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-69449d79f9-kr2pv_7f15ea03-44e8-40ae-959d-9cca287d76c9/kube-rbac-proxy/0.log"
Mar 20 09:05:13.539440 master-0 kubenswrapper[27820]: I0320 09:05:13.539403 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-7b58769b45-q7j7f_06e28acf-6ec7-4e0d-bb87-6577b30f7c35/thanos-query/0.log"
Mar 20 09:05:13.550286 master-0 kubenswrapper[27820]: I0320 09:05:13.550218 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-7b58769b45-q7j7f_06e28acf-6ec7-4e0d-bb87-6577b30f7c35/kube-rbac-proxy-web/0.log"
Mar 20 09:05:13.563056 master-0 kubenswrapper[27820]: I0320 09:05:13.563027 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-7b58769b45-q7j7f_06e28acf-6ec7-4e0d-bb87-6577b30f7c35/kube-rbac-proxy/0.log"
Mar 20 09:05:13.576253 master-0 kubenswrapper[27820]: I0320 09:05:13.576216 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-7b58769b45-q7j7f_06e28acf-6ec7-4e0d-bb87-6577b30f7c35/prom-label-proxy/0.log"
Mar 20 09:05:13.587413 master-0 kubenswrapper[27820]: I0320 09:05:13.587373 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-7b58769b45-q7j7f_06e28acf-6ec7-4e0d-bb87-6577b30f7c35/kube-rbac-proxy-rules/0.log"
Mar 20 09:05:13.608371 master-0 kubenswrapper[27820]: I0320 09:05:13.608320 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-7b58769b45-q7j7f_06e28acf-6ec7-4e0d-bb87-6577b30f7c35/kube-rbac-proxy-metrics/0.log"
Mar 20 09:05:14.944383 master-0 kubenswrapper[27820]: I0320 09:05:14.944322 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-z9frd_51eafdd1-74d2-4441-96c6-e5edd8705e55/controller/0.log"
Mar 20 09:05:14.956384 master-0 kubenswrapper[27820]: I0320 09:05:14.956335 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-z9frd_51eafdd1-74d2-4441-96c6-e5edd8705e55/kube-rbac-proxy/0.log"
Mar 20 09:05:14.978394 master-0 kubenswrapper[27820]: I0320 09:05:14.978330 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tl7kr_2854e640-52d9-4a65-8b6c-4bc273a80668/controller/0.log"
Mar 20 09:05:15.019033 master-0 kubenswrapper[27820]: I0320 09:05:15.018981 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tl7kr_2854e640-52d9-4a65-8b6c-4bc273a80668/frr/0.log"
Mar 20 09:05:15.034764 master-0 kubenswrapper[27820]: I0320 09:05:15.034712 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tl7kr_2854e640-52d9-4a65-8b6c-4bc273a80668/reloader/0.log"
Mar 20 09:05:15.048765 master-0 kubenswrapper[27820]: I0320 09:05:15.048716 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tl7kr_2854e640-52d9-4a65-8b6c-4bc273a80668/frr-metrics/0.log"
Mar 20 09:05:15.066017 master-0 kubenswrapper[27820]: I0320 09:05:15.064549 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tl7kr_2854e640-52d9-4a65-8b6c-4bc273a80668/kube-rbac-proxy/0.log"
Mar 20 09:05:15.084114 master-0 kubenswrapper[27820]: I0320 09:05:15.084069 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tl7kr_2854e640-52d9-4a65-8b6c-4bc273a80668/kube-rbac-proxy-frr/0.log"
Mar 20 09:05:15.098551 master-0 kubenswrapper[27820]: I0320 09:05:15.098509 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tl7kr_2854e640-52d9-4a65-8b6c-4bc273a80668/cp-frr-files/0.log"
Mar 20 09:05:15.110539 master-0 kubenswrapper[27820]: I0320 09:05:15.110497 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tl7kr_2854e640-52d9-4a65-8b6c-4bc273a80668/cp-reloader/0.log"
Mar 20 09:05:15.121130 master-0 kubenswrapper[27820]: I0320 09:05:15.121091 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-tl7kr_2854e640-52d9-4a65-8b6c-4bc273a80668/cp-metrics/0.log"
Mar 20 09:05:15.136434 master-0 kubenswrapper[27820]: I0320 09:05:15.136380 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-bcc4b6f68-pn2fv_bb8fd2f6-e697-42fa-8e7d-f5737a39f6e8/frr-k8s-webhook-server/0.log"
Mar 20 09:05:15.171772 master-0 kubenswrapper[27820]: I0320 09:05:15.171730 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6485fcfd64-dfmlb_df2f3936-c47f-46f1-acb5-0af23bf9bf1c/manager/0.log"
Mar 20 09:05:15.189632 master-0 kubenswrapper[27820]: I0320 09:05:15.189589 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6465fb44b7-42g97_21e7b181-5c6b-433e-adf7-ebe2d7b45aa7/webhook-server/0.log"
Mar 20 09:05:15.281922 master-0 kubenswrapper[27820]: I0320 09:05:15.281585 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-p2fhx_f01da03b-1f5e-4ade-a4f9-e0dac32eb142/speaker/0.log"
Mar 20 09:05:15.293931 master-0 kubenswrapper[27820]: I0320 09:05:15.293867 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-p2fhx_f01da03b-1f5e-4ade-a4f9-e0dac32eb142/kube-rbac-proxy/0.log"
Mar 20 09:05:16.417541 master-0 kubenswrapper[27820]: I0320 09:05:16.417440 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-598fbc5f8f-zxgdk_6d26f719-43b9-4c1c-9a54-ff800177db68/cluster-node-tuning-operator/0.log"
Mar 20 09:05:16.453018 master-0 kubenswrapper[27820]: I0320 09:05:16.452932 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_tuned-zgm52_97ad1db7-0bf9-4faf-9fa5-0f3df7dab777/tuned/0.log"
Mar 20 09:05:17.177553 master-0 kubenswrapper[27820]: I0320 09:05:17.177491 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-8ff7d675-wptls_dd70ba1c-6a56-40ba-bdbc-25d0479b56c8/prometheus-operator/0.log"
Mar 20 09:05:17.199242 master-0 kubenswrapper[27820]: I0320 09:05:17.199170 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-bbbbf679-lxm5t_7ff38664-87a9-4803-aae6-6c3f31a68cb4/prometheus-operator-admission-webhook/0.log"
Mar 20 09:05:17.217516 master-0 kubenswrapper[27820]: I0320 09:05:17.217463 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-bbbbf679-q9r4d_744c7bbe-2db8-4667-8e23-aaf4bee66a24/prometheus-operator-admission-webhook/0.log"
Mar 20 09:05:17.238023 master-0 kubenswrapper[27820]: I0320 09:05:17.237971 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-6dd7dd855f-q9drt_956c697c-5335-4400-890b-bb8d2a9756d5/operator/0.log"
Mar 20 09:05:17.255998 master-0 kubenswrapper[27820]: I0320 09:05:17.255954 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-9d56b9f9d-lplkg_5787e9b7-491a-4825-a336-949d4dca2dca/perses-operator/0.log"
Mar 20 09:05:17.465864 master-0 kubenswrapper[27820]: I0320 09:05:17.465717 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-86f58fcf4-frpgr_ef2cb375-2652-47d1-bf48-a5411ff51a2c/nmstate-console-plugin/0.log"
Mar 20 09:05:17.487536 master-0 kubenswrapper[27820]: I0320 09:05:17.487496 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-7c6kf_23a22e55-9f4f-4f31-81c5-328720dee978/nmstate-handler/0.log"
Mar 20 09:05:17.499011 master-0 kubenswrapper[27820]: I0320 09:05:17.498971 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-d25js_4ef5015e-1e99-4f9e-ba7c-59b462ff2188/nmstate-metrics/0.log"
Mar 20 09:05:17.507749 master-0 kubenswrapper[27820]: I0320 09:05:17.507706 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-d25js_4ef5015e-1e99-4f9e-ba7c-59b462ff2188/kube-rbac-proxy/0.log"
Mar 20 09:05:17.520653 master-0 kubenswrapper[27820]: I0320 09:05:17.520618 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-796d4cfff4-wv4sj_1f68b1a3-e1e0-47e5-baa6-14c6b8e34e3f/nmstate-operator/0.log"
Mar 20 09:05:17.555163 master-0 kubenswrapper[27820]: I0320 09:05:17.555114 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f558f5558-mvkrl_58d0dca7-7d2f-4601-95a1-377c982d2d41/nmstate-webhook/0.log"
Mar 20 09:05:18.550862 master-0 kubenswrapper[27820]: I0320 09:05:18.550749 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-8b68b9d9b-xwkzx_2faf85a2-29bb-4275-a12b-0ef1663a4f0d/kube-apiserver-operator/1.log"
Mar 20 09:05:18.558397 master-0 kubenswrapper[27820]: I0320 09:05:18.558354 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-8b68b9d9b-xwkzx_2faf85a2-29bb-4275-a12b-0ef1663a4f0d/kube-apiserver-operator/2.log"
Mar 20 09:05:19.220802 master-0 kubenswrapper[27820]: I0320 09:05:19.220743 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_cce21ae1-63de-49be-a027-084a101e650b/installer/0.log"
Mar 20 09:05:19.238103 master-0 kubenswrapper[27820]: I0320 09:05:19.238062 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-2-master-0_5cdd5ac8-4c2e-4680-b697-0e5d94136fe4/installer/0.log"
Mar 20 09:05:19.256874 master-0 kubenswrapper[27820]: I0320 09:05:19.256832 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-3-master-0_9775cc27-53b9-4d21-a98b-84b39ada32ee/installer/0.log"
Mar 20 09:05:19.278604 master-0 kubenswrapper[27820]: I0320 09:05:19.278556 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-5-master-0_78ae02f0-5d31-4fda-a63a-534f60df5d1f/installer/0.log"
Mar 20 09:05:19.457990 master-0 kubenswrapper[27820]: I0320 09:05:19.457942 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_d5f502b117c7c8479f7f20848a50fec0/kube-apiserver/0.log"
Mar 20 09:05:19.467735 master-0 kubenswrapper[27820]: I0320 09:05:19.467698 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_d5f502b117c7c8479f7f20848a50fec0/kube-apiserver-cert-syncer/0.log"
Mar 20 09:05:19.481729 master-0 kubenswrapper[27820]: I0320 09:05:19.481650 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_d5f502b117c7c8479f7f20848a50fec0/kube-apiserver-cert-regeneration-controller/0.log"
Mar 20 09:05:19.492684 master-0 kubenswrapper[27820]: I0320 09:05:19.492646 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_d5f502b117c7c8479f7f20848a50fec0/kube-apiserver-insecure-readyz/0.log"
Mar 20 09:05:19.507593 master-0 kubenswrapper[27820]: I0320 09:05:19.507552 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_d5f502b117c7c8479f7f20848a50fec0/kube-apiserver-check-endpoints/0.log"
Mar 20 09:05:19.517842 master-0 kubenswrapper[27820]: I0320 09:05:19.517797 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_d5f502b117c7c8479f7f20848a50fec0/setup/0.log"
Mar 20 09:05:20.166584 master-0 kubenswrapper[27820]: I0320 09:05:20.166543 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-6864dc98f7-tf2gj_08d9196b-b68f-421b-8754-bfbaa4020a97/kube-rbac-proxy/0.log"
Mar 20 09:05:20.182597 master-0 kubenswrapper[27820]: I0320 09:05:20.182523 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-6864dc98f7-tf2gj_08d9196b-b68f-421b-8754-bfbaa4020a97/manager/1.log"
Mar 20 09:05:20.182597 master-0 kubenswrapper[27820]: I0320 09:05:20.182538 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-6864dc98f7-tf2gj_08d9196b-b68f-421b-8754-bfbaa4020a97/manager/2.log"
Mar 20 09:05:20.671996 master-0 kubenswrapper[27820]: I0320 09:05:20.671949 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-545d4d4674-nld4c_184b1066-67c3-4648-b721-ff50069ebd67/cert-manager-controller/0.log"
Mar 20 09:05:20.693062 master-0 kubenswrapper[27820]: I0320 09:05:20.693017 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-5545bd876-hjj29_623dd9f9-be57-431d-a5ae-28be094e138f/cert-manager-cainjector/0.log"
Mar 20 09:05:20.714825 master-0 kubenswrapper[27820]: I0320 09:05:20.714760 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-6888856db4-sb8xw_4a0dad68-0868-49b6-a825-466de3548a78/cert-manager-webhook/0.log"
Mar 20 09:05:21.162331 master-0 kubenswrapper[27820]: I0320 09:05:21.162243 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-86f58fcf4-frpgr_ef2cb375-2652-47d1-bf48-a5411ff51a2c/nmstate-console-plugin/0.log"
Mar 20 09:05:21.183857 master-0 kubenswrapper[27820]: I0320 09:05:21.183807 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-7c6kf_23a22e55-9f4f-4f31-81c5-328720dee978/nmstate-handler/0.log"
Mar 20 09:05:21.202874 master-0 kubenswrapper[27820]: I0320 09:05:21.202829 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-d25js_4ef5015e-1e99-4f9e-ba7c-59b462ff2188/nmstate-metrics/0.log"
Mar 20 09:05:21.214673 master-0 kubenswrapper[27820]: I0320 09:05:21.214627 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-d25js_4ef5015e-1e99-4f9e-ba7c-59b462ff2188/kube-rbac-proxy/0.log"
Mar 20 09:05:21.233391 master-0 kubenswrapper[27820]: I0320 09:05:21.233344 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-796d4cfff4-wv4sj_1f68b1a3-e1e0-47e5-baa6-14c6b8e34e3f/nmstate-operator/0.log"
Mar 20 09:05:21.247346 master-0 kubenswrapper[27820]: I0320 09:05:21.247288 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f558f5558-mvkrl_58d0dca7-7d2f-4601-95a1-377c982d2d41/nmstate-webhook/0.log"
Mar 20 09:05:21.783125 master-0 kubenswrapper[27820]: I0320 09:05:21.783066 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-x7vrg_22ff82cf-0d7d-4955-9b7c-97757acbc021/kube-multus-additional-cni-plugins/0.log"
Mar 20 09:05:21.794496 master-0 kubenswrapper[27820]: I0320 09:05:21.794454 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-x7vrg_22ff82cf-0d7d-4955-9b7c-97757acbc021/egress-router-binary-copy/0.log"
Mar 20 09:05:21.814095 master-0 kubenswrapper[27820]: I0320 09:05:21.814044 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-x7vrg_22ff82cf-0d7d-4955-9b7c-97757acbc021/cni-plugins/0.log"
Mar 20 09:05:21.827822 master-0 kubenswrapper[27820]: I0320 09:05:21.827389 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-x7vrg_22ff82cf-0d7d-4955-9b7c-97757acbc021/bond-cni-plugin/0.log"
Mar 20 09:05:21.838578 master-0 kubenswrapper[27820]: I0320 09:05:21.838533 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-x7vrg_22ff82cf-0d7d-4955-9b7c-97757acbc021/routeoverride-cni/0.log"
Mar 20 09:05:21.852052 master-0 kubenswrapper[27820]: I0320 09:05:21.852016 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-x7vrg_22ff82cf-0d7d-4955-9b7c-97757acbc021/whereabouts-cni-bincopy/0.log"
Mar 20 09:05:21.863312 master-0 kubenswrapper[27820]: I0320 09:05:21.863245 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-x7vrg_22ff82cf-0d7d-4955-9b7c-97757acbc021/whereabouts-cni/0.log"
Mar 20 09:05:21.878695 master-0 kubenswrapper[27820]: I0320 09:05:21.878646 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-58c9f8fc64-kr9hd_a88b1c81-02b5-4c85-9660-5f84c900a946/multus-admission-controller/0.log"
Mar 20 09:05:21.889635 master-0 kubenswrapper[27820]: I0320 09:05:21.889579 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-58c9f8fc64-kr9hd_a88b1c81-02b5-4c85-9660-5f84c900a946/kube-rbac-proxy/0.log"
Mar 20 09:05:21.960332 master-0 kubenswrapper[27820]: I0320 09:05:21.960292 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-pxqwj_7949621e-4da6-4e43-a1f3-2ef303bf6aa6/kube-multus/0.log"
Mar 20 09:05:21.981926 master-0 kubenswrapper[27820]: I0320 09:05:21.981872 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-nfrth_00350ac7-b40a-4459-b94c-a37d7b613645/network-metrics-daemon/0.log"
Mar 20 09:05:22.038940 master-0 kubenswrapper[27820]: I0320 09:05:22.038831 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-nfrth_00350ac7-b40a-4459-b94c-a37d7b613645/kube-rbac-proxy/0.log"
Mar 20 09:05:22.585435 master-0 kubenswrapper[27820]: I0320 09:05:22.585382 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_lvms-operator-7d8cc545d-7wshw_82dee58e-70ab-4181-a0de-fc61333727d9/manager/0.log"
Mar 20 09:05:22.605041 master-0 kubenswrapper[27820]: I0320 09:05:22.604976 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_vg-manager-vfwrb_248caf38-da29-4afe-b566-cb5b9d718797/vg-manager/1.log"
Mar 20 09:05:22.606657 master-0 kubenswrapper[27820]: I0320 09:05:22.606614 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_vg-manager-vfwrb_248caf38-da29-4afe-b566-cb5b9d718797/vg-manager/0.log"
Mar 20 09:05:23.514787 master-0 kubenswrapper[27820]: I0320 09:05:23.514703 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-2-master-0_3ea52b89-46f9-4685-aecd-162ba92baaf5/installer/0.log"
Mar 20 09:05:24.973495 master-0 kubenswrapper[27820]: I0320 09:05:24.973441 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-8ff7d675-wptls_dd70ba1c-6a56-40ba-bdbc-25d0479b56c8/prometheus-operator/0.log"
Mar 20 09:05:24.978014 master-0 kubenswrapper[27820]: I0320 09:05:24.977970 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-3-master-0_fae0c983-2cb4-4749-97ff-a718a9fb6563/installer/0.log"
Mar 20 09:05:24.987100 master-0 kubenswrapper[27820]: I0320 09:05:24.987045 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-bbbbf679-lxm5t_7ff38664-87a9-4803-aae6-6c3f31a68cb4/prometheus-operator-admission-webhook/0.log"
Mar 20 09:05:25.002935 master-0 kubenswrapper[27820]: I0320 09:05:25.001246 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-bbbbf679-q9r4d_744c7bbe-2db8-4667-8e23-aaf4bee66a24/prometheus-operator-admission-webhook/0.log"
Mar 20 09:05:25.005144 master-0 kubenswrapper[27820]: I0320 09:05:25.004887 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-3-retry-1-master-0_75cef5aa-93e6-4b8b-9ab1-06809e85883a/installer/0.log"
Mar 20 09:05:25.043365 master-0 kubenswrapper[27820]: I0320 09:05:25.042763 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-6dd7dd855f-q9drt_956c697c-5335-4400-890b-bb8d2a9756d5/operator/0.log"
Mar 20 09:05:25.063511 master-0 kubenswrapper[27820]: I0320 09:05:25.063460 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-9d56b9f9d-lplkg_5787e9b7-491a-4825-a336-949d4dca2dca/perses-operator/0.log"
Mar 20 09:05:25.070091 master-0 kubenswrapper[27820]: I0320 09:05:25.070050 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-4-master-0_e7c15c64-0760-4f92-93f4-294b46732974/installer/0.log"
Mar 20 09:05:25.234433 master-0 kubenswrapper[27820]: I0320 09:05:25.234318 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_81441e014342eafdc07cc934660f5a5b/kube-controller-manager/0.log"
Mar 20 09:05:25.281791 master-0 kubenswrapper[27820]: I0320 09:05:25.281749 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_81441e014342eafdc07cc934660f5a5b/cluster-policy-controller/0.log"
Mar 20 09:05:25.292902 master-0 kubenswrapper[27820]: I0320 09:05:25.292854 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_81441e014342eafdc07cc934660f5a5b/kube-controller-manager-cert-syncer/0.log"
Mar 20 09:05:25.317123 master-0 kubenswrapper[27820]: I0320 09:05:25.316947 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_81441e014342eafdc07cc934660f5a5b/kube-controller-manager-recovery-controller/0.log"
Mar 20 09:05:25.985546 master-0 kubenswrapper[27820]: I0320 09:05:25.985418 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-ff989d6cc-wbfrm_71ca96e8-5108-455c-bb3c-17977d38e912/kube-controller-manager-operator/1.log"
Mar 20 09:05:26.013352 master-0 kubenswrapper[27820]: I0320 09:05:26.013248 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-ff989d6cc-wbfrm_71ca96e8-5108-455c-bb3c-17977d38e912/kube-controller-manager-operator/2.log"
Mar 20 09:05:27.148407 master-0 kubenswrapper[27820]: I0320 09:05:27.148359 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_84b1b51a-cbfa-42de-9fb8-315e9cb76b58/installer/0.log"
Mar 20 09:05:27.165436 master-0 kubenswrapper[27820]: I0320 09:05:27.165387 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-4-master-0_92600726-933f-41eb-a329-1fcc68dc95c1/installer/0.log"
Mar 20 09:05:27.208392 master-0 kubenswrapper[27820]: I0320 09:05:27.208341 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-4-retry-1-master-0_521086da-d513-4475-8db5-098ab9838df1/installer/0.log"
Mar 20 09:05:27.235755 master-0 kubenswrapper[27820]: I0320 09:05:27.235707 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_8413125cf444e5c95f023c5dd9c6151e/kube-scheduler/0.log"
Mar 20 09:05:27.247137 master-0 kubenswrapper[27820]: I0320 09:05:27.247069 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_8413125cf444e5c95f023c5dd9c6151e/kube-scheduler-cert-syncer/0.log"
Mar 20 09:05:27.259477 master-0 kubenswrapper[27820]: I0320 09:05:27.259410 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_8413125cf444e5c95f023c5dd9c6151e/kube-scheduler-recovery-controller/0.log"
Mar 20 09:05:27.269497 master-0 kubenswrapper[27820]: I0320 09:05:27.269454 27820 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_8413125cf444e5c95f023c5dd9c6151e/wait-for-host-port/0.log"